Background

UI Program Administration and Oversight

The UI program is a federal-state partnership: its framework is established by federal law, which sets broad requirements that state programs must follow, including categories of workers that must be covered by the program. While states design and administer their own programs, DOL’s Employment and Training Administration (ETA) is responsible for ensuring that state UI laws conform to, and state program operations comply with, federal law. For example, as required by the Social Security Act, ETA is to ensure state laws include provisions that allow for full payment of UI benefits when they are due. ETA also sets overall program policy, monitors state performance, and provides states with technical assistance. For example, ETA sets acceptable levels of performance and monitors states on measures related to benefits, program integrity, appeals, tax, and reemployment (see appendix II). ETA oversees state UI programs through its Office of Unemployment Insurance (OUI) and its six regional offices. The regional offices are the states’ main points of contact with DOL and serve as a link between the department and the states for providing technical assistance and clarifying program policies, objectives, and priorities.

State UI Programs, Claims Processes, and Benefits

While federal law sets forth broad provisions for the categories of workers that must be covered by the program, some benefit provisions, and certain administrative requirements, the specifics of regular UI benefits are determined by each state and the District of Columbia. This results in essentially 51 different programs. States administer their own programs and have considerable flexibility to set benefit amounts and their duration, and establish eligibility requirements and other program details. States are also responsible for customer service in their UI programs. For a general overview of the process for filing UI claims, see fig. 1.
In most states, eligible unemployed workers can typically receive UI benefits for up to 26 weeks. In addition to state UI benefits, the Federal-State Extended Benefit Program provides additional weeks of benefits during periods of high and rising unemployment.

UI Program Funding

UI programs are generally funded by federal and state payroll taxes levied on employers. State benefits for unemployed workers are primarily financed by state employer payroll taxes and are placed in a trust fund that the federal government maintains on behalf of states. Ideally, states build reserves in their trust fund accounts through revenue from employer taxes during periods of economic expansion in order to pay UI benefits during economic downturns. Federal taxes paid by employers are used to fund the costs of administering UI programs in all states, among other purposes. As part of the President’s Budget, DOL uses a combination of national claims-related workload projections and other factors to develop the request for UI administrative funding for states. After Congress appropriates funds, DOL uses a formula to allocate the funding to states that considers state workload estimates, as well as other information provided by states, such as cost accounting information. Since funding is calculated in part based on claims-related workloads, the federal funding available to states is generally sensitive to changes in total claims, with more funding available when claims increase and less when they decrease. States may also receive additional federal administrative funding above base levels on a quarterly basis when claims-related workloads exceed base funding levels. ETA may also award supplemental funds to states for special UI projects, such as supporting state efforts to reduce improper payments. In addition, states may provide additional state funding for the administration of their UI programs.
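The workload-driven mechanics described above, a base allocation proportional to projected claims workloads plus quarterly above-base funding when actual workloads exceed projections, can be illustrated with a simplified sketch. This is not DOL’s actual formula (which also weighs cost accounting data and other factors); the state names, workloads, appropriation amount, and per-claim rate below are hypothetical.

```python
# Simplified, hypothetical illustration of workload-proportional administrative
# funding. NOT DOL's actual allocation formula.

def allocate_base(total_appropriation, projected_workloads):
    """Split the appropriation in proportion to each state's projected claims workload."""
    total = sum(projected_workloads.values())
    return {state: total_appropriation * w / total
            for state, w in projected_workloads.items()}

def above_base(base_grants, projected, actual, rate_per_claim):
    """Quarterly supplement for claims processed beyond the projected workload."""
    return {state: max(0, actual[state] - projected[state]) * rate_per_claim
            for state in base_grants}

projected = {"State A": 100_000, "State B": 50_000, "State C": 50_000}
actual = {"State A": 130_000, "State B": 45_000, "State C": 50_000}

base = allocate_base(2_000_000, projected)      # State A projects half the workload
extra = above_base(base, projected, actual, 5)  # only State A exceeded its projection

print(base["State A"])   # 1000000.0
print(extra["State A"])  # 150000  (30,000 extra claims x 5 per claim)
print(extra["State B"])  # 0       (workload fell short of projection)
```

The sketch also shows why funding is "sensitive to changes in total claims": when a state’s claims fall, both its share of the base allocation and its above-base supplement shrink.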
UI Information Technology Modernization

State UI programs rely extensively on Information Technology (IT) systems to carry out their UI program functions. Specifically, IT systems are used to administer the programs and to support related administrative needs. For example, these systems are used to support benefit eligibility determinations, record claimant filing information, and calculate benefit amounts. The majority of the states’ existing systems for UI operations were, however, developed in the 1970s and 1980s, and we previously reported that states face challenges modernizing these legacy systems to operate more efficiently. These challenges included limited staff with the technical and project management expertise to manage system modernization efforts, limited funding for modernization efforts, and limited resources for operating legacy systems while implementing modernized systems.

Customer Service Practices and Standards

In recognition of the importance of service delivery in effective governance, in April 2011 the White House issued Executive Order 13571—Streamlining Service Delivery and Improving Customer Service—which required federal agencies to, among other things, work with the Office of Management and Budget (OMB) to develop a customer service plan for federally-administered programs, establish mechanisms to solicit customer feedback, and improve customer service by adopting best practices. Agencies’ plans were required to address approximately three to five of their highest volume services. In response, DOL developed a customer service plan which includes workers in federal contracting, worker safety and health, and worker rights; however, it does not cover the UI program because it is administered by states.
According to DOL officials, while agencies were not required to update their plans and DOL has not done so, a committee report accompanying the Consolidated Appropriations Act, 2016, directed OMB to report in March 2016 on agencies’ progress in developing customer service standards and incorporating them into their performance plans. In a 2010 report on federal agency customer service standards, we identified several examples of quality customer service, including understanding customers’ needs and organizing services around those needs, offering self-service options, and providing citizens with the information necessary to hold government accountable for customer service performance. In 2014, we reviewed customer service standards for federally-administered programs at five federal agencies and found that none of the agencies’ standards included all the key elements of customer service standards, such as performance targets or goals and performance measures.

Claimants Faced Challenges Filing Claims by Phone and Accessing Services in Other Languages, and Most States Collect Some Information on Claimant Challenges

Claimants in Our Focus Groups Reported Experiencing Long Call Wait Times and Difficulty Using Automated Phone Systems

Claimant Quote: “It’s pretty much been the same, the same problems as in 2009 when I first applied, specifically, the wait times, trying to get a hold of someone taking so long. I usually am on the phone for 20, 30 minutes, get frustrated, hang up, call back another day.”

Claimant Quote: “So then I call them on Monday, but I can’t get through until Wednesday or Thursday or Friday or sometimes the whole week.”

Claimants in our focus groups reported waiting a long time—sometimes over days or weeks—before reaching a representative, or said it took several days for a representative to call them back or respond to their email messages. For example, claimants told us that when they missed a call from a representative, it was difficult to reach them to return the call.
One claimant reported making 15 attempts to leave a message because a representative’s voice mailbox was full.

Claimant Quote: “So you’ve already got three prompts before you even get to one that will allow you to ask questions. And then that prompt has two or three prompts. So you probably go through six before you get to a line that says we have a heavy volume and we can’t service you now and click.”

In all six focus groups, claimants told us they had difficulty using automated phone systems, and in five groups, claimants said they were unable to reach program representatives by phone. Claimants said they had difficulty using automated phone systems because phone menus were long and complex, involving numerous steps, or because it was easy to make mistakes and difficult to fix them. For example, claimants told us that if they selected the wrong option from the menu, they were forced to start over. One claimant said it took 40 minutes to file a claim after accidentally pressing the wrong button. In other examples, claimants told us they had to hang up—or the system ended the call—after they reached a “dead end” and were unable to return to the previous menu. In addition, in five groups, claimants told us they were unable to reach program representatives by phone. For example, claimants said that when they called to file a claim, they received a message indicating that the phone system was at full capacity and could not accept additional calls.

Claimant Quote: “I haven’t actually had anyone that has been mean. I have spoken to people that were stressed and overworked, but for the most part they were as helpful and kind as they could be under the circumstances that they had to work with. Especially when I was falling apart.”

Claimant Quote: “I would say maybe 70 percent of the time it’s somebody that is new and I feel like I am educating them.”

Claimants’ opinions of program representatives were mixed. For example, one claimant said a representative gave helpful advice about the best day of the week to file a claim.
However, other claimants said representatives seemed to lack knowledge or training, or that different representatives provided conflicting information. Also, in four of six groups, some claimants said representatives were courteous, while others said they were not.

Claimant Quote: “I felt like it was easy. I just found the website and put in pretty basic stuff and it pulled up all my employers that I have had and the earnings and everything was already in there—I didn’t have to enter any of that. They already knew. And then I got the stuff in the mail a few weeks later, pretty quick.”

In contrast, in all six focus groups, claimants who filed their claims online said it was generally easy to do so, though they occasionally experienced system disruptions. For example, claimants described the Internet filing process as “quick,” “user-friendly,” and “convenient,” and said they found certain features helpful, such as the ability to view their past and future benefit payments and print out information about their claims. In addition, claimants said they preferred filing online to filing by phone because it was faster, or because they were less likely to make mistakes. Less frequently, claimants in five groups told us that they experienced challenges filing online. For example, claimants said that some state UI program websites only accepted claims during business hours or were temporarily unavailable due to system outages.

Claimant Quote: “The online works well. The amount of information, if you actually spend the time to look through all of the information, it answers many more questions than they would have time for on the phone.”

In all six focus groups, claimants had mixed opinions about the sufficiency of the information provided by their state UI programs.
For example, in all six groups, some claimants reported that the state UI program website or written program materials provided answers to their questions—such as next steps in the application process, or the amount of their benefit payment and when they could expect to receive it. However, others reported that the UI program website or written materials did not answer their questions—such as how to calculate their quarterly wages, or what activities qualify as “work.” In five of six groups, claimants reported that these sources did not provide answers to questions about their unique or complex situations, such as how medical leave or severance pay affected their claims, or how to report income from multiple jobs.

Advocates Reported Individuals with Limited English Proficiency and Other Special Populations Have Had Difficulty Accessing the Program

In addition to the challenges reported by recent UI claimants, advocacy groups told us that claimants with limited English proficiency have experienced challenges accessing translated materials and program representatives who speak their language when filing claims. For example, in one state, advocates told us that program materials are available in a limited number of languages, despite the many languages spoken in the state. In addition, advocates in two states said that program representatives speak a limited variety of languages, which can make it difficult for claimants to provide additional information after they file claims. Advocates noted that these challenges can be particularly difficult for claimants who speak languages that are less common in their states. For example, in one state, advocates said that an Arabic-speaking claimant did not receive benefits for over 3 months because of delays he experienced in receiving translated application materials and obtaining an interpreter for his appeal hearing.
In addition, advocacy groups in two states told us that the quality of translated materials—as well as the quality of interpretation—is sometimes poor, which can contribute to benefit delays or result in erroneous determinations that claimants are not eligible for benefits. For example, advocates in one state said that program representatives have sometimes incorrectly translated the reason for a claimant’s separation from employment, which has resulted in claimants being deemed ineligible for benefits. Furthermore, advocates in two states noted that when claimants with limited English proficiency appeal these eligibility determinations, the interpretation provided at the hearings is sometimes of poor quality, and that it is sometimes difficult to even find an interpreter. In addition to language challenges, advocacy groups in two states told us that special populations of claimants, such as individuals with limited English proficiency and individuals with disabilities, also face challenges filing claims over the Internet or by phone because it can be difficult for them to communicate without face-to-face contact. Given these challenges, advocates said it is important for state UI programs to ensure that these special populations and individuals without access to computers or phones are able to file claims and ask questions in person. In our survey, most states reported that they have taken various steps to make their UI programs accessible to special populations, for example, by offering assistance in languages other than English and by making their websites accessible for individuals with disabilities.

Most States Reported Collecting Some Information on Customer Service Challenges Experienced by Claimants

Most states collect some data on the extent to which UI claimants experience customer service challenges when filing claims by phone and over the Internet.
In our survey of state UI programs, 39 of the 43 states that allow unemployed workers to file for UI benefits by phone (91 percent) reported that they collect some related customer service information, such as whether claimants are able to reach program representatives by phone in order to file claims or ask questions. For example, 38 states (88 percent)—including the three we visited—reported collecting data on the total number of calls answered by program representatives and average call wait times. In addition, 26 states (60 percent)—including two of the states we visited—reported collecting data on the number or percentage of calls dropped by automated phone systems (see table 1). As noted above, long call wait times, difficulties reaching program representatives by phone, and difficulties using automated phone systems were cited as challenges by claimants in most of our focus groups. Furthermore, of the 48 states that allow unemployed workers to file claims over the Internet, 39 (81 percent) collect customer service data for these claims. Over two-fifths of states (21 states, or 44 percent) reported monitoring how often claimants are unable to complete filings due to system disruptions, such as website timeouts or crashes, and over half (28 states, or 58 percent) conduct usability testing for their UI program websites. While some states reported using this data to make changes to customer service, officials in two states we visited said that funding constraints may make it challenging to implement these changes. In our survey, 28 of the 43 states that allow claims to be filed by phone (65 percent) reported that they have used data on these claims to make changes to customer service for UI claimants. For example, officials in one state we visited said that they use the data to adjust the number of program representatives available to answer calls and to forecast future claim volumes. 
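The phone metrics states most commonly reported collecting—calls answered by representatives, average call wait times, and calls dropped by the automated system—can be derived from routine call-center logs. A minimal sketch of how such metrics might be computed; the log format, field names, and data below are hypothetical, not drawn from any state’s actual system.

```python
# Hypothetical call-center log: each record has a wait time in seconds and an
# outcome ("answered" by a representative, "abandoned" by the caller, or
# "dropped" by the automated phone system). Illustrative data only.

calls = [
    {"wait_sec": 95,  "outcome": "answered"},
    {"wait_sec": 640, "outcome": "abandoned"},
    {"wait_sec": 30,  "outcome": "dropped"},
    {"wait_sec": 210, "outcome": "answered"},
]

answered = [c for c in calls if c["outcome"] == "answered"]
total_answered = len(answered)
avg_wait = sum(c["wait_sec"] for c in answered) / total_answered
dropped_pct = 100 * sum(c["outcome"] == "dropped" for c in calls) / len(calls)

print(total_answered)  # 2
print(avg_wait)        # 152.5  -> average wait of answered calls, in seconds
print(dropped_pct)     # 25.0   -> share of calls dropped by the phone system
```

Averaging wait times over answered calls only, as above, is one design choice; counting abandoned calls separately (as 26 states reported doing for dropped calls) surfaces the callers the averages would otherwise hide.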
In addition, 32 of the 48 states that responded to our survey (67 percent) reported collecting feedback directly from claimants. The two most commonly reported methods for soliciting feedback, according to the states we surveyed, were surveys and the UI program website. For example, program officials in one state we visited told us they conduct annual and ad hoc surveys of claimants to assess their experiences in filing claims. Officials we met with in another state told us that while they do not collect systematic feedback from claimants, they do solicit informal feedback via social media and their website.

Many States Reported Facing Challenges, and Most Have Taken Some Steps to Improve Customer Service, Such as Increasing Self-Service

States responding to our survey reported facing various customer service challenges—such as staffing, outdated IT systems, and funding constraints—which may help explain some of the challenges reported by claimants in our focus groups. While these issues were reported by many states, they were generally reported by fewer states during the last 12 months than during the recent recession.

Many States Reported Facing Challenges Related to Staffing, Outdated IT Systems, and Funding Constraints

Staffing Challenges

Although 43 states reported that claims can be filed by phone, 4 of these states did not respond to questions about phone-related challenges in our survey. Fewer of the 48 responding states cited staff turnover as a challenge to at least a moderate extent during the last 12 months than during the recession, when 27 states (56 percent) reported facing similar challenges. Staff retention challenges may reflect the complexity of the job, which was noted by UI officials in two states we visited. Also, 12 of 48 states (25 percent) reported staff training and expertise as a challenge to at least a moderate extent in the last 12 months, as compared to 28 states (58 percent) during the recession.
While fewer states reported that staff training was a challenge recently than during the recession, it remains challenging due to the complexity of the job and the length of time required to adequately train new hires, according to officials in two states. Outdated IT systems are another challenge for many states, according to our survey. Specifically, 29 of 48 states (60 percent) reported that their current IT systems have significant limitations. These limitations can include legacy mainframe technology from the 1970s and 1980s or outdated programming languages that have implications for state programs’ ability to efficiently process claims and serve claimants. Moreover, 26 states (54 percent) reported that their IT systems were a challenge to a large or moderate extent during the last 12 months, compared to the 34 states (71 percent) that reported these systems were a challenge to at least a moderate extent during the recession (see fig. 3). In the aftermath of the recession, according to state UI program officials, frequent reprogramming of their IT systems was required, in part, to reflect changes in a temporary federal UI benefit program. Additionally, officials in one state we visited explained that their outdated system continues to present challenges because UI program staff must check multiple systems for information on claims, which can lead to errors in processing claims. Additionally, while several states that responded to our survey reported that they were planning to modernize their IT systems, officials we spoke to in all three of our site visit states explained that they faced challenges in fully modernizing their systems. In our survey, 19 of 48 states (40 percent) reported that they have implemented a modernized IT system. Of the remaining 29 that had not yet done so, 16 indicated that their state plans to implement a modernized IT system by 2019. 
However, officials in three states we visited identified challenges in implementing fully modernized systems—primarily, federal administrative funding constraints. In these states, in the absence of full modernization, officials told us they responded by making modifications to selected capabilities within their existing systems or that they were considering adopting practices from other states that have upgraded their existing systems. More states that described their current IT systems as having significant limitations reported other challenges than did states with modernized systems. For example, more states reporting IT system limitations cited call handling times as a challenge to at least a moderate extent during the recession than did states with modernized systems. Officials in one state without a modernized system told us that because claimants cannot check status updates and other information online, they must phone the call center, which consumes significant staff resources for routine requests that a modernized system could handle easily.

Challenges Related to Federal Funding Constraints

Federal Legislative Changes to Temporary Benefits Program

During the recession, federal legislative changes included relatively frequent changes to a temporary federal benefits program. In GAO’s survey, 39 states (81 percent) cited federal legislative changes as challenges to a large or moderate extent during the recession.

Comments from State Unemployment Insurance Program Officials Related to Federal Legislative Changes:

State A: “…the excessive programming changes…caused program issues in often unrelated areas of the system which were difficult to diagnose (result of a very old and fragile system).
This often exacerbated the difficulty in meeting service demand and also created additional frustration (for) both staff and the public.”

State B: “(the temporary federal UI benefit program) was constructed…and administered…in a manner certain to create challenges for applicants and therefore for state programs. We never really had problems with the volume of actual applicants, but rather with the unnecessary volume of phone calls and administrative transactions…”

State C: “(the temporary federal UI benefit program) was difficult to administer due to the large number of law changes and their retroactive effective dates.”

Federal funding constraints were cited as a challenge to a large or moderate extent by more states during the last 12 months (28 of 48 states, or 58 percent) than during the recession (24 states, or 50 percent), as shown in fig. 4. All states rely on federal funding for the administration of their UI programs, and the total federal funding available to states has declined to pre-recession levels. Officials in one state we visited explained that after the recession ended, claim volumes decreased and, consistent with the funding model DOL uses to allocate funding to the states, federal administrative funding provided to the state was reduced. However, according to officials in this state, other aspects of UI program administration have remained constant or increased, such as efforts to identify and collect benefit overpayments, which also rely on federal funding. Moreover, in our survey, half of states said their UI programs receive no additional state administrative funds (24 states, or 50 percent), and nearly one quarter reported receiving less than 5 percent of their total administrative funds from their states (11 states, or 23 percent). While officials in all three states we visited cited limited federal administrative funding as a challenge, only one of these states requested and received additional state administrative funds.
Representatives from an advocacy group we spoke with in that state noted that the additional funds, which the state used to hire more call center staff, resulted in a significant reduction in blocked calls.

Some States Reported Facing Challenges Related to Claims Filed by Phone and Claims Processing Delays

Challenges Related to Claims Filed by Phone

Some states reported experiencing challenges related to claims filed by phone to a large or moderate extent, both during the recession and during the last 12 months (see fig. 5). According to a DOL regional office official and officials in one state we visited, challenges related to claims filed by phone are likely due to staffing issues. For example, if claimants are unable to reach program representatives by phone within a reasonable amount of time, it may indicate that the UI program has an insufficient number of call center staff. Also, 18 of the 39 states (46 percent) that responded in our survey about claims filed by phone reported that call handling times remain a challenge to at least a moderate extent. Officials in one state we visited explained that as the state has made it easier for claimants to manage their own claims, calls to the state UI program, including claimant calls directed from American Job Centers, are becoming more complex and may take longer to address. Officials in that state told us that call handling times remain challenging because program representatives have to balance the competing goals of completing calls quickly and meeting customer needs. In addition, 13 of 39 states (33 percent) reported continuing challenges related to calls abandoned by callers, and 8 of 39 states (21 percent) reported continuing challenges with calls dropped by the automated phone system, to at least a moderate extent. In addition, some states described challenges related to claims processing delays or backlogs.
For example, 16 of 48 states (33 percent) cited delays or backlogs in claims processing during the last 12 months as a challenge to a large or moderate extent, while these issues were challenges for 34 states (71 percent) during the recession. Although claim volumes have generally declined, since federal administrative funding is tied, in part, to the volume of claims, some states may face challenges in this area. In one state we visited, for example, officials said they feel understaffed despite having a lower workload, particularly when there are unexpected spikes in claim volumes due to seasonal unemployment fluctuations or changes in economic conditions.

Many States Reported Providing or Planning to Provide Self-Service and Other Practices to Ease Customers’ Claims-Filing Experiences

Many states reported that they took or are planning to take some actions to improve customer service, such as providing self-service options on state UI program websites and implementing automated telephone systems (see fig. 6). For example, almost all of the 48 states responding to our survey reported having provided self-service options on the state UI program website (94 percent) or implementing an automated telephone system (88 percent), both of which are designed to allow claimants to file their own claims with little or no staff assistance. Other commonly reported practices included posting “Frequently Asked Questions” on state UI program websites (94 percent) and providing customer service training to call center staff (77 percent). Other practices, though potentially helpful, have not been implemented as frequently. For example, 22 states reported having implemented virtual hold or courtesy callback on their phone lines, which allows claimants to opt out of holding and instead receive a call back, and an additional 9 states said they planned to do so. Several states have also undertaken efforts to streamline program processes to improve efficiency.
For example, officials in California cross-trained program staff so that all staff could answer incoming calls, and reduced call center hours from a full business day to a half day. Prior to this change, some call center staff answered calls while others worked on eligibility or payment issues throughout the day. As more staff are cross-trained in all functions, their competencies to answer all types of calls are enhanced, according to officials. Officials told us that these changes allowed them to answer more calls than with their previous staffing model in which staff specialized in different tasks. Officials described the changes as helping to create additional capacity in an environment of constrained resources. Additionally, in our survey, several states described efforts to streamline or modify their processes to improve services.

DOL Provides States with Monitoring and Assistance on Some Aspects of Customer Service, and Has Taken Steps to Share Successful State Practices

DOL’s Monitoring and Assistance Efforts Address Some Aspects of Customer Service

DOL’s ETA monitors states on a range of UI performance measures, some of which directly address certain aspects of customer service. Under the federal-state partnership, ETA sets overall program policy and monitors state performance, and states provide customer service to claimants. ETA sets acceptable levels of performance and monitors states on a total of 15 core measures within the following categories: 1) benefits, 2) program integrity, 3) appeals, 4) tax, and 5) reemployment. (For a comprehensive list of core performance measures, see appendix II.) According to ETA officials, while most of these measures indirectly address customer service issues, four of them directly address certain aspects of customer service. Specifically, ETA measures the timeliness of first payments to eligible claimants and of appeals decisions and assesses the accuracy of two types of non-monetary eligibility determinations.
Officials told us they consider these measures to be important indicators of customer service (see table 2). ETA provides states with technical assistance for the UI program overall, and provides technical assistance on customer service under certain circumstances. According to ETA officials, ETA’s six regional offices monitor state performance on a quarterly basis and provide technical assistance—which may address customer service—if states fail to meet acceptable levels of performance over a prolonged period of time. For example, if a state fails to pay benefits to claimants in a timely manner, ETA officials said they may review the state’s call center operations and provide related assistance. Officials told us they have also provided technical assistance when states have faced major customer service challenges that are not addressed in the performance measures, such as significant delays in answering calls. In addition, ETA provides ongoing technical assistance to states on program administration, which has occasionally addressed customer service issues. For example, officials said they have developed training, hosted webinars, provided supplemental funding, and maintained an online community of practice through which states can share information. According to officials, ETA is currently piloting an effort to ensure states are routinely assessing program operations and processes, complying with applicable laws, and effectively administering their UI programs. As part of this effort, officials expect that beginning in fiscal year 2017, states will conduct annual self-assessments of their benefit processes—including reviews of their call center operations and supporting IT infrastructure—and ETA will provide related assistance, such as sharing best practices for areas in which states are experiencing challenges.
ETA has also provided states with technical assistance or guidance on IT modernization, staffing issues, and program access, which were reported as customer service challenges by states and advocacy groups. ETA provides states with technical assistance on IT modernization by funding and overseeing the Information Technology Support Center (ITSC), which is operated by the National Association of State Workforce Agencies (NASWA). For example, in March 2015, ETA and ITSC issued a checklist to help states prepare to launch modernized UI IT systems. ITSC has also developed a comprehensive guide for UI IT modernization projects, among other resources. In addition, ETA has provided states with supplemental funding to establish consortia in which states work together to develop and share a common, modernized IT system. With respect to staffing issues, ETA officials said they have provided technical assistance under certain circumstances, such as when states have been unable to adequately staff their call centers due to high volumes of claims during the recent recession or the implementation of modernized IT systems. Regarding program access, in October 2015, ETA issued guidance directing states to ensure that all individuals—including individuals with limited English proficiency, individuals with disabilities, and older individuals—can effectively access the UI program.

Several States Reported They Could Benefit From More Information on Other States' Successful Practices, and ETA Plans to Share Such Practices on an Ongoing Basis

Officials in all three states we visited told us that they have consulted with other states to learn from their experiences administering the UI program. For example, officials said they have consulted with other states about process improvements and online chat capabilities, among other issues.
In two states, officials told us that they have contacted other states directly because they have sometimes been unable to obtain timely responses to questions posted on ETA’s online community of practice. In one of these states, officials explained that it can be challenging for staff to monitor the online community of practice on a regular basis and respond to other states’ requests for information in a timely manner. In a third state, officials said that they have contacted other states directly because officials are more willing to openly share challenges and lessons learned in private discussions. In addition, officials told us that NASWA has helped them gather information about other states’ UI programs. For example, officials in one state said NASWA helped them survey all states about their claims workloads and staffing, among other issues, which helped them determine that they have more claims per staff member than other states. While states are sharing some information, officials in all three states said their UI programs could benefit from more information about other states’ successful customer service practices. For example, officials in all three states said it would be helpful to have more information—beyond ETA’s recent guidance—on successful practices for serving special populations, such as limited English proficiency claimants and those with disabilities. In particular, officials in all three states said that their programs could benefit from more information about other states’ successful practices for addressing ongoing challenges, such as insufficient staffing and IT limitations for states that have been unable to implement fully modernized systems. ETA has taken some steps to help states share successful customer service practices, and plans to continue to help states do so—including those that the states we visited said would be helpful. 
According to ETA officials, one of ITSC's core activities is to collect lessons learned from state IT modernization efforts and disseminate them to states, including successful practices for partial modernization when states are unable to fully modernize their systems. In addition, ETA regional offices have helped states share successful customer service practices through ongoing technical assistance, including periodic conference calls and conferences, and by connecting states facing challenges with other states that have successfully addressed similar challenges. Furthermore, ETA is conducting a national study of UI call center operations, which officials expect will identify best practices on a range of issues, including staffing. Officials also expect that ETA's efforts to comprehensively assess state UI program operations and processes will identify best practices in a range of areas related to customer service, such as call center operations, IT infrastructure, and program access. When these efforts are completed, ETA plans to share the best practices it identifies with all states in its online community of practice.

Agency Comments

We provided a draft of this report to the Department of Labor for review and comment. The agency provided technical comments, which we incorporated where appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Labor, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. Please contact me at (202) 512-7215 or at brownbarnesc@gao.gov if you or your staff have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
Key contributors to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

We examined (1) the customer service challenges, if any, recent Unemployment Insurance (UI) claimants have faced, and the extent to which states collect information on claimants' challenges, (2) any challenges states have faced in providing customer service to UI claimants, and any improvements they have made, and (3) the extent to which the Department of Labor (DOL) monitors states' customer service efforts and provides assistance to help them make improvements. To address our objectives, we reviewed relevant federal laws, regulations, and guidance; conducted a survey of state UI programs; interviewed DOL and state UI program officials; conducted site visits to 3 states; and held 6 focus groups with recent UI claimants in all 3 site visit states. We also interviewed various stakeholders, including representatives of national associations and advocacy groups that represent UI claimants. We conducted this performance audit from November 2014 to May 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Survey of State UI Programs

To address all three objectives, we conducted a web-based survey of UI programs in all 50 states and the District of Columbia.
The survey included questions about available methods for filing UI claims in each state, the status of information technology (IT) systems modernization in each state, the data states collect related to customer service, the challenges that states have experienced with respect to UI claims during the recent recession and during the previous 12 months, the practices that states have implemented to improve customer service, and the views of state UI officials about the assistance that DOL has provided related to customer service. We received responses from 48 states, for a response rate of 94 percent. To minimize errors arising from differences in how questions might be interpreted and to reduce variability in responses that should be qualitatively the same, we conducted pretests with five state UI programs. To ensure that we obtained a variety of perspectives on our survey, we selected states with diversity on the following criteria: 1) Employment and Training Administration (ETA) region, 2) average unemployment rate, 3) UI claims volume, and 4) the status of IT modernization. In addition, we considered whether the state had been identified as employing certain practices related to customer service, such as conducting surveys of claimants. Based on feedback from these pretests, we revised the questionnaire in order to improve the clarity of the questions. An independent survey specialist within GAO also reviewed a draft of the questionnaire prior to its administration. After completing the pretests, we administered the survey. On June 25, 2015, we sent an e-mail announcement of the questionnaire to state UI directors, notifying them that our online questionnaire was available for them to complete, and provided them with unique passwords and user names. To encourage state UI programs to respond, we followed up with non-respondents by phone and email through October 6, 2015. We also followed up by email with state UI directors to clarify their survey responses. 
We collected responses through February 2016.

Analysis of Survey Responses and Data Quality

We used standard descriptive statistics to analyze responses to the questionnaire. Because we surveyed all states, the survey did not involve sampling errors. To minimize non-sampling errors and to enhance data quality, we employed recognized survey design practices in the development of the questionnaire and in the collection, processing, and analysis of the survey data. For instance, as previously mentioned, we pretested the questionnaire with state UI programs to minimize errors arising from differences in how questions might be interpreted and to reduce variability in responses that should be qualitatively the same. We further reviewed the survey to ensure the ordering of survey sections was appropriate and that the questions within each section were clearly stated and easy to comprehend. To reduce nonresponse, another source of non-sampling error, we followed up by email with states that had not responded to the survey to encourage them to complete it. In reviewing the survey data, we performed automated checks to identify inappropriate answers. We further reviewed the data for missing or ambiguous responses and followed up with states when necessary to clarify their responses. On the basis of our application of recognized survey design practices and follow-up procedures, we determined that the data were of sufficient quality for our purposes.

Site Visits and Focus Groups

Concurrently with our survey, we conducted site visits to three states: California, New York, and Texas. We selected these states because they had the largest number of new UI claims in their respective regions in calendar year 2014, which was the last year for which data were available. We also selected these states because they are located in geographically diverse ETA regions.
In each state, we interviewed state UI program officials, as well as officials at two ETA regional offices and organizations that advocate for UI claimants. The results from our interviews with state UI programs and ETA regional offices are not generalizable. In all three states, we conducted focus groups with recent UI claimants (see additional discussion below). We also identified and interviewed representatives of advocacy groups that represent UI claimants in our site visit locations. In part, we selected these groups to provide perspectives on the challenges faced by claimants with limited English proficiency and claimants with disabilities, who were not represented in our focus groups. The opinions expressed by these groups represent their points of view, and may not represent the views of all advocacy groups or the customer service experiences of all claimants with limited English proficiency and claimants with disabilities. In our interviews with state UI program officials, we asked about the data collected by the program, such as call handling times, and the ways the data are used; the extent to which and how the state UI program collected feedback from claimants; customer service challenges experienced by the state UI program; and practices to improve customer service. In our interviews with ETA regional offices, we asked about federal monitoring and technical assistance efforts and the extent to which they relate to customer service. In our interviews with advocacy organizations, we asked about the customer service challenges faced by state UI programs and by claimants, communications with claimants, access for special populations of claimants, ways in which the state UI program is currently working well, and state practices to improve customer service. 
To learn about recent UI claimants' challenges related to customer service, we conducted 6 focus group sessions with a total of 58 claimants at 3 locations, using a contractor to recruit and screen participants and record and transcribe the sessions. In order to recruit focus group participants, we provided participant selection criteria to the contractor. Specifically, we stipulated that potential participants be 21 years of age or older, speak English, have personally applied for UI benefits within the 12 months preceding the time the contract was awarded (from July 2014 to July 2015), and be able to provide written verification that they applied for UI benefits in their states. The contractor then contacted and screened potential participants from its database, and over-recruited a total of 15-20 individuals for each session as necessary to ensure that 8-10 eligible individuals participated. We conducted these focus groups in September 2015. These sessions involved structured small-group discussions designed to gain more in-depth information about specific issues that could not easily be obtained from another method, such as a survey or individual interviews. Consistent with typical focus group methodologies, our design included multiple groups with varying characteristics but some similarity on one or two homogeneous characteristics. In all focus groups, the participants had filed UI claims within the last 12 months in the state where the group was held. Most participants said they had filed their claims online or by phone, although other filing methods were represented. Our overall objective in using a focus group approach was to obtain views, insights, and feelings of UI claimants who had filed claims within the last 12 months.
Specifically, we wanted to learn about challenges they faced in filing claims, including their experiences with state UI program websites and phone lines, as well as their views about the courtesy and responsiveness of UI program staff, their thoughts about ways in which the state UI program is currently working well, their views about the timeliness of agency actions, their experience with information provided by the state UI program and opportunities to provide feedback, and their recommendations for improvement. By including UI claimants who had filed using different methods, and claimants who varied according to age, gender, ethnicity, and self-reported education level and income, we intended to gather a range of perspectives regarding state UI programs' customer service efforts. All of the participants selected for the focus groups were fluent English speakers. We selected three cities as focus group locations because they corresponded to the states we selected for site visits. We conducted two sessions in each of the three cities—Albany, New York; Austin, Texas; and Sacramento, California. Discussions were structured, guided by a moderator who used a standardized list of questions to encourage participants to share their thoughts and experiences. We conducted one pretest focus group session in Rockville, Maryland, prior to beginning our travel for the sessions. Each of the 6 focus groups was recorded and transcripts were created, which served as the record for each group. Those transcripts were then evaluated using content analysis to develop our findings. The analysis was conducted in two steps. In the first step, three analysts jointly developed a set of codes to track the incidence of various responses and themes during focus group sessions. In the second step, each transcript was coded by an analyst and then those codes were verified by two other analysts.
Any coding discrepancies were resolved by all three analysts agreeing on what the codes should be. Methodologically, focus groups are not designed to (1) demonstrate the extent of a problem or to generalize results to a larger population, (2) develop a consensus to arrive at an agreed-upon plan or make decisions about what actions to take, or (3) provide statistically representative samples or reliable quantitative estimates. Instead, they are intended to generate in-depth information about the reasons for the focus group participants' attitudes on specific topics and to offer insights into their concerns about and support for an issue. The projectability of the information produced by our focus groups is limited for several reasons. First, the information includes only the responses from recent UI claimants from the six selected groups. Second, while the composition of the groups was designed to ensure a range of age, gender, and ethnicity, the groups were not randomly sampled. Third, participants were asked questions about their experiences or expectations, and other UI claimants not in the focus groups may have had other experiences or expectations. Because of these limitations, we did not rely entirely on focus groups, but rather used several different methods to corroborate and support our conclusions.

Appendix II: Unemployment Insurance (UI) Performance Measures and Acceptable Levels of Performance

Benefits Measures

First Payment Promptness: % of all 1st payments made within 14/21 days after the week ending date of the first compensable week in the benefit year (excludes Workshare, episodic claims such as Disaster Unemployment Assistance, and retroactive payments for a compensable waiting period).
Nonmonetary Determination Time Lapse: % of Nonmonetary Determinations (Separations and Nonseparations) made within 21 days of the date of detection of any nonmonetary issue that had the potential to affect the claimant's benefit rights.

Nonmonetary Determination Quality - Nonseparations: % of Nonseparation Determinations with Quality Scores equal to or greater than 95 points, based on the evaluation results of quarterly samples selected from the universe of nonseparation determinations.

Nonmonetary Determination Quality - Separations: % of Separation Determinations with Quality Scores equal to or greater than 95 points, based on the evaluation results of quarterly samples selected from the universe of separation determinations.

Program Integrity Measures

Detection of Overpayments: % of detectable, recoverable overpayments estimated by the Benefit Accuracy Measurement survey that were established for recovery.

Benefit Year Earnings (BYE) Measure: Amount overpaid due to BYE issues divided by the total amount of UI benefits paid, expressed as a percentage.

Improper Payments Measure: UI benefits overpaid plus UI benefits underpaid minus overpayments recovered, divided by the total amount of UI benefits paid, expressed as a percentage.

UI Overpayment Recovery Measure: Amount of overpayments recovered divided by (amount of overpayments established minus overpayments waived), expressed as a percentage (for example, Improper Payments Information Act (IPIA) Reporting Year 2013 = July 1, 2012 – June 30, 2013).

Appeals Measures

Average Age of Pending Lower Authority Appeals: The sum of the ages, in days from filing, of all pending Lower Authority Appeals divided by the number of Lower Authority Appeals.

Average Age of Pending Higher Authority Appeals: The sum of the ages, in days from filing, of all pending Higher Authority Appeals divided by the number of Higher Authority Appeals.
Lower Authority Appeals Quality: % of Lower Authority Appeals with Quality Scores equal to or greater than 85% of potential points, based on the evaluation results of quarterly samples selected from the universe of lower authority benefit appeal hearings.

Tax Measures

New Employer Status Determinations Time Lapse: % of New Employer Status Determinations made within 90 days of the last day in the quarter in which the business became liable.

Tax Quality: Tax Performance System (TPS) assessment of the accuracy and completeness of the tax program determined by scoring, on a pass/fail basis, samples of the 13 tax functions.

Effective Audit Measure: Evaluates whether a state's employer audit program meets or exceeds minimum levels of achievement in the following four factors: Factor 1 - % of Contributory Employers Audited Annually, Factor 2 - % of Total Wages Changed from Audits, Factor 3 - % of Total Wages Audited, Factor 4 - Average Number of Misclassifications Detected per Audit, and meets or exceeds a minimum overall score of the four factors.

Reemployment Measure

Facilitate Reemployment: % of UI claimants who are reemployed within the quarter following the quarter in which they received their first UI payment.

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Cindy S. Brown Barnes, (202) 512-7215, brownbarnesc@gao.gov.

Staff Acknowledgments

In addition to the contact named above, Mary Crenshaw, Assistant Director; Divya Bali, Analyst-in-Charge; Daniel Berg, Caitlin Croake, Christopher Morehouse, and Betty Ward-Zuckerman made significant contributions to this report. Also contributing to this report were Jessica Botsford, David Chrisinger, Carol Henn, Jill Lacey, Kathy Leslie, Mimi Nguyen, Lisa Pearson, Carl Ramirez, Jerome Sandau, Almeta Spencer, Valerie Melvin, Margaret Weber, Charles Willson, and Charles Youman.
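The rate-based program integrity measures listed in appendix II can be illustrated with a small calculation. The following is a hypothetical sketch, not GAO's or ETA's actual computation; the function names and dollar amounts are invented for illustration only.

```python
# Illustrative sketch of two program integrity rate formulas described in
# appendix II, applied to hypothetical dollar amounts (in millions).
# Function names and figures are ours, not ETA's.

def improper_payment_rate(overpaid, underpaid, recovered, total_paid):
    """(benefits overpaid + benefits underpaid - overpayments recovered)
    divided by total benefits paid, as a percentage."""
    return 100 * (overpaid + underpaid - recovered) / total_paid

def overpayment_recovery_rate(recovered, established, waived):
    """Overpayments recovered divided by (overpayments established minus
    overpayments waived), as a percentage."""
    return 100 * recovered / (established - waived)

# Hypothetical amounts: $110M overpaid, $30M underpaid, $20M recovered,
# $1,000M total benefits paid.
print(improper_payment_rate(110, 30, 20, 1000))   # 12.0 (percent)

# Hypothetical amounts: $20M recovered, $50M established, $10M waived.
print(overpayment_recovery_rate(20, 50, 10))      # 50.0 (percent)
```

Under these made-up figures, the improper payments rate would be 12 percent and the overpayment recovery rate 50 percent; the actual measures are computed by ETA from state-reported data for each IPIA reporting year.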
Related GAO Products

Unemployment Insurance: States' Reductions in Maximum Benefit Durations Have Implications for Federal Costs. GAO-15-281. Washington, D.C.: April 22, 2015.

Managing for Results: Selected Agencies Need to Take Additional Efforts to Improve Customer Service. GAO-15-84. Washington, D.C.: October 24, 2014.

Information Technology: Department of Labor Could Further Facilitate Modernization of States' Unemployment Insurance Systems. GAO-12-957. Washington, D.C.: September 26, 2012.

Unemployment Insurance: Economic Circumstances of Individuals Who Exhausted Benefits. GAO-12-408. Washington, D.C.: February 17, 2012.

Managing for Results: Opportunities to Strengthen Agencies' Customer Service Efforts. GAO-11-44. Washington, D.C.: October 27, 2010.
UI benefits are a critical source of income for millions of unemployed Americans. Overseen federally by DOL and administered by states, the UI program requested $32.5 billion in benefits in fiscal year 2015 for approximately 7 million UI claims. During the 2007-2009 recession, states faced challenges processing record numbers of claims, and questions were raised about the quality of customer service. GAO was asked to review customer service issues in state UI programs. GAO examined (1) customer service challenges, if any, recent UI claimants have faced and the extent to which states collect information on claimants' challenges, (2) any challenges states have faced providing customer service to claimants, and any improvements they have made, and (3) the extent to which DOL monitors states' customer service efforts and provides assistance to help them make improvements. GAO surveyed state UI programs in 50 states and the District of Columbia (with 48 responding); interviewed officials from state UI programs and advocacy groups in California, New York, and Texas (selected for geographical diversity and large numbers of UI claims); and conducted six focus groups with recent UI claimants in these three states. Focus group results are not generalizable, but provide important insights into the experiences of some UI claimants. GAO also reviewed relevant federal laws and DOL guidance. In GAO focus groups of recent claimants filing for unemployment insurance (UI) benefits, those who filed by phone reported experiencing various challenges, such as long call wait times; however, those filing for benefits online reported that it was generally easy to do. In all six of the focus groups GAO conducted in the three states it visited, individuals who had claimed benefits by phone between July 2014 and July 2015 reported experiencing challenges with the state UI program's customer service. 
GAO defined customer service as including ease of program access, courtesy, timeliness, and accuracy, as well as responsiveness to customer needs and expectations. Some participants in all six focus groups reported experiencing long call wait times and difficulties using automated phone systems. In addition, in two of the states visited, representatives of advocacy groups reported that special populations, including individuals with limited English proficiency, had difficulty accessing the UI program. In GAO's survey of state UI programs, most states—including those GAO visited— reported collecting some data on customer service challenges for claimants. For example, 38 states reported collecting data on average call wait times. Many states reported challenges in providing customer service, including staffing issues, and most have taken some steps to improve customer service, such as increasing self-service options. Specifically, more than half the states GAO surveyed reported insufficient staffing, outdated Information Technology (IT) systems, and funding constraints, all of which could play a role in claimant challenges. Many states also reported that they have taken or are planning to take some actions to improve customer service. For example, 45 states reported taking action to provide self-service options, and 42 states reported taking action to provide customer service training to program representatives. The Department of Labor's (DOL) Employment and Training Administration (ETA) provides states with monitoring and assistance on some aspects of customer service. ETA monitors and measures state performance on the timeliness of benefit payments and appeals decisions. ETA also monitors and measures the accuracy of states' non-monetary eligibility determinations, such as whether states accurately assess reasons for claimants' separation from employment. In addition, ETA provides states with various technical assistance. 
ETA has provided states with assistance on IT modernization, staffing issues, and program access for special populations. UI program officials in the three states GAO visited said they could benefit from more information on other states' successful customer service practices, including practices for addressing continuing staffing and IT challenges. ETA plans to share these practices on an ongoing basis, officials said. For example, officials said ETA plans to share successful practices—including those related to staffing—obtained from its national study of call center operations through its online community of practice. ETA also plans to continue to collect lessons learned from state IT system modernization efforts and disseminate them to states.
Background

ERISA and Internal Revenue Code Requirements for Form 5500

Private pension plans are governed by ERISA and the IRC, whose annual reporting requirements may generally be satisfied through filing a Form 5500 Annual Return/Report of Employee Benefit Plan (Form 5500) and its accompanying schedules. Any sponsor of an employee benefit plan subject to ERISA must file detailed information about each benefit plan every year. To satisfy the annual reporting requirements, DOL, IRS, and PBGC jointly developed and maintain the Form 5500. The form is the primary source of information collected by the federal government regarding the operation, funding, expenses, and investments of employee benefit plans. In 2011, plan sponsors filed close to 700,000 Form 5500 reports, covering more than 130 million active participants and more than $6 trillion in investments. The Form 5500 return includes nine schedules and attachments that collect information on particular plan aspects and fulfill specific filing requirements, including plan investment and service provider fee information; plan financial condition; annual participant contributions; plan type (i.e., defined benefit or defined contribution); and number of participants. For example, Schedule C requires plans to provide information on fees charged by select service providers. Schedules H and I require large and small plans, respectively, to provide information on plan investments. Plans are also required to include attachments, such as a Report of Independent Qualified Public Accountant. Form 5500 data are made publicly available. Given the form's valuable information and its widespread use, we have conducted a number of reviews of it over the past few years, including examinations of the timeliness and content of the form, fee reporting, oversight of multiemployer plans, and the clarity of required disclosures.
Federal Role and Other Form 5500 Stakeholders

Form 5500 constitutes an integral part of the enforcement, research, and policy formulation programs of DOL, IRS, and PBGC—the primary federal agencies that use it to support their role in regulating and monitoring private plans. DOL's Employee Benefits Security Administration (EBSA) uses the Form 5500 as a tool to monitor and enforce the responsibilities of plan administrators, other fiduciaries, and service providers under Title I of ERISA. IRS uses the form to enforce standards that relate to such matters as how employees become eligible to participate in benefit plans, how they become eligible to earn rights to benefits, and the minimum amount employers must contribute. PBGC uses the form as a tool to inform its efforts to insure the benefits of participants in most private sector defined benefit pension plans and to help carry out its mission under Title IV of ERISA to encourage the continuation and maintenance of private pension plans. In March 2014, the DOL Office of Inspector General issued an audit report examining whether EBSA was effectively overseeing the Form 5500 reporting process. Federal agencies and Congress use information collected through the form to assess economic trends and develop policy initiatives. The Form 5500 is also a source of information and data for various nongovernmental stakeholders, who use it to assess employee benefit, tax, and economic trends and policies. Private plan sponsors are required to file annual reports concerning, among other things, the financial condition and operation of their plans. Form 5500 disclosure and reporting was intended to protect the interests of participants and their beneficiaries by, among other things, requiring disclosure and reporting to participants and beneficiaries of financial and other information.
Agency Initiatives

In 1999, DOL, IRS, and PBGC implemented a new computerized system called the ERISA Filing Acceptance System (EFAST) to improve processing of Form 5500 returns. Because there was no electronic filing requirement when EFAST was implemented, most forms were submitted on paper. In January 2010, DOL completed its effort to begin collecting Form 5500 information electronically, an initiative known as the ERISA Filing Acceptance System (EFAST2). DOL contracted with an information technology firm to design, develop, test, implement, operate, and maintain the EFAST2 system. While the form and its attachments are collected electronically, attachments are not all required to be in a single structured, data-searchable format. The move to EFAST2 and the implementation of an electronic filing requirement have addressed some long-standing issues with the timeliness of Form 5500 data, but additional timeliness concerns have not been addressed. Specifically, in a 2005 report we raised concerns about the timeliness of Form 5500 information due to statutory reporting requirements that allow filers a substantial amount of time to file the Form 5500, which did not change with the implementation of EFAST2. DOL issued regulations revising the form in November 2007 in an effort to facilitate the transition to an electronic filing system; reduce and streamline annual reporting burdens; and update the annual reporting forms to reflect current issues, agency priorities, and requirements of the Pension Protection Act of 2006. Among other things, these changes expanded the fee and compensation disclosures of the Form 5500. According to DOL officials, these changes were made to increase transparency regarding the fees and expenses employee benefit plans pay.
DOL officials explained that they also wanted to ensure that plan officials would obtain the information needed to assess the compensation paid for services rendered to the plan while taking into consideration revenue-sharing arrangements among plan service providers and potential conflicts of interest. Since the 2009 plan year, Schedule C has required plan sponsors to classify the fees they pay service providers as either “direct” or “indirect” compensation. These fees are separated into those the plan pays directly to a service provider and those that service providers receive from third parties (such as another service provider) or plan investments. DOL, IRS, and PBGC have established committees to work collectively on a number of initiatives related to the form. The tri-agency Form 5500 working group, which consists of representatives from all three agencies, meets monthly to propose updates to the Form 5500 and to discuss how to implement updates put forward by each agency. Proposed updates include minor technical changes, changes to the instructions, and changes that must be made as a result of statutory or regulatory changes. This working group facilitates discussion between the agencies, but according to DOL officials it is not a decision-making body. Each of the three agencies decides independently whether to proceed with changes to the form. Recently the three agencies established the 21st Century Initiative to examine, among other things, the possibility of modernizing the form by making major, substantive revisions to all its parts.
In addition, the initiative plans to examine the feasibility of streamlining the lengthy rulemaking process, which generally requires DOL to take several steps: obtaining review and clearance to publish from the Office of Management and Budget (OMB), publishing a notice of proposed rulemaking in the Federal Register, providing interested persons the opportunity to comment on the proposed regulation, and publishing a final regulation. EBSA’s Office of Technology and Information Services is focused on, among other things, improving the flexibility of the next EFAST system contract when the current contract expires in 2020. In 2012, OMB issued a memorandum to provide guidance to agencies on testing and simplifying federal forms that collect information from the public. The memorandum directed agencies to engage in advance testing of information collections—including lengthy and complex forms—where feasible and appropriate, in order to: (1) ensure that they are not unnecessarily complex, burdensome, or confusing; (2) discover the likely burden on members of the public (including small businesses); and (3) identify ways to make the form as simple and easy to use as possible. The memorandum further advised that advance testing should occur either before proposing information collections to the public or during the public comment period required by the Paperwork Reduction Act of 1995 (PRA), which sets forth the requirements that agencies must meet when collecting information. Stakeholders Cited Challenges with Plan Asset Reporting Format, Missing Information, and Inconsistent Data Form 5500 stakeholders identified challenges to the usefulness, reliability, and comparability of plan asset information. Through a two-phase online panel survey process, respondents first generated a wide range of challenges and potential changes related to Form 5500 plan investment information.
Then, in the second phase of the survey, respondents rated the challenges and potential changes identified in the first phase. The highest rated challenges fell into three categories: (1) challenges with the reporting format of plan asset categories, (2) challenges arising from missing key information, and (3) challenges with inconsistent information. Table 1 lists the top challenges that respondents to the second phase of the survey rated as having a very or extremely significant impact on their work with the Form 5500. Challenges with Plan Asset Categories Some Form 5500 stakeholders stated that the plan asset categories on Schedule H are not representative of current plan investments, which has several consequences. Ten of 31 respondents said that Schedule H’s breaking out plan assets differently than the investment industry typically reports this information poses a very or extremely significant challenge. This makes it difficult to determine how to properly categorize investments based on the asset categories provided, according to a plan fiduciary. For example, large plans typically use an investment manager for each asset category (such as fixed income, equities, hedge funds, or private equity), but Schedule H’s plan asset categories are broken out by the type of investment vehicle (such as partnerships, collective trusts, pooled separate accounts, or mutual funds). Thus, it can be difficult to understand the categories, apply them, and disaggregate the data by investment vehicle rather than asset class. As one pension consultant said, the asset categories provide little insight into the investments themselves, the level of associated risk, or the structures of these investments. Furthermore, as one preparer pointed out, the trust reports and the audited financial statement do not match Form 5500 asset categories—forcing plan sponsors and their service providers to produce multiple sets of information.
In addition, they often found inconsistencies, incomplete information, or miscategorized investments in the information service providers submitted. A second consequence of misaligned plan asset categories, according to Form 5500 stakeholders, is that the “other” plan asset category is too broad. The “other” category may contain a variety of disparate investments, such as options, index futures, state and municipal securities, hedge funds, and private equity. Seventeen of the 31 respondents indicated that the “other plan asset” category is too broad and poses a very or extremely significant challenge to their work with Form 5500 data. One respondent said that while hedge funds and private equity have very different risk, return, and disclosure considerations from state and municipal securities, all these investments could be included in the “other plan asset” category. This is a growing issue, as plan investment in some of these vehicles has grown considerably in recent years. As we found in 2012, according to a Pensions & Investments survey, the percentage of large plans (as measured by total plan assets) investing in hedge funds grew from 47 percent in 2007 to 60 percent in 2010, and the percentage of large plans that invested in private equity grew from 80 percent to 92 percent. Similarly, several respondents said they have seen forms where large portions of plan assets were recorded in the “other” category. A third consequence of misaligned plan asset categories is that the current reporting format of plan investments makes it difficult to see the underlying holdings of indirect investments. According to a recent study, large single-employer defined benefit plans invested about 64 percent of their total assets, on average, in four types of indirect investments in 2010 (see table 2). Thus, a majority of the assets of large single-employer defined benefit plans are reported only as “undifferentiated indirect investments” on Schedule H.
Similarly, large single-employer defined contribution plans in 2010 held about 34 percent of their total assets, on average, in indirect investments (see table 2). Almost half of our survey respondents indicated that Schedule H’s lack of transparency into the breakdown of investments held within each trust was a very or extremely significant challenge. Eight of 11 researchers and 5 of 6 participant representatives from our survey panel indicated this was a very or extremely significant challenge. One ERISA attorney we spoke to noted that he needed to hire a consulting actuary to clarify one plan’s financial investments. Without clear information on these indirect investment vehicles, it is more difficult for Form 5500 stakeholders to assess and analyze the risk of plans’ underlying investments using Form 5500 data. For example, PBGC officials acknowledged that plan asset information as currently reported is not very useful to their work and that they have to ask plans for additional information, such as actuarial reports and audited financial statements, to help identify the underlying assets in indirect investments and assess the financial health of the plan. The ability to see into the underlying holdings of indirect investments is further complicated by the difficulty of matching a plan’s investments and returns with those reported in the indirect investment’s filing. Seven of 11 researchers and 3 of 6 participant representatives from our panel cited difficulty in matching up an individual plan’s investments and returns from indirect investments as a very or extremely significant challenge. When plan assets are invested in indirect investments, plan sponsors file a Schedule D, which lists the plan’s interests in each indirect investment; the indirect investment’s filing then provides a breakdown of assets in its own Schedule H (see figs. 1 and 2).
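The matching exercise described above—linking a plan’s Schedule D entries to the indirect investment entities’ own filings—is essentially a join on the EIN and plan number each entry reports. The following minimal sketch uses hypothetical, simplified records (not the actual EFAST2 data layout) to show how a single mistyped identifier leaves an investment unmatched:

```python
# Hypothetical, simplified records; real Form 5500 filings carry many more fields.
# Each indirect investment entity's own filing is keyed here by (EIN, PN).
indirect_filings = {
    ("12-3456789", "001"): {"name": "Master Trust A", "total_assets": 50_000_000},
    ("98-7654321", "002"): {"name": "Pooled Separate Account B", "total_assets": 20_000_000},
}

# A plan's Schedule D lists its interests in indirect investments by EIN and PN.
schedule_d_entries = [
    {"ein": "12-3456789", "pn": "001", "plan_interest": 4_000_000},
    {"ein": "98-7654329", "pn": "002", "plan_interest": 1_500_000},  # mistyped EIN
]

def match_entries(entries, filings):
    """Split Schedule D entries into those that link to an indirect
    investment filing and those that do not."""
    matched, unmatched = [], []
    for entry in entries:
        key = (entry["ein"], entry["pn"])
        (matched if key in filings else unmatched).append(entry)
    return matched, unmatched

matched, unmatched = match_entries(schedule_d_entries, indirect_filings)
print(len(matched), len(unmatched))  # one entry links; the mistyped EIN does not
```

The study cited below found that 18 percent of reported plan investments could not be matched in this way, and in practice the problem compounds when indirect investments themselves hold other indirect investments, requiring several such joins per plan.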
Although indirect investments are required to file their own Form 5500, they report on their general asset holdings, not the holdings of individual plans invested with them. According to one respondent, there is no way to calculate the fees charged to individual plans. Furthermore, the information reported on Schedule D may not be reliable or complete and may not match the information in the indirect investment’s filing. According to a recent study of 2008 plan year Form 5500 data, more than 35 percent of plans that invested in indirect investments reported data on their Schedules H and D that were inconsistent. In addition to internal inconsistencies, the study showed that matching a plan’s indirect investments with the indirect investment’s own filing can be challenging, since 18 percent of reported plan investments could not be matched to a corresponding indirect investment filing. Furthermore, there may be multiple layers of indirect investment entities, which complicates efforts to identify the underlying investments individual plans hold because doing so requires linking multiple indirect investment filings to a single plan filing (see fig. 1). The top rated change to plan investment information suggested by respondents was to revise Schedule H asset categories to better match current investment vehicles (see appendix II). Consistent with the results of our survey panel, DOL, IRS, and PBGC officials all concurred that plan asset categories do not reflect investments in the current marketplace and should be revised. Challenges with Finding Key Information In addition to challenges with the reporting format, Form 5500 stakeholders identified various instances where information is not collected or cannot be easily extracted from Form 5500 data. For example, 11 of 31 respondents indicated that having no standard reporting format for the Schedule of Assets attachments (Schedule H line 4i) was a very or extremely significant challenge.
Attachments to the form can be large—some may be as long as 400 pages—presenting a challenge for users trying to find necessary information. Also, it can be difficult to conduct aggregate analyses of the information without a unique identifier, such as a Committee on Uniform Securities Identification Procedures (CUSIP) number, as one respondent suggested. Our survey results showed that while most researchers indicated that the lack of a standard reporting format or unique identifier for the plan assets attachment is a major challenge, representatives of plan sponsors and service providers and participant representatives did not. However, 22 of 31 respondents indicated that standardizing the format would be a somewhat positive or very positive change, and only 3 respondents indicated this change would have a negative impact. Both DOL and PBGC officials said there is no uniform format or uniform identifier for the Schedule of Assets attachments; however, they acknowledged that attachments would be more useful if they were submitted in a structured, data-searchable format. Respondents also identified additional plan investment information that would be beneficial to add to the Form 5500. First, some respondents—mainly researchers and participant representatives—indicated that the lack of detailed plan asset information on Schedule I (Financial Information—Small Plan) creates a very or extremely significant challenge for their work with the Form 5500. In addition, the Schedule H (Financial Information) is required only of larger plans (plans with 100 or more participants)—which accounted for approximately 12.5 percent of all plans that filed a form in 2011—leaving the vast majority of plan filings without this critical information. According to one respondent, without the Schedule H, it is not possible to build a complete picture of a plan’s health.
Schedule H and the accompanying attachments can help determine if a plan provides a variety of investment options, can help calculate the total cost of the plan for participants, and can provide a more complete understanding of plan investments. However, another respondent cautioned against treating small employers like large employers and burdening them with additional filing requirements to satisfy researchers or commercial data mining firms. Second, although the form requires plan sponsors to indicate if a plan includes an auto enrollment feature through the plan characteristic codes, it does not capture which default investment is used. While some respondents indicated that not knowing the default investment is a very or extremely significant challenge, capturing it was also a top-rated suggested change, with only one respondent indicating it would have a negative impact. For a list of all the top changes rated by our survey respondents, see appendix II. Challenges with Identifying Plans and Funds Consistently and Other Issues Form 5500 stakeholders also cited challenges with understanding instructions and interpreting inconsistently reported data. Eleven of 31 respondents indicated that inconsistent naming conventions and Employer Identification Numbers (EIN) throughout the form presented a very or extremely significant challenge. One plan preparer said that compiling and recording EINs and Plan Numbers (PN) for each entity recorded on Schedule D can be difficult and time-consuming because plan sponsors, trustees, or service providers do not always provide this information. In addition, one researcher stated that plans often report the wrong number for indirect investments, which hinders the ability to link the indirect investment’s filing with the plan’s filing and prevents accurate attribution of indirect investments. Thirteen of 31 respondents indicated that providing a central repository of EINs and PNs would be a very positive change.
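The identifier problem the respondents describe can be illustrated with a small sketch of a cross-year consistency check—flagging filings whose identifying fields change from one year to the next for what should be the same plan. The records, field names, and years below are hypothetical, not DOL’s actual EFAST2 specifications:

```python
# Hypothetical multi-year filings for what should be the same plan.
filings_by_year = {
    2011: {"ein": "12-3456789", "pn": "001", "plan_name": "Acme 401(k) Plan"},
    2012: {"ein": "12-3456789", "pn": "001", "plan_name": "Acme 401(k) Plan"},
    2013: {"ein": "21-3456789", "pn": "001", "plan_name": "Acme 401k"},  # identifier drift
}

def cross_year_inconsistencies(filings):
    """Compare each year's identifying fields to the prior year's and
    report every field that changed."""
    issues = []
    years = sorted(filings)
    for prev, curr in zip(years, years[1:]):
        for field in ("ein", "pn", "plan_name"):
            if filings[prev][field] != filings[curr][field]:
                issues.append((curr, field, filings[prev][field], filings[curr][field]))
    return issues

for year, field, old, new in cross_year_inconsistencies(filings_by_year):
    print(f"{year}: {field} changed from {old!r} to {new!r}")
```

A check of this kind can only flag candidate inconsistencies for follow-up—it cannot by itself tell a renamed plan from a genuinely different one, which is why respondents suggested a central repository of EINs and PNs.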
PBGC officials also acknowledged that inconsistent EINs and PNs can hinder their ability to analyze records across years. While IRS officials said they have a repository of EINs and assign EINs to filers, they do not share the EINs with the other agencies or make this information publicly available to filers. DOL has taken some steps to improve the reliability of the PNs, EINs, and names that filers report. For example, EFAST2 enables filers to identify previously used names and EINs, and form instructions state that filers should use the same name and PN as in previous years. In addition, DOL examines plans that stop filing Form 5500s as part of its enforcement activities, and often finds that plans have continued filing, but with inconsistent identification. According to DOL, in an effort to improve filer consistency in entering key identifying information, such as the EIN, PN, and plan name, DOL’s EFAST program office is developing specifications for cross-year edit checks. These checks aim to verify identifying information submitted on the Form 5500 in order to notify the filer and government agencies of instances where inconsistencies may exist. IRS works with filers to file amended returns to update and correct EINs when it becomes aware of an issue, according to IRS officials. Some respondents still find the timeliness of Form 5500 data to be problematic (see table 1). According to one independent fiduciary, current plan assets are usually quite different from the assets listed in the Form 5500. PBGC officials expressed frustration with Form 5500 reporting deadlines as well. The EFAST2 system has reduced the time it takes to publish Form 5500 data to the public once it is submitted. Under ERISA, however, plan sponsors have a normal deadline of 210 days after the end of the plan year to file and may then apply to IRS for an annual automatic one-time 2 ½ month extension.
Thus, plan sponsors can take up to 285 days from the end of the plan year to file their Form 5500 reports. In addition to the challenges and suggested changes mentioned above, respondents identified a number of other changes to Form 5500 investment information that could have a positive impact, such as clarifying when to use fair value versus contract value on Schedule H and whether a “contract administrator” referred to in Schedule H includes recordkeepers. For a complete list of suggested changes identified by respondents through our two-phase online panel survey, see appendix II. Stakeholders Cited Misalignment with Other Fee Disclosures and Inconsistent Reporting of Service Provider Fee Information as Problematic Stakeholders identified a wide range of challenges related to Form 5500 service provider fee information. Representatives of plan sponsors and their service providers cited the burden of being required to produce different sets of fee information—one for ERISA fee disclosures and one for the Form 5500. Participant representatives found the format confusing to understand and ineffective for estimating total plan costs. Researchers found the current Schedule C reporting format unhelpful because it provides an incomplete picture of plan fees and because aggregate data are not very reliable or comparable. Table 3 lists the top challenges respondents of our two-phase online panel survey identified as having a very significant or extremely significant impact on their work with the Form 5500. Form 5500 Service Provider Information Is Not Aligned with Other Plan Fee Disclosures More than half of respondents identified Schedule C reporting’s misalignment with similar service provider fee disclosures required by ERISA—such as ERISA 408(b)(2) disclosures—as an extremely or very significant challenge (see table 3).
As we reported in 2009, this misalignment creates competing sets of fee information for sponsors and service providers, contributes to confusion over what Schedule C requires, is time-consuming for plan sponsors to collect, and is costly for service providers to prepare (GAO-10-54, 21). DOL issued final 408(b)(2) regulations in 2012, but these regulations did not fully address our recommendation because no changes were made to harmonize the final 408(b)(2) service provider disclosures with Form 5500 Schedule C service provider reporting requirements. See Reasonable Contract or Arrangement Under Section 408(b)(2) – Fee Disclosure, 77 Fed. Reg. 5632 (February 3, 2012) (codified at 29 C.F.R. pt. 2550), and Amendment Relating to Reasonable Contract or Arrangement Under Section 408(b)(2) – Fee Disclosure/Web Page, 77 Fed. Reg. 41,678 (July 16, 2012) (codified at 29 C.F.R. pt. 2550). One respondent noted that the confusion partly stems from inconsistent definitions of indirect compensation—fees plans pay “indirectly” to service providers—for ERISA fee disclosures and Schedule C. Another respondent cited confusion among fund managers and plan recordkeepers over the ability for ERISA fee disclosures to satisfy the disclosure requirements for eligible indirect compensation, as defined in Schedule C. DOL officials acknowledged that these requirements have been confusing to plan sponsors and said they are considering addressing this issue as part of a joint DOL, IRS, and PBGC initiative to revise the form. Challenges with Inconsistent Reporting of Compensation Types Stakeholders identified various challenges to consistently reporting information on service provider compensation. Without consistent information, comparability across plans is limited and, therefore, identifying questionable fees may be difficult. Fourteen of 31 respondents indicated that unclear definitions of eligible indirect compensation, reportable indirect compensation, and direct compensation created a very or extremely significant challenge.
According to a 2013 letter from the American Society of Pension Professionals & Actuaries (ASPPA) submitted to DOL, ASPPA members noted that there have been conflicting interpretations of the instructions for Schedule C. For example, one disclosure might reflect certain compensation and expenses of a mutual fund as reportable on one line of Schedule C, while another disclosure for the same type of payments might be reportable on a different line of Schedule C. This uncertainty results in the same types of information being reported inconsistently, as most preparers are not inclined to challenge where the disclosure should appear and, therefore, typically report the information as it is provided to them. While this satisfies the plan’s disclosure obligations, it produces incomparable and less reliable data. Adding to inconsistencies in fee reporting is continuing confusion with the rules on soft dollar disclosure. Ten respondents indicated that confusion about the rules on soft dollar disclosure was a very or extremely significant challenge. DOL officials said that the department published frequently asked questions (FAQ) in 2009 and 2010, which included clarification on the reporting of soft dollar compensation, but some respondents indicated confusion and inconsistencies still exist. Stakeholders also identified challenges with collecting compensation information from service providers. Eleven of 15 representatives of plan sponsors and their service providers responding to our survey indicated that receiving service provider fee information in differing formats is extremely or very challenging. According to one preparer, the onus is on plan sponsors and their preparers to collect the necessary information for Schedule C, since there is no requirement for service providers to report compensation information in a format similar to Schedule C.
However, service providers may or may not interpret certain compensation as reportable, leaving DOL with incomplete and incomparable information. For example, service providers may not list certain breakdowns of compensation information in a format that is similar to the Form 5500. In addition, some forms of compensation are not included in information sent to plan preparers. One preparer said they may look at trust reports or public disclosures to try to determine compensation amounts, but compensation may not be reported in the format required for Schedule C and may not include all forms of indirect compensation. Furthermore, for indirect compensation to be considered “eligible” and thus reported on the Schedule C in a more limited fashion, the plan sponsor or plan administrator must receive written materials from the service provider. There is no requirement to send these materials to the plan preparer, so preparers often cannot independently verify that disclosure requirements have been met. Ten of 31 survey respondents indicated that this was a very or extremely significant challenge. Further, in 2010, the Investment Company Institute stated in a letter to DOL that there is a lack of consensus within the mutual fund industry regarding disclosure requirements for documents provided to plan sponsors that would also fulfill eligible indirect compensation disclosure for Schedule C. Challenges with Completeness of Service Provider Fee Information Stakeholders also identified issues with the completeness of service provider and fee information. For example, plan sponsors must report the name, address, and EIN of the service provider for eligible indirect compensation, but not the compensation amounts or services provided on Schedule C.
Compensation categorized as eligible can encompass many services and a significant portion of the compensation that plans and participants pay. In addition, service providers may also choose to disclose only the formulas they use to determine reportable indirect compensation, making it difficult for sponsors to calculate fees or understand business arrangements. For example, information on revenue sharing arrangements can be provided in descriptive format, and reporting may vary significantly from provider to provider. Fourteen respondents indicated that this was a very or extremely significant challenge. Plans with fewer than 100 participants are not required to complete a Schedule C (Service Provider Information). Furthermore, one respondent pointed out several other exceptions in the Schedule C reporting requirements that apply to large plans that have more than 100 participants. These include: stable value contracts (filers are not required to report certain insurance company costs and expenses); service providers paid less than $5,000, who do not have to be listed; and compensation reported on Schedule A, along with some associated fees that may not be recorded. A stable value contract is an insurance company general account investment that promises a guaranteed rate of return and takes into account various factors, including insurance company costs and expenses, in establishing the guaranteed crediting rate. DOL has said that such insurance company costs and expenses do not involve the insurer receiving reportable compensation for providing services, such as investment management services, for an investment fund portfolio in which the plan invests. Given these various exceptions to fee reporting requirements, Schedule C may not provide participants, the government, or the public with information about a significant portion of plan expenses, which limits the ability to identify fees that may be questionable.
Most stakeholders indicated that DOL should clarify Schedule C instructions so that plan fees are reported more consistently. Popular suggestions on how to achieve this included: improving instructions for direct, eligible indirect, and reportable indirect compensation (19 of 31 respondents said this would have a very positive impact); improving consistency between DOL’s definition of direct compensation and generally accepted accounting principles (14 of 31 respondents said this would have a very positive impact); and eliminating any distinction between eligible and reportable indirect compensation (13 of 30 respondents said this would have a very positive impact). See appendix II for a complete list of these and other top suggested changes related to fees based on responses from our two-phase online panel survey. DOL officials said that the definition of indirect compensation was intentionally broad to ensure comprehensive reporting of hidden service provider fees. However, they agreed that further clarification of fee reporting is being considered as part of a joint DOL, IRS, and PBGC initiative to revise the form. In 2009, we recommended that asset-based fees be explicitly reported. We also recommended that DOL provide additional guidance regarding the reporting of indirect compensation and require that all indirect compensation be disclosed. DOL generally agreed with our recommendations; however, these recommendations remain open. Challenges with Service Codes Stakeholders also identified inconsistent reporting of the service codes used to describe the types of services provided and compensation received by service providers as a challenge. Thirteen of 31 survey respondents indicated inconsistent reporting of service codes on Schedule C was a very or extremely significant challenge. Without definitions for each service code, the filing community may interpret these codes differently, creating incomparable information across filings.
Preparers must select from 55 service codes, and one respondent said that several service codes appear to overlap, making it difficult to understand the differences between them without more guidance. Plan sponsors and their preparers primarily rely on the service codes that service providers submit, and sometimes service providers report only one code regardless of the number of services they provide. Other service providers break out compensation by service code, creating a record in Schedule C for each service code instead of including several service codes in one entry, which can result in double-counting. One preparer said they may look at the service provider contracts to determine what services were provided and what corresponding service codes to include, but the contracts do not always reflect all the services provided or provide compensation amounts attached to those services. Thirteen of 29 respondents indicated that reducing and simplifying the number of service codes and adding definitions would be a very positive change. Recently, ASPPA offered agency officials an example of simplified service codes that could serve as a model. DOL, IRS, and PBGC Face Administrative, Statutory, and Contractual Challenges to Collecting More Useful Form 5500 Information Agencies’ Administrative Processes for Revising the Form and Limited Efforts to Solicit Stakeholder Input Pose Challenges DOL, IRS, and PBGC stated that the process for making form changes is lengthy and involved, and also noted that it varies by agency. DOL officials noted their view that any material changes to the Form 5500 require the use of the informal rulemaking process under the Administrative Procedure Act (APA), which they said can be a time- and resource-intensive process. IRS and PBGC, on the other hand, view the form as a data collection instrument and handle changes to the form in compliance with the Paperwork Reduction Act (PRA).
Regardless of whether any changes are made to the form, under PRA, each agency is required to solicit public comments on proposed collections of information, such as Form 5500, every 3 years. However, officials from each agency noted that the comments they receive through the PRA process are generally limited because the notice is focused on reducing respondent burden. As a result, DOL officials told us these comments are not typically useful in gaining insight into the retirement industry’s perception of the challenges they experience with the Form 5500. Through a separate effort, IRS also solicits input from stakeholders through the Information Reporting Program Advisory Committee (IRPAC), which consists of members of the IRS tax form professional community, such as accounting firms and financial services firms, and meets approximately five times a year to discuss issues with all IRS forms. Published IRPAC reports for the last 3 years have included only one recommendation related to making material changes to the Form 5500. In addition, according to PBGC, the Intersector Group, which is composed of representatives from the American Academy of Actuaries, Society of Actuaries, Conference of Consulting Actuaries, and ASPPA, meets twice a year with PBGC and IRS to discuss regulatory and other issues affecting pension practice, including issues involving the Form 5500. Request for Public Comment: Proposed rules are published in the Federal Register and include a request for public comment; the standard comment period is 60 days. In developing the final rule, the agency considers comments submitted by the public in response to the proposed regulation. The final regulation sets forth the regulatory changes and may vary from the proposed rule. Final rules determined to be “significant” are subject to review by OMB prior to their publication in the Federal Register. The agencies also use informal methods to solicit stakeholder feedback, but do not systematically collect feedback on an ongoing basis.
Officials noted that they attend industry conferences and other venues throughout the year to maintain informal, ongoing communication with the filing community and often receive comments or suggestions for changes to the form. Additionally, DOL and IRS have call centers that handle questions related to the form, although we found the information collected from these calls was limited. Specifically, DOL officials told us they only collect substantive data on more complex questions call center staff are unable to answer. These calls—representing about 10 percent of calls—are forwarded to DOL’s Office of the Chief Accountant. Similarly, IRS officials noted that the call centers do not track substantive data on the types of questions received and call center staff do not answer specific questions on the schedules, but may forward them to knowledgeable IRS officials. These officials track information on the questions they receive, which can be used to gain some insight into issues such as lack of form clarity. While PBGC does not have its own Form 5500 call center, the DOL call center forwards specific questions or issues to PBGC. However, officials from all three agencies noted that calls are not generally about suggesting changes to the form. In addition to using call centers, stakeholders may choose to provide the agencies with comments on various aspects of the form at any time. DOL provided us with a few examples of letters it received from stakeholders regarding suggested improvements. These letters included suggestions also mentioned by our panelists, including aligning service provider fee disclosure requirements and standardizing plan asset reporting. Despite agencies’ efforts to solicit stakeholder input through rulemaking and other means, stakeholders expressed concerns about their ability to fully participate in the Form 5500 change process.
Specifically, several made the following observations about the agencies’ efforts to solicit input: agency officials are not clearly conveying to stakeholders at conferences and other venues that they wish to solicit feedback and comments; the agencies’ overall efforts to solicit informal input are not apparent; DOL and IRS did not appear interested in the thoughts of the filing community regarding changes to the form; and DOL and IRS generally do little to solicit input from non-government stakeholders even when taking into consideration the notice and comment process, although the agencies solicit direct input from stakeholders on an ad hoc basis. Additionally, while the agencies have solicited comments through rulemaking processes and other means, several stakeholders found that these methods do not necessarily allow the public to contribute to the changes that would be most beneficial for non-federal stakeholders. One stakeholder stated that it would be beneficial if the agencies consulted both preparers and the employee benefits plan industry early in the development of changes using focus groups and other means of advance testing. However, none of the agencies have used advance testing methods, such as focus groups, in-person observations, or testing of users’ perception of forms and questions, to obtain non-governmental stakeholder input into changes to the form, despite recent OMB guidance advising that such a process be used for complex forms. While DOL officials did not view these techniques as helpful—stating that such testing would add costs and time and that they were not sure it would help gather constructive, actionable feedback from the filing community—the agency has used focus groups in the rulemaking process to improve the readability of disclosure notices.
IRS officials have also noted the potential value in using advance testing methods, stating that some agency sections use these methods in the development and revision of tax forms and that IRS was willing to consider it for the Form 5500. Further, officials noted that the IRS considered advance testing when developing an annual report pension plans file to identify separated participants with deferred vested benefits, although they ultimately decided to use a notice and public comment process. Subsequent changes to the form were adopted due to confusion that one official noted may have been avoided had the form been advance tested. Statutory Prohibition on Mandatory Electronic Filing Limits IRS Information Collection on the Form 5500 The IRS has significantly limited its Form 5500 data collection due to a statutory prohibition on mandatory electronic filing. Currently, the IRS is prohibited by statute from requiring persons who file fewer than 250 returns annually to submit information electronically, with limited exceptions. Consequently, IRS removed all information collected exclusively for its benefit from the Form 5500 once DOL moved the form to an exclusively electronic filing platform in 2010. As a result, some IRS-only information is no longer collected in any manner, while other information that IRS is statutorily required to collect was moved to newly created paper-based forms. For example, information on minimum coverage—used to determine whether employees that qualify for retirement benefits are properly included—is no longer collected. Conversely, the IRS issued Form 8955-SSA starting in plan year 2009 to satisfy its statutory obligation to provide these data to the Social Security Administration (SSA). Since IRS stopped collecting certain data elements using the Form 5500, it has expressed concerns about its ability to effectively conduct enforcement activities and to remain current with any statutory and other changes to plans.
Further, IRS enforcement officials told us they are no longer able to include data that had been collected via the Form 5500 in the risk models used to identify plans for audits. The Treasury Inspector General for Tax Administration (TIGTA) examined this issue in 2011 and found that information formerly collected was used to identify funding or minimum coverage requirement issues and to conduct special projects targeting potentially noncompliant retirement plans. Both TIGTA in its report and IRS officials were concerned that the lack of these data would impair IRS’s ability to effectively focus on specific indicators of noncompliance when selecting plans for examination. Specifically, officials noted such information would allow them to better identify noncompliant plans, which would, in turn, reduce plan burden and unnecessary examinations. IRS officials also told us that the agency has resorted to obtaining much of these data while conducting audits rather than using them to inform the selection of plans to audit, preventing it from taking an efficient risk-based approach to enforcement. Additionally, IRS officials were concerned that their inability to require electronic data collection further hindered their ability to enforce tax provisions that also serve to protect pension benefits, as they are unable to revise the form to collect new plan data critical to compliance and enforcement. Although officials stated that anything short of a statutory change would fall short of achieving the full efficiency of mandatory electronic filing, IRS has made other efforts to foster electronic filing and to allow for more flexibility in making changes to the form.
Specifically, IRS has: negotiated with DOL to use the form to collect some data that would be of interest to IRS (pursuant to DOL’s authority in light of the limitation on IRS’s authority); included, in the past 4 years of the President’s Budget Request, a legislative proposal to provide Treasury with the authority to require additional information be included in electronically filed Form 5500 annual reports; and initiated regulatory action governing electronic filing by proposing, in August 2013, regulations that would require electronic filing for plan sponsors and others who file more than 250 returns annually. As we have previously reported (GAO-05-491), maintaining both paper and electronic processing systems is costly—with paper potentially costing over 20 times more than electronic. Agencies’ Concerns with Contractual Costs Limit Action on Desirable Form Changes DOL, IRS, and PBGC have identified form changes that would improve data collection, but have made limited changes under the current EFAST2 contract. The tri-agency Form 5500 working group meets regularly to identify and discuss annual changes to the form; however, as DOL officials noted, the working group is not a decision-making body, and the ultimate decision to make changes—with any associated administrative and fiscal burdens—rests with each agency under its various authorities. According to IRS and DOL officials, in 2010 the agencies agreed to impose a “no change year,” disallowing material changes that would need to be pursued through the rulemaking process, so that the EFAST2 contractor could focus on any problems associated with the new system rollout. After 2010, according to DOL, each agency could independently request form changes at its discretion based on its respective authority, budget, and policy considerations and constraints.
However, IRS cannot currently make changes to the electronic form under the EFAST2 contract because the agency is statutorily prohibited from requiring electronic filing. Generally, the agencies have made minimal changes to the plan investment and service provider fee information in the form. According to a recent DOL IG report, from fiscal years 2010-2012, the tri-agency Form 5500 working group proposed 13 form changes, of which 7 were adopted. Adopted changes were minor or pursuant to new laws. The working group is finalizing its revisions for 2014, and has begun its discussions for 2015. While officials from DOL and PBGC stated that they could elect to make changes to the form under the current EFAST2 contract, specific contractual limitations apply. The contract provides that the contractor shall update and adapt EFAST2 to accommodate changes in the form from plan year to plan year. The contract further provides that the contractor is responsible for changes that are “typical” in number and substance. Specifically, the contract provides that typical changes are generally minor and related to system functionality. Under the current contract, the contractor is responsible for implementing a typical level of annual changes into the system at no additional cost to the government. For example, the agencies did not incur additional costs for the seven minor or legislative changes made by the tri-agency working group between fiscal years 2010 and 2012. However, the contract stipulates that in the event of non-typical changes, the contractor and government shall work collaboratively to determine an approach for working through the contractual and/or cost implications. Given the potential additional costs associated with non-typical changes, among other reasons, DOL and PBGC officials generally expressed reluctance to make desirable changes, including those identified during the annual tri-agency review.
While DOL, IRS, and PBGC officials expressed concern that non-typical changes would be costly, none of the agencies have obtained cost estimates for potential changes. Despite this lack of information, agency officials have concluded that any cost above current contract costs would be too costly to implement. For example, in addition to the added cost of making non-typical changes, DOL officials noted that accommodating such changes to the Form 5500 in EFAST2 would require contract modifications. Agency officials also said they view the contract provision related to non-typical changes as inflexible, despite the ability to negotiate these changes and associated costs. To address these concerns, DOL officials told us that as part of their 21st Century Initiative, they have begun to prepare for the next contract, scheduled for 2020, and are considering various contractual and system development methodologies that could provide more flexibility in adjusting the processing system to accommodate significant form changes. However, officials noted that such additional flexibility would likely increase the cost of the contract. Conclusions The Form 5500 is the primary source of information covering $6.35 trillion in pension assets in plan year 2011 and the over 130 million participants and beneficiaries relying on these funds for a secure retirement. For years, we and others have raised concerns regarding the data collected via the Form 5500. The challenges identified by our broad-based panel, while not generalizable to the broader filing community, provide insight into the difficulties that preparers, researchers, and participant advocates still face with respect to reporting and analyzing critical information on plan investments and fees. The challenges preparers expressed are particularly troubling, as DOL and IRS depend on this information to conduct crucial compliance and enforcement activities.
As OMB guidance has suggested, poorly designed and unduly complicated forms can prove difficult and confusing to complete. Moreover, if enforcement agencies are working from incomplete, inconsistent, and incomparable data to understand, assess, and oversee plans, vital enforcement activities may be at risk. Despite these long-standing concerns, agency officials have made only routine changes to the plan investment and service provider fee information in the form over the last 3 years. While the rulemaking process and other informal efforts to solicit stakeholder input have provided opportunities for public reaction to proposed changes to the form, these opportunities have been limited and they have not, as OMB guidance suggests, allowed for sufficient input to help shape changes to the form. This input could reduce the agencies’ costs of making subsequent changes, improve filer comprehension, and increase the comparability and reliability of data provided. In addition, it is important that all agencies have the authority to require useful information be collected in a more efficient electronic reporting format. Electronic filing would present an opportunity to reduce costs and potential errors, increase the quantity and quality of information available, and allow for more timely use of the data in protecting the retirement security of 130 million participants. IRS currently lacks the authority to fully address this challenge and, without legislative intervention, will not have all the information it needs to protect the nation’s retirement assets. The 21st Century Plan initiative provides a unique opportunity to address long- standing challenges, identify ways to meet the needs of both federal and non-federal stakeholders, and significantly improve the efficiency, usefulness, and integrity of the information collected. 
However, the inherent risks of incomplete, inconsistent, and incomparable data warrant immediate action and, as the only nationally representative data on over $6 trillion in employee benefits, it is critical that these data be of the highest quality. Recommendations for Executive Actions To improve the usefulness, reliability, and comparability of Form 5500 data for all stakeholders while limiting the burden on the filing community, we recommend that the Secretaries of DOL and Treasury, and the Director of PBGC, consider implementing the findings from our panel when modifying plan investment and service provider fee information, including: revising Schedule H plan asset categories to better match current investment vehicles and provide more transparency into plan investments; revising the Schedule of Assets attachments to create a standard searchable format; developing a central repository of EINs and PNs for filers and service providers to improve the comparability of form data across filings; clarifying Schedule C instructions for direct, eligible indirect, and reportable indirect compensation so plan fees are reported more consistently and, as we recommended in the past, better align with the 408(b)(2) fee disclosures; and simplifying and clarifying Schedule C service provider codes to increase reporting consistency. To ease the burden on preparers and ensure the collection of consistent and reliable data, we recommend that the Secretaries of DOL and Treasury, and the Director of PBGC, conduct advance testing—such as focus groups, in-person observations, and testing of users’ perception of forms and questions—as appropriate and before proposing major changes to the form for public comment, in addition to their other outreach efforts.
Matter for Congressional Consideration To improve IRS’s enforcement and compliance efforts, decrease the administrative and financial burden of maintaining both electronic and paper-based form processing systems, and reduce plan reporting costs, Congress should consider providing the Department of the Treasury with the authority to require that the Form 5500 series be filed electronically. Agency Comments and Our Evaluation We provided a draft of this report to DOL, Treasury, and PBGC for review. PBGC generally agreed with our recommendations. DOL and Treasury did not state whether they agreed or disagreed with our recommendations, but they stated that actions are underway that would address our first recommendation. GAO continues to believe the recommendations are valid. DOL, Treasury, and PBGC comments are reproduced in appendices IV, V, and VI, respectively. PBGC agreed with our first recommendation and stated that the form’s current plan asset categories do not provide users with the means to identify the nature of plan investments and the level of investment risks, adding that improvements in this area are critical to PBGC’s efforts to protect and sustain its insurance programs and monitor plan financial status. PBGC also stated it will work with Treasury and DOL as part of the 21st Century Initiative to address challenges identified in our report. DOL and Treasury did not state whether they agreed or disagreed with the recommendations. DOL stated that in 2013 DOL, Treasury, and PBGC initiated an overall re-examination of the Form 5500, as part of the 21st Century Initiative, whose scope includes all of our recommendations for improvement of Form 5500 plan investment and service provider fee information. While we agree that the 21st Century Initiative was formed during our review in 2013, DOL only provided internal deliberative materials that reflected preliminary thoughts on possible areas for change in the form.
Furthermore, our recommendation calls for more than just identification of potential improvements; it calls for action to be taken by the agencies to implement these modifications to the form. Treasury noted that the concerns identified in our study appear valid and that the 21st Century Initiative would address some of the suggested changes in our recommendation. However, Treasury noted that it would defer to DOL on suggested changes to service provider fee information since Schedule C is solely within DOL’s jurisdiction. Regarding developing a central repository of Employer Identification Numbers and Plan Numbers, Treasury said that it could improve the comparability of form data across filings and that it would need to evaluate the impact of this recommendation in light of major, ongoing initiatives across the IRS to reduce the risk of identity theft and protect taxpayer privacy. PBGC agreed with our second recommendation, and stated that it would work with DOL and Treasury to explore options to conduct advance testing when making revisions to the form. Treasury and DOL did not state whether they agreed or disagreed with the recommendation. Treasury said it would consult with DOL and PBGC on the potential benefits and costs of conducting advance testing before proposing changes to the form. DOL noted that advance testing can be helpful in some cases, but expressed concerns that conducting advance testing of Form 5500 changes would require additional expense and delay in its lengthy process of making changes to the form. DOL said that changes to the form already include a very transparent and public process of notice and comment rulemaking. However, as noted in the report, stakeholders we spoke to expressed concerns about their ability to fully participate in the Form 5500 change process, noting that agency efforts to solicit feedback were not apparent.
DOL also stated that the success of the Form 5500 depends on the careful management of software, technology, procurement, and regulatory structures and on input from the Form 5500 reporting community, which includes a myriad of stakeholders. It is for these reasons that we believe DOL, Treasury, and PBGC should conduct advance testing prior to proposing major changes to the form. Such input could help ensure that any changes are understood by the service provider and filing community, improve the reliability of the information reported, potentially reduce the burden of annual reporting, and provide greater transparency to all users of form data. We recognize the complexity involved in making changes to the Form 5500 and the variety of stakeholders affected by any changes. Because of this complexity, we believe it is important to increase input from non-government stakeholders to minimize misunderstanding and confusion and reduce the need to make additional revisions after changes are implemented in various systems. This outreach would also have the added benefit of increasing user awareness and understanding of what the form requires. In responding to our draft report, DOL also expressed concern regarding our characterization of recent changes to the Form 5500. Specifically, DOL stated that the report criticizes the agencies for making “minimal changes to the form over the last 3 years” and asserts that the process of making form changes has largely been impeded by “reluctance” on the part of the agencies to engage in negotiations with the contractor that operates EFAST2 about the possible costs of such changes. DOL stated that the Form 5500 recently was subjected to a major public notice and comment revision in connection with the agencies’ move to a wholly electronic form processing system (EFAST2), and identified other important changes that have occurred over the 3-year period referenced in our report.
However, the changes mentioned in DOL’s comments do not relate to Form 5500 plan investment and service provider fee information. Nonetheless, we amended our report to include these actions taken by the agencies to make changes to other areas of the form. In our report we acknowledge that the agencies face significant administrative, statutory, and contractual challenges to collecting and revising the Form 5500. For example, we noted DOL’s lengthy informal notice and comment rulemaking process under the Administrative Procedure Act (APA) as well as Treasury’s statutory prohibition on mandatory electronic filing. As we noted in the report, while DOL, IRS, and PBGC officials expressed concern that non-typical changes would be costly, none of the agencies have obtained cost estimates for potential changes. Each agency also provided technical comments, which we incorporated as appropriate. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to the Secretary of Labor, the Secretary of the Treasury, and the Director of the Pension Benefit Guaranty Corporation. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7215 or jeszeckc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. Appendix I: Objectives, Scope, and Methodology Our study sought to examine the following research objectives: 1. Aspects of the Form 5500 Annual Return/Report of Employee Benefit Plan (Form 5500) information on plan investments stakeholders find problematic, 2. Aspects of the Form 5500 information on service provider fees stakeholders find problematic, and 3.
Challenges the Department of Labor (DOL), the Department of the Treasury’s Internal Revenue Service (IRS), and the Pension Benefit Guaranty Corporation (PBGC) face in collecting and revising key annual reporting information on plan investments and service provider fees needed from plan sponsors. To address these research objectives, we used various databases and search tools to identify agency, industry, and academic publications and notices on Form 5500 reporting requirements. We also reviewed regulations and agency guidance on changes to these requirements. We focused our review on our key objectives: disclosure requirements regarding plan investments and service provider fees. This research helped us develop key themes for our two-phase survey of non- government stakeholders. Additionally, we reviewed relevant federal laws, regulations, and agency documentation and guidance related to the administration and maintenance of the Form 5500 series, including proposed rules, documentation of current initiatives aimed towards Form improvement, and the ERISA Filing Acceptance System (EFAST2) vendor contract. To address the research questions in terms of recent changes to Form 5500’s disclosure requirements and possible issues related to the form, we conducted interviews with knowledgeable individuals and organizations and reviewed relevant agency documents. Specifically, we conducted interviews with relevant officials at DOL, IRS, and PBGC. We also conducted interviews with non-government parties who interact with the Form 5500 in different ways—representatives of plan sponsors, service providers, retirement consultants, attorneys, and researchers—to learn how they use the form and what recent challenges, if any, they have encountered. 
In addition, we reviewed a nonrepresentative sample of 20 Form 5500 Annual Return/Report filings—10 defined contribution plan filings and 10 defined benefit plan filings—from DOL’s 2012 Freedom of Information Act EFAST2 Form 5500 data set to find potential examples of some of the challenges identified by non-government Form 5500 stakeholders. Specifically, we limited our universe of plan filings based on the following criteria: filings with a plan year ending in calendar year 2012, filings for a defined benefit or defined contribution plan, filings that included a Schedule C, a Schedule H, and a Schedule of Assets attachment, and filings with a non-zero value in the “Other” plan asset category in Schedule H. Based on these criteria, our universe included 4,782 defined contribution plans and 1,361 defined benefit plans. We then randomly selected 10 records each from the defined benefit and defined contribution plans in the universe for review. We chose to pull our sample from the Form 5500 dataset for the 2012 plan year because it was the most current and accurate dataset available as of December 2013. To assess the reliability of DOL’s data, we held data reliability discussions with EBSA officials to understand the limitations of the data and ensure we were using the correct variables to develop our universe of plans to sample. We found the dataset to be sufficiently reliable for the purposes of our study. Two-phase Online Panel Survey To further address the first and second research questions, we gathered the opinions of a non-representative sample of professionals outside government who interact with the Form 5500 or rely on it in their work by conducting a two-phase questionnaire survey of a panel of 43 non-government stakeholders.
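The screening and sampling steps described above can be sketched in code. This is a minimal illustration only: the record fields used here (plan year end, plan type, schedules attached, and the Schedule H “Other” asset value) are hypothetical stand-ins, since the actual EFAST2 FOIA extract has its own column layout, and the report does not describe the tool GAO used to draw the sample.

```python
import random

def eligible(filing):
    """Apply the four screening criteria: a plan year ending in 2012,
    a defined benefit (DB) or defined contribution (DC) plan, Schedules C,
    H, and a Schedule of Assets attached, and a non-zero 'Other' plan
    asset value on Schedule H."""
    return (
        filing["plan_year_end"].startswith("2012")
        and filing["plan_type"] in ("DB", "DC")
        and {"C", "H", "Sch of Assets"} <= set(filing["schedules"])
        and filing["sch_h_other_assets"] != 0
    )

def draw_sample(filings, per_type=10, seed=0):
    """Randomly select per_type defined benefit and per_type defined
    contribution filings from the eligible universe."""
    rng = random.Random(seed)  # seeded so the draw is reproducible
    universe = [f for f in filings if eligible(f)]
    db = [f for f in universe if f["plan_type"] == "DB"]
    dc = [f for f in universe if f["plan_type"] == "DC"]
    return rng.sample(db, per_type) + rng.sample(dc, per_type)
```

Seeding the random generator makes the selection reviewable and repeatable, which matters when a sample must be documented for audit purposes.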
We identified initial candidates for this panel from the following sources: contacts developed from past reports related to employee benefit plan investment and service provider fee information; participants in our 2011 and 2012 Comptroller General’s Retirement Security Advisory Panel sessions; Form 5500 users identified by DOL; large pension and investment plan administrators and plan sponsors; and contacts obtained during interviews and research for our present study. To ensure we had a range of views in our panel, we identified stakeholders from several backgrounds: representatives of plan sponsors, participants, and service providers; researchers and academics; and other subject matter experts from relevant national organizations. We contacted an initial list of 102 stakeholders to explain our research, invite them to participate in our panel, and ask for the names of other potential panelists who may have had expertise in this area. This allowed us both to expand our initial list of potential panelists and to validate the relevance of the potential panelists we had already identified. To be eligible for the panel, we required stakeholders to have at least 5 years of experience with the Form 5500. We categorized eligible panelists into one of three groups: representatives of participants, representatives of plan sponsors and their service providers, and researchers. Of the 102 stakeholders we contacted, 43 were eligible and agreed to participate in our two-phase panel survey. Panelists had an average of 16 years of experience with the Form 5500. In the first phase, we asked this sample of panelists to identify challenges with the Form 5500 plan investment and service provider fee data collection, and to suggest changes that could improve the efficiency, clarity, and usefulness of those data. We compiled their answers and aggregated them into lists of distinct challenges and suggested changes.
In the second phase, we asked panelists to rate the importance and potential impact of those challenges and suggested changes, respectively. The numbers of panelists surveyed and the sample outcomes of this non-probability sample over both phases are shown in table 4. Additional details of the design and administration of the two phases are provided below. In the first phase of the survey, which was conducted from May 28 to July 9, 2013, we asked panelists questions about their roles and experiences related to the Form 5500 and asked them to describe, in two open-ended questions, any specific challenges that (1) plan investment information, and (2) service provider and fee information collected on the Form 5500 presented for their work. We then asked them, in two corresponding open-ended questions, to suggest potential changes to improve the form. The wording of the four key open-ended questions is presented in table 5 below. Thirty-five of the 43 panelists completed the first phase of the survey; because two of the original panelists were determined ineligible during the survey due to insufficient experience with the form, this resulted in an 85 percent response rate (35 of 41 eligible panelists). Panelists who did not complete the first phase were dropped from the panel and did not participate in the second phase. We performed a content analysis on the first-phase responses to the four open-ended questions to aggregate responses into lists of distinct challenges and suggested changes. Two GAO analysts developed an initial coding scheme and coded each participant’s responses together and, when necessary, updated the coding scheme to reflect participants’ responses. We re-contacted respondents when a response to be coded was unclear. Any disagreements in coding decisions were discussed until consensus was reached. A third GAO analyst reviewed a sample of the coded responses to check the validity of the coding decisions.
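The response-rate arithmetic above can be made explicit with a small helper. The figures are taken directly from the text (43 panelists invited, 2 later found ineligible, 35 completing Phase 1); the function itself is just a restatement of that calculation, not any GAO-specific formula:

```python
def response_rate(completed, invited, ineligible=0):
    """Response rate among eligible panelists, rounded to a whole percentage."""
    eligible_panelists = invited - ineligible
    return round(100 * completed / eligible_panelists)

# Phase 1: 35 of 41 eligible panelists (43 invited, 2 ineligible) completed.
phase1_rate = response_rate(completed=35, invited=43, ineligible=2)  # 85 percent
```

The same helper reproduces the overall 78 percent figure reported for both phases (32 usable responses out of the same 41 eligible panelists).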
We used the lists of coded answers to each of the four survey questions to develop closed-ended rating questions for the second phase of the survey. In the second phase of the survey, which was conducted from October 23 to December 11, 2013, we asked each panelist to rate the significance of the challenges and the impact of the suggested changes the panel had collectively identified in the first phase. Specifically, in the second phase panelists were asked whether each potential challenge was “Not at all,” “Slightly,” “Moderately,” “Very,” or “Extremely” significant or if they had “No basis to judge,” and were asked whether each suggested change would have a “Very negative,” “Somewhat negative,” “Neutral,” “Somewhat positive,” or “Very positive” impact or if they had “No basis to judge.” Usable responses were provided by 32 of the 35 participants in the second phase, resulting in an overall response rate of 78 percent across both phases (32 of 41 eligible panelists). Because the surveys were not administered to a probability sample of all Form 5500 stakeholders, the results cannot be statistically generalized to all Form 5500 stakeholders; the results describe only the experiences and opinions of the stakeholders we included in our panel. Tabulations of the results of the Phase 2 ratings and corresponding question wording are presented in appendix II. We administered both phases of the survey over the Internet. For both phases, we sent each stakeholder an email invitation to complete the survey on a GAO web server using a unique username and password. Because our sample of panelists constituted the entire, unique population of stakeholders we had identified, the survey results are not subject to sampling error. However, the practical difficulties of conducting any survey may introduce other errors. We took steps to minimize errors in measurement, from nonresponse, and in data processing.
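One way to tabulate the Phase 2 closed-ended ratings described above is to count responses per scale category while setting aside “No basis to judge” answers. This is a generic sketch of that tabulation, not the analysis code GAO actually used; the scale labels are quoted from the survey description:

```python
from collections import Counter

SIGNIFICANCE_SCALE = ["Not at all", "Slightly", "Moderately", "Very", "Extremely"]

def tally_ratings(responses, scale=SIGNIFICANCE_SCALE):
    """Count substantive ratings for one survey item, excluding
    'No basis to judge' answers, reported in scale order."""
    counts = Counter(r for r in responses if r != "No basis to judge")
    return {level: counts.get(level, 0) for level in scale}
```

For example, tallying the responses ["Very", "Extremely", "Very", "No basis to judge", "Moderately"] yields two "Very" ratings, one "Extremely", one "Moderately", and zeros elsewhere, with the "No basis to judge" answer excluded from the counts.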
To minimize the risk of measurement error, we designed draft questionnaires for both phases in close collaboration with GAO survey specialists. Two independent GAO staff members familiar with Form 5500 provided technical comments on the draft questionnaires. We pretested the questionnaires in paper form with five individuals representing the range of our stakeholders (representatives of plan sponsors and their service providers, representatives of plan participants, and researchers). We revised the questionnaires based on these efforts before we finalized the surveys. We applied eligibility criteria in the selection of our panelists to ensure that they had sufficient qualifications for their role. We performed quality checks on response data to identify and edit specific response errors. To reduce the impact of nonresponse error, panelists received several emails and phone calls encouraging them to complete the surveys. To increase participation and the validity of answers, panelists were informed that their personally-identifiable information would not be linked to their responses either during the survey or in the final report. We examined the distribution of those not responding across the three stakeholder categories in the survey population, and determined that response rates did not differ markedly across them. A second analyst checked the accuracy of all computer analyses to minimize the likelihood of errors in data processing. We conducted our work from November 2012 to June 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
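The response-rate arithmetic described in this appendix can be sketched as follows; a minimal check using only the panel counts reported above (43 invited, 2 ineligible, 35 phase-1 completions, 32 usable phase-2 responses):

```python
# Response-rate arithmetic for the two-phase stakeholder panel,
# using the counts reported in this appendix. Ineligible panelists
# are removed from the denominator before computing rates.
invited = 43            # panelists originally invited to phase 1
ineligible = 2          # determined ineligible during the survey
eligible = invited - ineligible
phase1_completed = 35   # completed phase 1 and advanced to phase 2
phase2_usable = 32      # usable responses in phase 2

phase1_rate = phase1_completed / eligible   # phase 1 response rate
overall_rate = phase2_usable / eligible     # overall rate across both phases

print(f"Phase 1 response rate: {phase1_rate:.0%}")   # 85%
print(f"Overall response rate: {overall_rate:.0%}")  # 78%
```

This reproduces the 85 percent and 78 percent figures reported above, confirming that both rates use the eligible panel (41) as the denominator.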
Appendix II: Selected Results for Suggested Changes to Plan Investment and Service Provider Fee Information from the Phase 2 Survey of our Two-phase Form 5500 Stakeholder Panel

In the second phase of the survey of our two-phase Form 5500 stakeholder panel, we asked each panelist to rate the significance of the challenges and impact of the changes the panel had collectively identified in the first phase. In the second phase, panelists were asked whether each potential challenge was “Not at all,” “Slightly,” “Moderately,” “Very,” or “Extremely” significant or if they had “No basis to judge,” and were asked whether each suggested change would have “Very negative,” “Somewhat negative,” “Neutral,” “Somewhat positive,” or “Very positive” impact or if they had “No basis to judge.”

For plan investment changes, we specifically asked: How much of a positive or negative impact (please consider costs, resources, burden, and benefit) for your (and your clients’) work would each of the following potential changes to Form 5500 have on the overall clarity, efficiency, or usefulness of plan investment information? Table 6 lists the top plan investment potential changes indicated by survey respondents.

For service provider fee changes, we specifically asked: How much of a positive or negative impact (please consider costs, resources, burden, and benefit) for your (and your clients’) work would each of the following potential changes to Form 5500 have on the overall clarity, efficiency, or usefulness of service provider and fee information? Table 7 lists the top service provider fee potential changes indicated by survey respondents.

Appendix III: 2013 Form 5500 and Select Schedules

Appendix IV: Comments from Department of Labor

Appendix V: Comments from Department of Treasury/Internal Revenue Service

Appendix VI: Comments from the Pension Benefit Guaranty Corporation

Appendix VII: GAO Contact and Staff Acknowledgments

GAO Contact

Charles A. Jeszeck, Director, (202) 512-7215 or jeszeckc@gao.gov.

Staff Acknowledgments

In addition to the contact named above, David Lehrer (Assistant Director), Suzanna Clark, Carl Ramirez, Ryan Siegel, Michael Silver, Salvatore Sorbello, and Amber Yancey-Carroll made key contributions to this report. Also contributing to this report were Amy Bowser, Holly Dye, Kathy Leslie, Kristine Hassinger, Sheila McCoy, Jonathan McMurray, Libby Mixon, Mimi Nguyen, Jason Palmer, Roger Thomas, Robyn Trotter, Craig Winslow, and Jill Yost.

Related GAO products

Private Pensions: Clarity of Required Reports and Disclosures Could Be Improved. GAO-14-92. Washington, D.C.: November 21, 2013.

Private Pensions: Revised Electronic Disclosure Rules Could Clarify Use and Better Protect Participant Choice. GAO-13-594. Washington, D.C.: September 13, 2013.

Private Sector Pensions: Federal Agencies Should Collect Data and Coordinate Oversight of Multiple Employer Plans. GAO-12-665. Washington, D.C.: September 13, 2012.

401(k) Plans: Increased Educational Outreach and Broader Oversight May Help Reduce Plan Fees. GAO-12-325. Washington, D.C.: April 24, 2012.

E-Filing Tax Returns: Penalty Authority and Digitizing More Paper Return Data Could Increase Benefits. GAO-12-33. Washington, D.C.: October 5, 2011.

401(k) Plans: Improved Regulation Could Better Protect Participants from Conflicts of Interest. GAO-11-119. Washington, D.C.: January 28, 2011.

Private Pensions: Additional Changes Could Improve Employee Benefit Plan Financial Reporting. GAO-10-54. Washington, D.C.: November 5, 2009.

Private Pensions: Government Actions Could Improve the Timeliness and Content of Form 5500 Pension Information. GAO-05-491. Washington, D.C.: June 3, 2005.
The Form 5500 is the primary means of collecting information for use by the federal government and the private sector on retirement plan assets, which exceeded $6 trillion in fiscal year 2011. Stakeholders, including those who prepare and use the form, have raised concerns about the quality and usefulness of form data. GAO was asked to review Form 5500 plan investment and fee information. In this report, GAO examined: (1) stakeholder problems with Form 5500 plan investment information; (2) stakeholder problems with Form 5500 service provider fee information; and (3) challenges DOL, IRS, and PBGC face in collecting and revising Form 5500 information. GAO surveyed a panel of plan sponsors, service providers, representatives of plan participants, and researchers; interviewed agency officials; and reviewed studies on Form 5500 data. In a two-phase online GAO survey, stakeholders identified problems with the usefulness, reliability, and comparability of data from the Form 5500 (see table). Despite longstanding concerns with the Form 5500—the annual report that employee benefit plans file with the federal government—agency officials have made only minimal changes over the last 3 years.

Key Challenges Identified with Form 5500

Stakeholders said the form's information on service provider fees was misaligned with other required fee disclosures, and also cited various exceptions and gaps in current reporting requirements as major challenges. Specifically, Form 5500 service provider fee information does not align with other information that service providers must disclose to plan sponsors, forcing providers to produce two different sets of information. Also, differences in service provider compensation types and the lack of definitions for codes designating the types of services provided can result in inconsistent and incomplete data being reported. Other exceptions and gaps in service provider information result in an incomplete picture of plan fees.
For example, large plans—those with 100 or more participants—are not required to report fee information for certain types of compensation and small plans file only limited fee information. The Department of Labor (DOL), the Internal Revenue Service (IRS), and the Pension Benefit Guaranty Corporation (PBGC) face significant administrative, statutory, and contractual challenges to collecting and revising the annual reporting information required for regulating private pensions. While the rulemaking process and other informal efforts to solicit stakeholder input have provided opportunities for public reaction to proposed changes to the form, these opportunities have been limited and have not included the advance testing OMB guidance suggests. Stakeholder input could lower costs by reducing subsequent changes, improve filer comprehension, and increase the comparability and reliability of the form's data. Additionally, a statutory prohibition against requiring electronic filing caused IRS to remove certain data elements from the Form 5500 after DOL mandated electronic filing of the form. If IRS were able to require electronic filing, it could add the data elements back to the form, which would improve its compliance, restore robust information to its enforcement activities, and decrease its data collection costs.
Background Because of the catastrophic nature of flooding and the difficulty of adequately predicting flood risks, private insurance companies have largely been unwilling to underwrite and bear the risk of flood insurance. Under NFIP, the federal government assumes liability for flood insurance losses and sets rates and coverage limitations, among other responsibilities. Since its inception, NFIP, to a large extent, has relied on the private insurance industry to sell and service policies, as Congress envisioned when it authorized the program in 1968. The authorizing legislation provides broad authority for FEMA to work with the private insurance industry, and over time, FEMA has utilized several arrangements with private insurers, including with companies themselves and with a single vendor. Because of customer complaints and stagnant policy growth, in 1983, FEMA established the WYO program. According to FEMA, the goals of the WYO program are to increase the NFIP policy base and the geographic distribution of policies, improve service to NFIP policyholders through the infusion of insurance industry knowledge, and provide the insurance industry with direct operating experience with flood insurance. In 1986—the first year of the WYO program—48 WYO insurance companies were responsible for about 50 percent of the more than 2 million policies in force. As of September 2008, about 90 WYO insurance companies accounted for 97 percent of the nearly 5.6 million policies in force at that time. Because WYOs are not risk-sharing insurers, they are not paid an explicit profit percentage or amount. Private insurers become WYOs by entering into an arrangement with FEMA (the Financial Assistance/Subsidy Arrangement) to issue flood policies in their own name. The insurers must have experience in property and casualty insurance lines, be in good standing with state insurance departments, and be capable of adequately selling and servicing flood insurance policies. 
They must also comply with the provisions of FEMA’s Control Plan, which outlines the companies’ responsibilities for program operations, including underwriting, claims adjustments, cash management, and financial reporting, as well as FEMA’s responsibilities for management and oversight. WYOs adjust flood claims and settle, pay, and defend all claims arising from the flood policies. Insurance agents from these companies are the main point of contact for most policyholders. Based on information the insurance agents submit, WYOs issue policies, collect premiums, deduct an allowance for commission and operating expenses from the premiums, and remit the balance to NFIP. In most cases, insurance companies hire subcontractors—flood insurance vendors—to conduct some or all of the day-to-day processing and management of flood insurance policies. When flood losses occur, policyholders report them to their insurance agents, who notify the WYO insurance companies. The WYO companies review the claims and process approved claims for payment. FEMA reimburses the WYO insurance companies from the National Flood Insurance Fund for the amount of the claims plus expenses for adjusting and processing the claims, using rates that FEMA establishes. Claims amounts may be adjusted after the initial settlement is paid if claimants submit documentation showing that some costs were higher than estimated. FEMA Does Not Systematically Consider WYOs’ Actual Expenses When Setting Payment Rates FEMA does not systematically consider actual flood insurance expense information when it determines the amount it pays WYOs for selling and servicing flood insurance policies and adjusting claims. Since the inception of the WYO program, FEMA has used proxies to determine the rates at which it pays WYOs. For example, payments for operating expenses are determined annually based on the average industry operating expenses for five lines of property insurance. 
WYOs’ actual flood insurance expense information has been available since 1997, when the companies began reporting the data to NAIC. However, FEMA has not systematically considered these data when setting its payment rates, and thus does not determine in advance the amounts built into payment rates for estimated expenses and profit. Further, FEMA has not, after the end of each year, compared the WYOs’ actual expenses to payments it makes to the WYOs. Because FEMA does not routinely take WYOs’ actual flood expenses into account when calculating payments and does not analyze actual payments and WYO flood insurance expenses, it does not have the information it needs to determine whether its payments are appropriate and how much profit is included in its payments to WYOs. FEMA has occasionally modified its methods for determining the amount of expense payments, but only the last of these modifications, made in 2008, has taken into account the amount of actual WYO insurance expenses. In 2001, FEMA increased its payments to WYOs for servicing flood policies by an additional 1 percent of written premiums after some WYOs told FEMA that the payment amounts, based on the proxy used, were not sufficient to cover their operating expenses. FEMA did not take into consideration WYOs’ actual expenses in making these additional payments, which have continued each year since 2001 and totaled about $25 million in fiscal year 2007. However, we found that the payments to the six WYOs we reviewed exceeded their actual operating expenses even before these payments were increased by an additional 1 percent of written premiums. FEMA did consider actual flood insurance expenses in 2008 when it changed its method of paying claims processing expenses.
Beginning in fiscal year 2008—in response to the significant increase in total payments made to WYO companies in fiscal year 2005 and 2006 following the 2004 and 2005 hurricanes—FEMA changed its method for paying claims processing expenses to take into account actual flood expense data obtained from a selected number of WYO companies. These examples illustrate the benefit of considering actual flood expense data in administering the WYO program. We recognize that the consistency of WYOs’ reporting to NAIC needs to be improved in order for data on the companies’ expenses to be fully utilized. For example, we found that, among other things, some companies reported their flood insurance expenses to NAIC after offsetting them with the payments they received from FEMA. We also found that the actual expenses of one of the six companies we reviewed included payments made under service agreements with an affiliated company that may include profit distributions that should not be included in the expense amounts considered when setting payment rates. Nevertheless, we were able to use NAIC flood insurance data, supplemented with information obtained from WYO company officials, to compare the actual flood insurance expenses our six selected companies incurred and the payments they received for calendar years 2005 through 2007. We found that FEMA’s payments exceeded the companies’ actual expenses by $327.1 million, or 16.5 percent of total payments made. Our results highlight the importance of FEMA’s considering actual flood expense data in administering the WYO program. In accordance with our Standards of Internal Control in the Federal Government, FEMA should ensure that its payment rates to WYOs are appropriate by, for example, comparing payments with actual flood insurance expenses. 
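As an illustration of the payments-versus-expenses comparison described above, the two aggregate figures reported (a $327.1 million excess that equaled 16.5 percent of total payments) imply the underlying totals below. This is a rough back-calculation for illustration, not GAO's actual methodology; the implied totals are derived from the stated figures rather than taken from the report:

```python
# Back-calculating the aggregate totals implied by the figures above:
# payments to the six WYOs exceeded their actual expenses by $327.1M,
# which was 16.5 percent of total payments made (CY 2005-2007).
excess = 327.1          # $ millions, payments minus actual expenses
excess_share = 0.165    # excess as a fraction of total payments

total_payments = excess / excess_share    # implied total payments
total_expenses = total_payments - excess  # implied actual expenses

print(f"Implied total payments: ${total_payments:,.1f}M")   # ~$1,982.4M
print(f"Implied actual expenses: ${total_expenses:,.1f}M")  # ~$1,655.3M
```

The back-calculation shows the scale of the comparison: roughly $2 billion in payments against roughly $1.7 billion in actual expenses over the three calendar years.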
Further, federal managerial cost accounting standards state that reliable cost information is critical to the proper allocation and stewardship of federal resources and that actual cost information is an important element agency management should consider when setting payment rates. FEMA Has Not Aligned Its Bonus Structure with Its Long-Term Goals for NFIP FEMA has not aligned its bonus structure for WYOs with its goals for NFIP, such as increasing penetration in low-risk flood zones, among homeowners without federally-related mortgages in all zones, and in geographic areas with repetitive losses and low penetration rates. Instead, FEMA uses a broad-based distribution formula that awards a bonus of 0.5 percent to 2 percent of the premiums collected if WYOs achieve a 2 percent to 5 percent net growth in policies on an annual basis. This formula primarily rewards companies that are new to NFIP, when it is easiest to increase the percentage of net policies from a small base. Further, we found that most WYOs generally offered flood insurance when requested but did not strategically market the product as a primary insurance line. As a result, any sales increases may in fact result from external factors that are outside the companies’ control, rather than from marketing efforts—factors such as flood events, changes in the housing market, and economic developments. For example, sales of flood insurance tend to rise after flooding events, and FEMA’s Floodsmart media marketing campaign, which also has a goal of increasing flood policies by 5 percent annually, may affect flood insurance sales. Moreover, FEMA does not review the WYOs’ marketing plans and therefore lacks the information needed to assess the effectiveness of either the WYOs’ efforts to increase participation or the bonus program itself. The Government Performance and Results Act of 1993 requires agencies to conduct systematic studies to assess how well programs are working.
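The broad-based bonus formula described above pairs a 2-to-5 percent net policy growth band with a bonus of 0.5 to 2 percent of premiums collected. The report does not give FEMA's exact schedule within that band, so the sketch below assumes a simple linear interpolation between the stated endpoints purely for illustration:

```python
def bonus_rate(net_growth: float) -> float:
    """Hypothetical WYO bonus rate as a fraction of premiums collected.

    Assumes (for illustration only) a linear schedule between the
    endpoints reported: 2% growth -> 0.5% bonus, 5% growth -> 2% bonus.
    FEMA's actual schedule within the band is not given in the report.
    """
    lo_growth, hi_growth = 0.02, 0.05   # qualifying net growth band
    lo_bonus, hi_bonus = 0.005, 0.02    # corresponding bonus rates
    if net_growth < lo_growth:
        return 0.0        # below the qualifying growth threshold
    if net_growth >= hi_growth:
        return hi_bonus   # capped at the top of the band
    frac = (net_growth - lo_growth) / (hi_growth - lo_growth)
    return lo_bonus + frac * (hi_bonus - lo_bonus)

# Under this sketch, a WYO growing net policies 3.5% annually would earn
# a bonus of about 1.25% of premiums collected.
print(bonus_rate(0.035))
```

The sketch also makes the report's point concrete: because the bonus keys only on percentage growth, a small new entrant can hit the band far more easily than a large incumbent, regardless of marketing effort.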
When program results could be influenced by external factors, agencies can use intermediate goals to identify the program’s discrete contribution to a specific result. Although a study funded by FEMA suggested that the agency should focus on increasing market penetration in low-risk flood zones, in targeted geographical areas, and in small, special high-risk flood hazard areas, FEMA has not set targeted market penetration goals beyond its 5 percent goal of increasing policy growth. Having intermediate targeted goals could help expand program participation, and linking such goals directly to the bonus structure could help ensure that NFIP and WYO goals were in line with each other. FEMA Followed Some but Not All of Its Internal Control Requirements and Procedures FEMA has explicit financial control requirements and procedures for overseeing the WYO program. FEMA’s Control Plan provides guidance for WYOs that is intended to ensure compliance with the statutory requirements for NFIP and that contains several checks and balances to help ensure that taxpayers’ funds are spent appropriately. The plan has four major components that include requirements for: (1) monthly data and financial reporting, (2) claims reinspections by FEMA’s contractor, (3) various audits by independent CPAs, including required biennial audits, audits for cause, and state insurance department audits, and (4) triennial operation reviews by FEMA staff. FEMA’s Standards Committee is responsible for ensuring that participating companies are complying with the requirements. For the 10 WYOs in our sample, FEMA followed some but not all of the requirements and procedures of the Control Plan and did not systematically track the outcomes of the various audits, inspections, and reviews. Our review of FEMA’s records for these WYOs showed the following: FEMA collected nearly all of the required monthly data submissions. 
WYOs from our sample whose claims were selected for reinspections were reinspected according to the Control Plan’s methodology, and evidence of these activities was provided. Biennial audits and underwriting and claims triennial reviews were also mostly implemented. FEMA officials said that they focused on claims and underwriting reviews because these areas were the most important to determining whether claims reimbursements to WYOs were appropriate. Other audits, including audits for cause, state insurance department audits, and marketing, litigation, and customer service triennial operation reviews, were rarely or never implemented. FEMA officials said that they no longer performed marketing, litigation, and customer service operations reviews because each of these functions was being reviewed by other means. However, FEMA could not provide us with evidence that these reviews met the Control Plan’s requirements. In addition, we found that WYO compliance with each component of the Control Plan was the responsibility of multiple units, and FEMA did not maintain a single, comprehensive monitoring system that would allow it to ensure compliance with all components of the plan. That is, FEMA did not centrally store WYO-specific evaluations, inspections, audits, or reviews that were to be performed in accordance with the Control Plan. FEMA officials told us that various staff within FEMA or its contractor were responsible for ensuring that appropriate documentation of oversight efforts was maintained. These officials told us that there was no centralized access, either physical or electronic, to all of the documentation produced in overseeing WYOs under the Control Plan. Systematically tracking compliance with the Control Plan could ensure that participating WYOs are collecting appropriate premiums and making appropriate claims payments.
Since most payments made to WYOs are based on premiums collected and claims paid, adequate enforcement of the Control Plan is important to ensuring that WYOs are being compensated appropriately. Because FEMA does not implement all aspects of the Control Plan, it cannot ensure that the WYOs are fully complying with program requirements. Alternative WYO Program Administrative Structures Could Be Used to Incorporate Competition into the Payment Process FEMA’s current relationship with WYOs facilitates insurance companies’ participation in NFIP. But, as previously discussed in this report, this relationship is based on a payment structure that may not reflect the actual expenses these companies incur. We examined three alternative administrative structures that could replace NFIP’s payment arrangement with a competitively awarded contract that could lower costs for selling and servicing flood insurance policies and administering claims: contracting with one or more insurance companies, contracting with a single vendor (similar to the NFIP Direct program), or contracting with multiple vendors and maintaining the WYO network. Each of these alternatives has advantages and disadvantages in terms of the potential impact on the basic operations of administering flood insurance policies and adjusting claims, as well as on FEMA’s oversight of the program and its contractors. For example, contracting with one or more insurance companies might lower FEMA’s costs for the program through competitive bidding. But most insurance company officials we spoke to said that they did not want to be federal contractors because of the regulations that would apply and emphasized that they had agreed to participate in the WYO program only because it was not based on an explicit federal contract. 
Further, contracting with a single vendor, as FEMA does under the current NFIP Direct program, might be less expensive but would almost completely eliminate insurance companies’ participation and their network of insurance agents. Experts we spoke with also pointed out that using a contractor to administer the flood program failed in the early 1980s due to the contractor’s lack of experience in administering insurance policies. Finally, contracting with multiple vendors to service flood policies would allow FEMA to keep the WYO network and might make oversight more effective because FEMA would have a contractual relationship with significantly fewer companies. But experts we spoke to said that this structure would encroach on WYOs’ ability to use a subcontractor to administer their flood line. Flood consultants, vendors, and trade groups we spoke to were more receptive to exploring an alternative structure using multiple vendors. Conclusions Given the significant risk exposure to the federal government, it is imperative that FEMA carry out its stewardship responsibilities by effectively and efficiently overseeing the WYO program and the more than 90 participating insurance companies. FEMA has taken some steps to address these issues, including taking into consideration the actual expenses of a selected number of WYOs before changing its method for paying claims expenses and preparing a revised draft of its Control Plan, which had not been updated since 1999. Additional opportunities exist for FEMA to improve its oversight of the WYO program and ensure that payments to the participating insurance companies are based on actual company expenses, thereby improving the program’s cost-effectiveness. However, our review demonstrates the following: FEMA sets rates for paying WYOs for their services without knowing how much of its payments actually cover expenses and how much goes toward profit. 
Specifically, it does not determine in advance the amounts built into the payment rates for estimated expenses and profit; annually analyze the amounts of actual expenses and profit in relation to the estimated amounts used in setting payment rates; or consider the results of the analysis of payments, actual expenses, and profit in evaluating the methods for paying WYOs. Moreover, it does not have a sound basis for its practice of paying WYOs an additional 1 percent of written premiums for operating expenses. As a result, FEMA does not have the information it needs to determine whether its payments to WYOs are reasonable. FEMA has not tied its bonus structure to the long-term strategic goals for the program. As a result, it cannot be assured that the WYO program is achieving its intended goals in the most cost-effective manner. Moreover, FEMA does not collect the information on the WYOs’ marketing efforts, which is needed to determine whether the companies’ marketing efforts are aimed at helping to promote increased participation among targeted groups and in targeted areas in line with NFIP goals. FEMA has not consistently implemented all aspects of its current Control Plan and does not systematically track WYOs’ compliance with the plan’s requirements. As a result, FEMA cannot ensure that the WYOs are fully complying with NFIP requirements, including oversight of the various payments that depend on accurate premiums collected and appropriate claims made. 
Recommendations for Executive Action To provide transparency and accountability over the payments FEMA makes to WYOs for expenses and profits, we recommend that the Secretary of Homeland Security direct the Under Secretary of Homeland Security, FEMA, to determine in advance the amounts built into the payment rates for estimated expenses and profit; annually analyze the amounts of actual expenses and profit in relation to the estimated amounts used in setting payment rates; consider the results of the analysis of payments, actual expenses, and profit in evaluating the methods for paying WYOs; and in light of the findings in this report, immediately reassess the practice of paying WYOs an additional 1 percent of written premiums for operating expenses. To increase the usefulness of the data reported by WYOs to NAIC and to institutionalize FEMA’s use of such data, we recommend that the Secretary of Homeland Security direct the Under Secretary of Homeland Security, FEMA, to take actions to obtain reasonable assurance that NAIC flood insurance expense data can be considered in setting payment rates that are appropriate, including identifying affiliated company profits in reported flood insurance expenses, and develop comprehensive data analysis strategies to annually test the quality of flood insurance data that WYOs report to NAIC. If FEMA continues to use the WYO bonus program, we recommend that the Secretary of Homeland Security direct the Under Secretary of Homeland Security, FEMA, to improve it by considering the use of more targeted marketing goals that are in line with FEMA’s NFIP goals. 
To improve oversight of the WYO program and compliance with program requirements, we recommend that the Secretary of Homeland Security direct the Under Secretary of Homeland Security, FEMA, to consistently follow the Control Plan and ensure that each component is implemented; ensure that any revised Control Plan include oversight of all functions of participating WYOs, including customer service and litigation expenses; and systematically track insurance companies’ compliance with and performance under each component of the Control Plan and ensure centralized access to all the audits, reviews, and data analyses performed for each participating insurance company under the Control Plan. Agency Comments and Our Evaluation We received written comments on a draft of this report in a letter from the Department of Homeland Security’s Director, Departmental GAO/OIG Liaison Office, which is reproduced in appendix III. FEMA concurred with our recommendations regarding (1) the usefulness of the data that WYOs report to NAIC, (2) the alignment of the bonus structure with long-term NFIP goals, and (3) the oversight of the WYO program. First, the letter noted that FEMA would work with NAIC to improve the quality of the flood expense data that WYOs report and would include the data as an additional item in determining the annual WYO expense allowance. Second, the letter stated that FEMA planned to examine the incentive bonus prior to making arrangements with WYOs for 2010 and 2011. FEMA said that this examination is to include an assessment of the incentive’s effectiveness in increasing policies; the need for such an incentive; and possible alternatives to it, including identifying target markets where penetration is low and providing incentives for increasing policies in those markets only. 
Third, FEMA concurred with our recommendations regarding WYO program oversight, although it stated that the litigation, marketing, and customer service reviews were no longer included in the revised Control Plan because they were completed in other ways. Given the newness of these changes, this review did not include an assessment of FEMA’s compliance with these alternative methods or their robustness relative to the Control Plan. Finally, the letter stated that FEMA had implemented new processes to improve the monitoring of WYOs’ compliance with the Control Plan and would continue to look for ways to improve oversight in the future. While the letter did not provide details about the new monitoring processes, we are encouraged by these new steps and will be following up on these activities in our ongoing work. FEMA did not concur with our recommendations on improving the transparency and accountability of payments made to WYOs, specifically our recommendation that FEMA consider WYOs’ actual expenses and profits when setting its payment rates. In its response, FEMA provided its views on issues that it believes impacted our analysis and the conclusions we drew from our work. Also, FEMA discussed why it does not consider actual flood insurance expense information. We disagree with FEMA’s assertion that the issues it raised resulted in our reaching misleading conclusions, and we continue to recommend that when setting payments rates, FEMA should consider actual flood insurance expenses and the profits that result from its payments to WYO companies. Specifically, FEMA stated that our review was limited to only six companies, which FEMA believes are the low-cost operators for the five other lines of insurance used to determine the WYO expense allowance. 
FEMA stated that it seems reasonable that these companies would also have some of the lowest flood operating expenses and, therefore, concluded that the results of our analysis can be expected to significantly understate the operating expenses of the WYO companies as a whole. Our analysis of the expenses and profits of these companies, which represented 53 percent of total net premiums written, 71 percent of total claims losses paid, and 59 percent of total expense payments made by the WYO program for fiscal years 2005 to 2007, demonstrates the importance of information that FEMA does not have about the actual expenses and profits it was paying—information that we consider critical for making decisions regarding the proper administration of NFIP. FEMA stated that we did not perform a review of the stability of the federal flood expenses because the results for other years were not available to us. FEMA also stated that a review of the stability of federal flood expenses would show the inadvisability of reaching any conclusions from just 1 year of data and that basing compensation on a single year of data is always questionable, especially since our analysis, and the adjustments and assumptions we made in conducting our analysis, have not been vetted. However, our analysis showed that variances in profit over the 3 years we reviewed were caused by, among other things, variations in the expenses incurred to adjust and pay claims losses that also fluctuated from year to year. Moreover, we recognize that setting payments based on a single year of data may not be appropriate. Our recommendation that FEMA consider actual flood insurance expenses and profits in setting payment rates would not limit FEMA’s consideration of actual expenses and profits to a single year of data.
We anticipate that FEMA would annually perform an analysis of actual expenses and profits for the current year, and then incorporate that result into its analysis of these data covering the number of years that may be appropriate in the view of FEMA management. The results of the longitudinal analysis would be used to evaluate the rates being used and to determine in advance if a change to the rates is needed. Moreover, we agree that time should be allowed for others, such as the WYO companies and NAIC, to weigh in on the methodology for analyzing payments to the WYOs and their actual flood expenses. Importantly, however, any adjustments we made to the flood expenses reported by the WYOs for the purpose of our analysis were the result of information we obtained from and numerous discussions with WYO company officials. FEMA stated that actual expenses will be as much of a lagging indicator as the current methodology that uses A.M. Best numbers. FEMA also stated that even if actual expense data is considered to be completely reliable, by the time NFIP could use it to lower expense ratios, about 2 to 3 years would have elapsed. FEMA uses the average expenses for five lines of property insurance other than the federal flood line for setting the operating expense payment rate. We recognize that considering WYOs’ actual flood expenses will be a lagging indicator of the costs to service flood insurance policies. However, it will be a better indicator than FEMA’s current methodology precisely because it will not reflect the trend of expenses for other lines of property business. Importantly, data now used to set payment rates based on other lines of business are subject to events and market forces that affect their expense ratios, but which are not relevant to the WYO program.
Our recommendation that FEMA use actual flood expenses to set payment rates would differ from its current methodology in one important aspect: actual expenses, and not a proxy, would be used to set those rates. FEMA stated that our analysis assumes that actual WYO company expenses are stable, which FEMA concludes could yield misleading results. FEMA also stated that during the last 5 years insurance companies have managed to significantly reduce their operating expenses in other lines, and suspects that many of those efficiency gains also made it into companies’ flood insurance operations. Our analysis was not based on any assumptions about the trends in WYO company expenses, in general, or flood expenses, in particular. Rather, we analyzed the actual flood expenses of selected companies over a 3-year period and compared the payments to the companies’ actual flood expenses. As previously indicated, we observed fluctuations from year to year in actual flood expenses—in particular, expenses for adjusting and processing claims. Our recommendation that FEMA consider actual flood expenses and profit when setting payment rates would move FEMA from not knowing (“suspecting”) the trend in actual flood expenses to considering those trends when setting rates, rather than continuing to rely on proxies from other lines of business, and trends in those lines, that may not be relevant to the WYO program. Whether actual expenses are stable or not is therefore not the relevant question. FEMA stated that while we acknowledge in the body of our report that the years we reviewed—2005 to 2007—included the heaviest loss years in the history of the program and that these years are not indicative of typical years for loss adjustment expenses, we do not carry these caveats forward to our conclusions. FEMA stated that this results in a significant distortion of the expense reimbursement to WYO companies for the loss adjustment expenses.
We did consider the unusually high losses in 2005 and 2006 when reaching our conclusion that FEMA sets rates for paying WYOs for their services without knowing how much of its payments cover expenses and how much is for profit. An analysis of actual expenses over time would enable FEMA to identify and correlate trends in actual WYOs’ flood expenses to flood events and related claims losses. In fact, such an analysis could have helped FEMA to determine before the hurricanes of 2004 and 2005 that its method for paying claims processing expenses would result in significant payments in excess of actual expenses in heavy loss years. FEMA also stated that it addressed the problem that led to outsized WYO compensation by changing how WYOs are paid for claims processing expenses—referred to as Unallocated Loss Adjustment Expenses (ULAE). As support, FEMA cited the fact that WYOs’ compensation for ULAE would have been $29 million less and $267 million less in fiscal years 2005 and 2006, respectively, and would have been $9 million more in 2007. This would have been a combined decrease of $287 million for the 3 years had this new payment schedule been in place then. Further, FEMA said that had the new payment schedule been in place in those years, it is likely that most, if not all, of the $155 million in profit from claims adjusting and processing that we reported for the six companies we reviewed would disappear. Prior to 2008, FEMA paid WYOs 3.3 percent of claim losses incurred for claims processing expenses. Beginning in 2008, FEMA began paying the WYOs 1 percent of net premiums written and 1.5 percent of claim losses incurred for their claims processing expenses. Our analysis showed that for the years 2005 to 2007 FEMA paid the six WYOs in our analysis profits of $327.1 million, including $155.2 million for claims adjusting and processing expenses, without knowing the actual flood expenses of any of these companies. 
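The pre- and post-2008 schedules for paying WYOs' claims processing expenses described above reduce to a simple formula change. The sketch below illustrates why the new schedule pays substantially less in a heavy loss year; the premium and loss figures are hypothetical, chosen only for illustration, since the report does not supply per-company inputs.

```python
# Sketch comparing FEMA's pre-2008 and post-2008 schedules for paying
# WYOs' claims processing expenses (Unallocated Loss Adjustment
# Expenses, or ULAE). Rates are taken from the report; the dollar
# amounts below are hypothetical.

def ulae_payment_pre_2008(claim_losses_incurred):
    """Before 2008: 3.3 percent of claim losses incurred."""
    return 0.033 * claim_losses_incurred

def ulae_payment_post_2008(net_premiums_written, claim_losses_incurred):
    """From 2008: 1 percent of net premiums written plus
    1.5 percent of claim losses incurred."""
    return 0.01 * net_premiums_written + 0.015 * claim_losses_incurred

# In a heavy loss year (hypothetical figures), the old schedule pays
# far more because it is tied entirely to claim losses.
premiums = 2_000_000_000    # hypothetical net premiums written
losses = 16_000_000_000     # hypothetical claim losses in a 2005-like year
old = ulae_payment_pre_2008(losses)
new = ulae_payment_post_2008(premiums, losses)
print(f"old schedule: ${old:,.0f}  new schedule: ${new:,.0f}")
```

Because only 1.5 of the old 3.3 percentage points remain tied to losses, the revised schedule dampens the windfall that catastrophic loss years produced under the old formula, consistent with FEMA's reported reductions for fiscal years 2005 and 2006.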
FEMA’s statements that it is not clear how much of its “savings” would have been borne by the six WYOs we reviewed and that FEMA can only speculate as to the effect the change would have on the companies’ profit support our finding that FEMA does not know how much of its payments are for actual flood expenses and how much are for profit. Our point is that FEMA should know how much it is paying for expenses and for profit. In our judgment, considering actual expenses and profit in setting payment rates would result in a fair and equitable treatment of policyholders and the WYO companies over time, as well as serve to better protect the interests of taxpayers who ultimately bear the risk of losses from the WYO program. In discussing why it does not consider actual flood insurance expenses in setting payment rates, FEMA said that the WYO flood insurance program is based on companies’ applying their normal business practices to NFIP, that these practices are bound to vary from company to company, and that it would be impossible for NFIP to accurately calculate actual expenses for 90 companies. FEMA also said that because of these two factors, and the fact that in the early years of the program the companies’ actual flood insurance expenses were not available, the decision was made to use information on other lines of insurance business from A.M. Best as a proxy in setting rates for payments to NFIP companies. FEMA also stated that even now, when some WYOs’ flood insurance expense information is available, FEMA is not certain how accurate this information is, and that its management is skeptical that using actual flood insurance expenses, as GAO recommends, would yield lower payment rates than would result from the proxies that the program uses to set payment rates. FEMA further stated that it will work with NAIC to improve the quality of the flood expense data.
We agree that business practices will vary among the participating companies, and we agree with FEMA’s statement that actual flood insurance expenses of WYOs were not readily available 25 years ago, when the program started. However, the National Association of Insurance Commissioners (NAIC) began requiring that companies report financial information on their federal flood insurance business in 1997. Therefore, continuing to use other lines of business as proxies for setting WYO program payment rates is no longer necessary. Moreover, continuing with the same practice without assessing the reasonableness of the payments made to WYOs by comparing those payments to the WYOs’ actual expenses does not provide sufficient justification or accountability for hundreds of millions of dollars in federal program expenses. We are encouraged by FEMA’s statement that, in the future, it will consider actual flood insurance expenses WYOs report to NAIC as an additional item when determining the annual WYO expense allowance, which is intended to cover the companies’ operating, marketing, and administrative expenses. While this is a positive step, given the changes in the program and available information, we continue to recommend that FEMA consider all categories of expenses when setting payment rates, including payments for commissions, claims adjusting, and other claims-related expenses. Consideration of all categories of actual flood insurance expenses reported by WYOs in setting payment rates for these expenses, as well as the profits that the program pays to the companies for their participation in NFIP, is necessary for FEMA to know whether its payments to the WYOs are reasonable. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days from the report date.
At that time, we will provide copies to the Chairman of the Senate Committee on Banking, Housing, and Urban Affairs; the Chairman and Ranking Member of the Senate Committee on Homeland Security and Governmental Affairs; the Chairman and Ranking Member of the House Committee on Financial Services; the Chairman and Ranking Member of the House Committee on Homeland Security; and other interested committees. We are also sending a copy of this report to the Secretary of Homeland Security and other interested parties. In addition, the report will be available at no charge on our Web site at http://www.gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. If you or your staff have any questions about this report, please contact Orice Williams Brown at (202) 512-8678 or willamso@gao.gov, or Jeanette M. Franzel at (202) 512-2600 or franzelj@gao.gov. GAO staff who made major contributions to this report are listed in appendix IV. Appendix I: Briefing Slides Opportunities Exist to Improve Oversight of the WYO Program We provided a draft of this presentation to FEMA for its review, and FEMA agreed with the content. Appendix II: Scope and Methodology WYO Expenses To assess FEMA’s practice of determining the amounts it pays to WYO insurance companies for their services without considering the companies’ actual expenses, we compared the payments FEMA made to six WYO companies to the companies’ actual flood insurance expenses. Insurance companies report flood insurance expense data in annual statements that are submitted to NAIC, which also include expenses for the companies’ other property and casualty lines of business. The six WYOs we selected wrote flood insurance policies whose premiums totaled approximately 53 percent of the total WYO program premiums in fiscal year 2007. Our sample is not a representative sample of all WYOs, so the results of our analysis cannot be generalized to the universe of WYOs.
We reviewed NAIC and FEMA flood financial information to assess the reliability of the information for our purposes. Because FEMA’s payments to WYOs are determined by applying various proxies to premiums written or claim losses, we identified differences between the written premiums and claim losses that the companies reported to FEMA and NAIC. We obtained from WYO company officials explanations of these differences and determined that they would not significantly impact the companies’ flood expenses. Further, to review the payments and expenses for the six companies selected, we converted FEMA’s fiscal year WYO payment data to calendar year amounts for comparison to calendar year actual expenses reported to NAIC; recalculated, on a test basis, the expense payments reported by the six WYOs to FEMA, using the written premium and claim losses incurred amounts the WYOs reported to FEMA and FEMA’s payment rates, finding no exceptions; and interviewed officials of the WYOs regarding their flood operations, accounting for and assignment of expenses to the flood line, and reporting of flood line data to NAIC. To assist in comparing actual expenses to the expense payments, we adjusted the WYOs’ reported flood expenses in cases where, for example, companies offset their expenses incurred with the payments they received from FEMA. We found that the data the six companies submitted to NAIC and FEMA were, as adjusted by us, sufficiently reliable for our purposes. For the purposes of this audit, we considered profits to be the difference between the amounts paid to the WYO companies and the companies’ actual flood expenses on a pretax basis. In determining profits, we excluded miscellaneous other companywide income and expenses. We did not audit the financial data the six WYOs submitted to FEMA, NAIC, or to us.
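The audit arithmetic described above can be sketched in a few lines: an expense payment is a rate applied to a reported base (written premiums or claim losses incurred), and pretax profit is payments minus actual flood expenses, after adjusting for cases where a company netted FEMA payments against its reported expenses. The rate and dollar amounts below are hypothetical illustrations, not figures from the report.

```python
# Minimal sketch of the GAO comparison of FEMA payments to WYOs'
# actual flood expenses. All rates and amounts are hypothetical.

def expense_payment(base_amount, payment_rate):
    """Recalculate a payment as rate x base (premiums or losses)."""
    return base_amount * payment_rate

def pretax_profit(total_payments, reported_expenses, offset_payments=0.0):
    """Payments minus adjusted actual expenses. FEMA payments that a
    company netted against its reported expenses are added back so
    expenses reflect the full amount actually incurred."""
    adjusted_expenses = reported_expenses + offset_payments
    return total_payments - adjusted_expenses

# Hypothetical company-year: $500M written premiums at an assumed 15%
# operating expense rate; $60M reported expenses with $5M of FEMA
# payments netted against them.
payment = expense_payment(500_000_000, 0.15)
profit = pretax_profit(payment, 60_000_000, offset_payments=5_000_000)
print(f"payment ${payment:,.0f}, pretax profit ${profit:,.0f}")
```

The offset adjustment matters: without adding netted payments back, reported expenses would be understated and the computed profit overstated, which is why the audit adjusted the WYOs' reported flood expenses before comparing them to payments.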
However, the federal flood financial information the companies submitted to NAIC was included in financial statements prepared in accordance with statutory accounting principles that were audited by independent certified public accounting firms, which expressed unqualified opinions for those years covered by our review. We compared amounts in the audited financial statements for calendar year 2005 to 2007 to amounts the companies reported in their annual statements for earned premiums, losses incurred, and underwriting and loss adjustment expenses incurred for all lines of property and casualty insurance. The differences we identified did not significantly impact our analysis. Further, the federal flood financial information the companies submitted to FEMA was included in biennial financial statements prepared in accordance with generally accepted accounting principles that were also audited by independent certified public accounting firms who expressed unqualified opinions. We reviewed the audited biennial financial statements for four of the six companies that had submitted separately audited statements to FEMA. The differences we identified did not significantly impact our analysis. Marketing and Bonuses To evaluate the extent to which bonus payments to WYOs for increasing the number of flood policies they sell were based on WYOs’ actual marketing efforts, we discussed the bonus payment methodology with FEMA staff, WYOs, and other stakeholders and reviewed documents relating to the methodology used to make the bonus payments. We analyzed the bonus payments and evaluated the extent to which they could be attributed to the marketing efforts of WYOs or to other external factors, such as flood events and economic conditions. To determine whether the existing bonus formula benefited WYOs with fewer policies and years in NFIP, we compared those WYOs by size and year in the program to those receiving top bonuses. 
For the 10 WYOs that we selected to interview, we identified those that had submitted marketing plans or undergone a marketing operations review. We also asked whether the bonus was a major factor in their marketing efforts and whether they considered flood insurance to be a primary insurance line. Program Oversight To evaluate FEMA’s compliance with the Control Plan, we discussed procedures with appropriate FEMA staff, requested and reviewed all the documents that were required under the plan, and discussed these requirements with the WYOs and other stakeholders. To address FEMA’s oversight of the WYOs, we selected a sample of 10 WYOs that administered over 50 percent of the flood insurance policies written for the year 2007. Our sample included companies that covered the spectrum of WYOs—for instance, they differed in size based on premiums written, losses incurred, and overall rank in market share and included companies that did and did not use a vendor. We used a data collection instrument to review the required documents for the 10 WYOs selected for our review. Our data collection instrument included the four major components of FEMA’s Control Plan: (1) monthly data and financial reporting; (2) claims reinspections performed by FEMA’s contractor; (3) various audits by independent CPAs, including required biennial audits, audits for cause, and state insurance department audits; and (4) triennial operation reviews performed by FEMA staff. We used the 1999 Control Plan that was being used at the time of our review; NFIP has a draft plan that it began developing in 2007. Alternative Administrative Structures In consultation with congressional staffers, we identified three possible alternatives that would incorporate a competitive feature.
To evaluate the advantages and disadvantages of the three alternatives to the WYO program, we discussed the alternatives with staff within GAO; the WYOs in our sample; FEMA staff; and other stakeholders, such as flood insurance vendors and consultants. We conducted this audit from December 2007 to July 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence we obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix III: Comments from the Department of Homeland Security Appendix IV: GAO Contacts and Staff Acknowledgments Staff Acknowledgments In addition to the contacts named above, the following individuals made key contributions to this report: Andrew E. Finkel (Assistant Director), Robert Owens (Assistant Director), Grace Haskins (Analyst-in-Charge), H. Donald Campbell, Emily Chalmers, Tony Eason, Fredrick Evans, Jeffrey Isaacs, Scott McNulty, Marc Molino, Roberto Pinero, Paul Revesz, and Carrie Watkins.
Since 2004, private insurance companies participating in the Federal Emergency Management Agency's (FEMA) Write-Your-Own (WYO) program have collected an average of $2.3 billion in premiums annually and, of this amount, have been paid or allowed to retain an average of $1 billion per year. Questions have been raised about FEMA's oversight of the program in light of the debts FEMA has incurred since the 2005 hurricanes. GAO placed NFIP on its high-risk list and issued several reports addressing the challenges the program faces. This report addresses the methods FEMA uses for determining the rates at which WYOs are paid, its marketing bonus system for WYOs, its adherence to financial control requirements for the WYO program, and alternatives to the current system. To do this work, we reviewed and analyzed FEMA's data and policies and procedures and obtained the views of select WYOs and flood insurance experts. FEMA does not systematically consider actual flood insurance expense information when it determines the amount it pays the WYOs for selling and servicing flood insurance policies and adjusting claims. Rather, since the inception of the WYO program, FEMA has used various proxies for determining the rates at which it pays the WYOs. Consequently, FEMA does not have the information it needs to determine (1) whether its payments are reasonable and (2) the amount of profit to the WYOs that is included in its payments. When GAO compared expense payments FEMA made to six WYOs to the WYOs' actual expenses for calendar years 2005 through 2007, we found that the payments exceeded actual expenses by $327.1 million, or 16.5 percent of total payments made. Considering actual expense information would provide transparency and accountability over payments to the WYOs. FEMA has not aligned its bonus structure with its long-term goals for the program.
The WYOs generally offered flood insurance when requested but did not strategically market the product as a primary insurance line. FEMA has not set explicit marketing goals beyond a goal of 5 percent annual policy growth, and the WYO program primarily rewards companies that are new to NFIP for sales increases that may result from external factors, including flood events. The Government Performance and Results Act states that when results could be influenced by external factors, agencies can use intermediate goals to measure contributions to specific goals. Paying bonuses based on such intermediate targeted goals could bring the bonus structure more in line with FEMA's goals for NFIP. FEMA has explicit financial control requirements and procedures for the WYO program but has not implemented all aspects of its Control Plan. FEMA provides guidance for WYOs that is intended to ensure compliance with the statutory requirements for the NFIP and contains checks and balances to help ensure that taxpayer funds are spent appropriately. FEMA performed most of the required biennial audits and underwriting and claims reviews but did not perform most of the required audits for cause; state insurance department audits; and marketing, litigation, and customer service operational reviews. In addition, FEMA did not systematically track the outcomes of the various audits, inspections, and reviews that it performed for the 10 WYOs included in this review of FEMA's oversight of the program. Because FEMA does not implement all aspects of the Control Plan, it cannot ensure that the WYOs are fully complying with program requirements.
Three alternative administrative structures could replace NFIP's payment arrangement with a competitively awarded contract that could lower costs for selling and servicing flood insurance policies and administering claims: (1) contracting with one or more insurance companies, (2) contracting with a single vendor, or (3) contracting with multiple vendors and maintaining the WYO network. Each alternative involves trade-offs in terms of the impact on the program's basic operations that would have to be considered.
Background Before proceeding further, Mr. Chairman, I would like to briefly explain the operation of FBF, which is administered by GSA’s Public Buildings Service (PBS). In 1975, FBF replaced appropriations to GSA as the primary means of financing the operating and capital costs associated with federal space owned or managed by GSA. PBS charges federal agencies rent, the receipts of which are deposited in FBF. Congress exercises control over FBF through the annual appropriations process, which sets annual limits on how much of the fund can be expended for various activities. In addition, Congress may appropriate additional amounts for FBF. The specific activities the fund is used for include space acquisition and the operation, maintenance, and repair of, and improvements to, government-owned and -leased buildings managed by GSA. FBF rent revenues have grown from about $2.5 billion in fiscal year 1987 to about $4.8 billion in fiscal year 1997. Each year, as part of the budget process, PBS calculates a revenue estimate that includes an estimate of rental revenue. Under the federal budget process, PBS’ initial rental income estimate for a given fiscal year is made 18 months in advance. For example, PBS’ initial rental income estimate for fiscal year 1997 was made in the spring of 1995, using rental revenue estimates for fiscal years 1995 and 1996 as a starting point. At that time, however, PBS did not yet have actual rental income data for all of fiscal year 1995, which had not yet ended, and it also did not have actual rental income data for fiscal year 1996, which had not yet begun. Accordingly, as a starting point, PBS updated and adjusted previous estimates it had made for fiscal years 1995 and 1996 based on the most recent information it had at the time, and it made assumptions about events it believed would affect rental revenues in fiscal year 1997.
Updated information PBS compiled for fiscal years 1995 and 1996 included expected (1) annualized changes in space inventories for fiscal year 1995, (2) changes in the inflation rate, and (3) changes in building delegations under which agencies pay their own building operating costs and are refunded from FBF for the operating cost portion of their rent payments. PBS obtained this information from GSA’s regions and various other sources. PBS’ assumptions included such factors as using a uniform national average for the number of months new space would be occupied as well as the rental rate. Accuracy of PBS’ Rental Revenue Estimates As you can see, Mr. Chairman, forecasting rental revenues is a complex process carried out 18 months in advance that must be made with incomplete information for the 2 years preceding the year being estimated and is based on assumptions about future events. Given these circumstances, it would be reasonable to expect some differences between actual and estimated rental revenues. In this regard, PBS’ historical trends of estimated revenue versus actual revenue show actual rent revenue (1) exceeded estimated rental revenues for each year from fiscal year 1987 through fiscal year 1993—except for fiscal year 1992, when actual rental revenues were less than estimated rental revenues by 1.6 percent—and (2) differed from the estimate by less than 2 percent for each year during this period, except for fiscal year 1990, when actual revenue exceeded the estimate by 2.8 percent. However, for each of fiscal years 1994 through 1997, PBS’ data showed that annual actual rental revenues were less than estimated rental revenues, ranging from an overestimate of about $110.7 million, or 2.4 percent, in fiscal year 1995, to an overestimate of about $422.1 million, or 8.2 percent, in fiscal year 1996. (See app. II.) 
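The over- and underestimates cited above are variances of actual rental revenue from the estimate, expressed as a percentage. A minimal sketch follows, assuming the report's percentages are computed relative to the estimated amount; the fiscal year 1996 estimate used in the example is back-solved from the reported $422.1 million and 8.2 percent figures, so it is approximate rather than sourced.

```python
# Variance of actual rental revenue from the estimate, as a percent of
# the estimate. A positive result is an overestimate (estimate exceeded
# actual revenue); a negative result means actual exceeded the estimate.

def estimate_variance_pct(estimated_revenue, actual_revenue):
    return (estimated_revenue - actual_revenue) / estimated_revenue * 100

# Fiscal year 1996 (amounts in millions of dollars): the reported
# overestimate of about $422.1 million, or 8.2 percent, implies an
# estimate of roughly $5,148 million. Back-solved, not sourced.
estimated_fy1996 = 5_147.6
actual_fy1996 = estimated_fy1996 - 422.1
print(f"{estimate_variance_pct(estimated_fy1996, actual_fy1996):.1f}% overestimate")
```

Under the same convention, fiscal year 1992's result, in which actual revenue fell short of the estimate by 1.6 percent, would appear as a positive 1.6, while the years in which actual revenue exceeded the estimate would appear as negative values.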
For fiscal years 1994 and 1995, PBS overestimated rental revenues by a combined total of $308.1 million, which, according to PBS’ Chief Financial Officer, was absorbed by reducing planned expenditures and using carryover balances without the need for congressional action. In addition, PBS reported a combined overestimate of $773.5 million for fiscal years 1996 and 1997, which PBS dealt with as I will explain in a moment. It is important to note that the fiscal year 1996 and 1997 overestimation from the historical analysis does not match the amount reported to Congress in January 1997 because the revenue estimates were made at different times. In January 1997, PBS expected the total overestimation for fiscal years 1996 and 1997 to be $847 million. Subsequently, in July of 1997, PBS increased the anticipated overestimation for fiscal year 1997 by $86.8 million and reported an anticipated overestimation for fiscal year 1998 of about $109.2 million. This brought the total anticipated overestimation for fiscal years 1996 through 1998 to about $1.04 billion. PBS planned to reduce expenses in fiscal years 1997 and 1998 by deferring planned expenditures until later years to offset the remaining $359.5 million. However, after it had closed its fiscal year 1997 books, PBS reported the actual budget impact of its overestimation to be $634.4 million for fiscal years 1996 and 1997, and reduced its fiscal year 1998 overestimation to $28.3 million. Weaknesses in PBS’ Revenue Forecasting Process As indicated, in January 1997, GSA informed Congress that it expected its total overestimate of rental revenue for fiscal years 1996 and 1997 to be $847 million. As shown in appendix I, PBS identified seven reasons for the overestimation and linked specific dollar amounts of the overestimate to each reason. For example, PBS attributed $209 million of the $847 million overestimate to rental reductions in fiscal year 1995 in 18 metropolitan areas that had not been factored into its original estimates.
PBS provided documentation supporting the amount of the overestimation for six of the seven reasons. PBS could not provide data showing how the amount—$86 million—attributed to the remaining reason (i.e., that the original fiscal year 1995 rent revenue estimate was generally higher than actual fiscal year 1995 revenues) was developed. Although we examined the documentation PBS provided, we did not trace all the data compiled by PBS to explain its overestimation back to the original source documents. In July 1997, PBS reported increased overestimates of rental revenue for fiscal years 1997 and 1998 totaling $196 million, which, if accurate, would have brought the total overestimation for fiscal years 1996 through 1998 to over $1 billion. However, PBS did not identify the causes of the increased overestimation, and in January 1998, PBS identified the actual fiscal year 1997 budget impact of the overestimate for fiscal year 1997 to be only about $14.1 million and the estimated budget impact of the revised fiscal year 1998 overestimation to be about $28.3 million, for a combined total of $42.4 million. We relied on interviews with, and information provided by, GSA employees to reconstruct the reasons for deficiencies in the rental revenue estimates. To illustrate, one of the seven reasons for the overestimation identified by PBS was a change in assumptions about costs that was made in 1995 relative to the fiscal year 1997 rent revenue estimate. For example, one assumption changed was the estimated time that increased government-owned space would be occupied in a fiscal year. The time was changed from 6 months to 9 months, which resulted in an increase in the overestimation of rental revenue. However, PBS staff said that they could not recall who had authorized the change in the assumptions. Subsequently, PBS officials advised us that responsibility for the change in the assumptions was borne by the then PBS Commissioner. We could not locate any documentation explaining why the change was made.
GSA’s Inspector General also noted that PBS lacked documentation for the assumptions made and methodology used to increase the revenue gap expected for fiscal year 1997. Use of national averages, rather than project-specific data, to forecast occupancy schedules and rental rates: For fiscal years 1996 and 1997, PBS reported that its use of national averages (which caused estimates of government-owned space increases to be too high) accounted for $142 million of the $847 million overestimation. For example, PBS assumed that it would receive rent for all space coming on line, for 9 months of the year, at the national average per square foot rental rate. In using national averages, PBS relied on less accurate data for estimating than if it had used project-specific data. In addition, in its calculations, the national average rental rate for government-owned space was changed from $40 to $44 per square foot without supporting documentation explaining the reason for the change. Additional problems with PBS’ rental revenue estimation process have also been identified. For example, in July 1997, Arthur Andersen reported that PBS lacked documentation for its budget methodology, including FBF, and had problems with its information and analysis systems as well as its pricing policies and practices. GSA’s Efforts to Improve Its Revenue Projection Process According to a PBS official, PBS will issue a directive on documenting the rental estimating process by April 1998. In our review of PBS’ fiscal year 1999 rental revenue estimate, we found that documentation had been prepared on the decisions, assumptions, and steps involved in the process.
Office of Financial and Information Systems (FIS), with overall responsibility for the rental revenue forecasting process, was established: In July 1997, GSA issued an order establishing FIS with one of its responsibilities being to forecast rental revenue and monitor revenue status monthly to determine whether predictions of inventory changes, other technical assumptions, and income collected are occurring as anticipated. Within FIS, there is a Rent Team, with six people (two additional positions are authorized, but not yet filled) responsible for executing these duties. In the past, forecasting revenue was treated as a part-time task, and monitoring was done quarterly. In addition, in April 1997, PBS hired a Chief Financial Officer to oversee FIS. Also, each of GSA’s regions was directed to appoint a revenue manager to be responsible for this issue. According to a PBS official, as of February 20, 1998, 10 of the 11 regions have done so.

Project-specific data is to be used in occupancy schedules and rental rates instead of national averages: In July 1996, PBS instructed each of its regions to submit monthly data on the changes expected in occupancy and rental rates for each property in its inventory. This would include known changes caused by government downsizing. However, PBS cautions that general estimates on downsizing that are not project-specific are too speculative to be used in making rental revenue forecasts. PBS intends to use project-specific data, whenever possible, to provide a more realistic fact-based estimate on which to base future rental revenues.

New information system is being implemented to manage, track, and access data, with plans for a revenue forecasting module to be added to the system: By January 1998, PBS had installed its new information system, called the System for Tracking and Administering Real Property (STAR), in all its regions.
PBS expects STAR to generate a more accurate inventory and greater integration of financial and operational data. According to PBS officials, they plan to develop a rental revenue forecast module for STAR. They expect this module to be completed by the spring of 1999 and to be used to develop the revenue projections for the fiscal year 2001 budget.

As PBS has noted, and we agree, because its rental revenue estimate is a forecast, it is unlikely to produce an estimate that is identical to actual rental revenue. While some variance is to be expected in any estimating process, variances that go beyond a certain level can be indicative of problems that need to be addressed. In this regard, we noted that PBS has not established an acceptable margin of error against which it can measure the success of its estimation process. Having such a benchmark, we believe, would put PBS in a better position to identify variances that need to be investigated so that it can explore and fix the causes of excessive variances, improve its estimation process, and determine its effectiveness over time.

Recommendation

We recommend that the Commissioner, PBS, establish an acceptable margin of error for its rental revenue estimates, as well as a process for exploring and resolving causes of variances outside the margin adopted. Mr. Chairman, that concludes my prepared statement. I will be happy to answer any questions the Subcommittee may have.

PBS Reasons for the Overestimation of Revenue for Fiscal Years 1996-1998

- Less leased expansion space was delivered than was expected, and at later dates than expected.
- Fiscal year 1995 rental reductions in 18 metropolitan areas were not factored into the original estimates.
- Estimates of the effect of government-owned-space increases were too high.
- The fiscal year 1995 rental revenue estimate was generally higher than actual fiscal year 1995 revenues. Because of the timing of the budget, these high estimates were used as the basis for fiscal years 1996 and 1997 projections.
- Assumptions concerning the costs of leased and government space were changed to make them less conservative.
- A technical error was made in calculating the effect of indefinite authority in the rental of space.
- Rental revenue decreases from buildings, or portions of buildings, becoming unoccupied were not factored into the original estimate.
- In July 1997, GSA increased its estimate of the fiscal year 1997 overestimation but did not identify the causes.
- In July 1997, GSA identified an overestimation for fiscal year 1998 but did not identify the causes.

FBF Estimated and Actual Rental Income for Fiscal Years 1987-1998

Note 1: Rental income does not include reimbursables, outleasing, and miscellaneous income; therefore, these numbers are less than rental revenue in GSA’s financial statements. This historical analysis does not match revenue overestimation reported to Congress in January 1997 because the revenue estimates were made at different times.

Note 2: N/A = Not available.
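The variance check envisioned in the recommendation above can be sketched simply: compare each year's estimate to actual revenue and flag any variance beyond an adopted margin of error. This is an illustrative sketch only; the 5 percent margin and the function name are our own assumptions, not a figure GAO or PBS adopted.

```python
def flag_excessive_variance(estimated: float, actual: float,
                            margin_pct: float = 5.0) -> bool:
    """Return True when the estimate misses actual revenue by more than
    the adopted margin of error (expressed as a percent of actual)."""
    variance_pct = 100.0 * abs(estimated - actual) / actual
    return variance_pct > margin_pct

# A hypothetical year: a $5.4 billion estimate against $5.0 billion actual
# is an 8 percent variance, which a 5 percent margin would flag for review.
print(flag_excessive_variance(5.4e9, 5.0e9))  # True
```

Variances flagged this way would then feed the recommended process for exploring and resolving their causes.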
Pursuant to a congressional request, GAO discussed the General Services Administration's (GSA) overestimation of its rental revenue projections for the Federal Buildings Fund (FBF) in fiscal years (FY) 1996, 1997, and 1998, and the actions it is taking to improve its future revenue projections. GAO noted that: (1) GSA had documentation supporting the dollar amounts it attributed to six of the seven reasons it reported for the overestimation of rental revenue; (2) in addition, GAO and others identified several weaknesses in GSA's rental revenue estimation process; (3) GSA was aware of these problems and has taken corrective actions which GAO believes, if effectively implemented, should help improve future rental revenue estimates; and (4) further, GSA recently reported the actual budget impact of its rental revenue overestimation to be $634.4 million for FY 1996 and FY 1997, and substantially reduced its anticipated FY 1998 overestimation.
Background

Older Americans—those in or approaching retirement—and other borrowers who default on their federal student loans are subject to a number of actions by Education to recover outstanding debt. Borrowers may elect a voluntary repayment option to avoid involuntary collection efforts, such as Social Security offsets (see table 1). While Education administers federal student loans, other agencies may become involved in the event that a borrower fails to make repayment. For example, as described in table 1, Education coordinates with Treasury to offset a portion of federal payments to borrowers who have not made scheduled loan repayments. Federal payments subject to offset include federal tax refunds, certain monthly benefits—such as Social Security retirement and disability payments—and wages and retirement benefits for federal employees.

The Debt Collection Improvement Act of 1996 centralized the collection of nontax debt, including defaulted federal student loans, at Treasury. Specifically, the Treasury Offset Program within Fiscal Service carries out the transactions for offsetting all federal payments for nontax debt. Offsets for student loan debt through the Treasury Offset Program began in 1999 and were first applied to Social Security benefits starting in 2001. After a defaulted loan is certified as eligible for offset, certain federal payments, such as any available tax refunds, are offset immediately. Borrowers with monthly federal benefits available for offset, such as Social Security benefits, are informed by mail that their benefits will be offset in 60 days and again 30 days before the offset is taken, allowing borrowers an additional 2 months to resume payment on their loan before offset begins. In addition, Education sends a notice which provides details on the loans eligible for offset and describes options a borrower has to avoid offset. Treasury assesses a fee for each offset transaction, which is subtracted from the offset payment.
For fiscal year 2015, Treasury’s fee was $15 for each monthly offset of benefit payments and $17 for a single tax refund offset.

Monthly Social Security benefit payments that are eligible for offset are the primary source of income for many older Americans at or near retirement. According to the Social Security Administration (SSA), Social Security benefits accounted for 90 percent or more of income for about 1 in 3 beneficiaries age 65 and older in 2014. Social Security’s retirement benefits, which individuals may claim as early as age 62, provide monthly income based on an individual’s work and earnings history and are intended to help ensure an adequate retirement income. Disability benefits replace a portion of an eligible worker’s income if the worker is unable to work due to a long-term disability. When individuals receiving Social Security disability benefits reach Social Security’s full retirement age—currently age 66 for people born in 1943-1954—their benefits convert from disability to retirement. Both types of monthly Social Security benefits are eligible for offset if the beneficiary is in default on a federal student loan. Social Security’s Supplemental Security Income benefits, which provide monthly cash assistance for eligible individuals with limited financial means, have been exempted from offset.

Certain borrowers may be eligible to discharge their federal student loan debt because they are totally and permanently disabled, regardless of whether or not they are in default. For example, borrowers of any age receiving Social Security disability benefits are eligible for a Total and Permanent Disability (TPD) discharge if SSA has determined that they have a disability for which medical improvement is not expected. Borrowers who are approved for a TPD discharge are generally subject to a 3-year monitoring period during which the discharged loans may be reinstated for several reasons, including that the borrower earned income over a specified threshold.
The value of the discharged loan is generally treated as taxable income at the close of the 3-year monitoring period.

The Debt Collection Improvement Act of 1996 specified limits on the amount that Treasury can offset from monthly federal benefits. In 1998, Treasury further exempted all but 15 percent of Social Security benefit payments from offset. As a result, the amount of allowable offset is the lesser of 15 percent of the monthly benefit payment, the amount by which the benefit payment exceeds $750 per month, or the outstanding amount of the debt. For example, if a borrower with a Social Security benefit of $1,000 per month owes more than $150 in student loan debt, the borrower would have an offset of $150. This is because $150—equivalent to 15 percent of the benefit—is less than the amount of the benefit over $750, which is $250. In addition to the offset threshold, creditor agencies, such as Education, are permitted to grant relief in cases of financial hardship by certifying to Treasury that the offset allowable by law would result in financial hardship. Education established such a process in 2002 to grant financial hardship exemptions or reductions in offset.

Education’s Collection Efforts in Fiscal Year 2015

According to Education data for fiscal year 2015, about $4.5 billion was collected by Education, private collection agencies, and guaranty agencies on defaulted federal student loans, excluding loan rehabilitations and consolidations. About half of this amount came from offsets of any federal payments through the Treasury Offset Program, including but not limited to Social Security offsets (see fig. 1). Just over 30 percent of Education’s total collections came from administrative wage garnishment, and about 20 percent came from voluntary payments made by borrowers who may have been in the process of making the required number of on-time monthly payments to eventually rehabilitate or consolidate their loans and emerge from default.
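The allowable-offset rule described above (the lesser of 15 percent of the monthly benefit, the amount of the benefit over the $750 threshold, or the outstanding debt) can be sketched in a few lines. This is an illustrative calculation only, not Treasury's implementation; the function and parameter names are our own.

```python
def monthly_offset(benefit: float, debt: float,
                   threshold: float = 750.0, rate: float = 0.15) -> float:
    """Allowable monthly offset: the least of 15 percent of the benefit,
    the amount of the benefit above the threshold, or the remaining debt.
    Benefits at or below the threshold are not offset at all."""
    return max(0.0, min(rate * benefit, benefit - threshold, debt))

# The example from the text: a $1,000 benefit and more than $150 of debt
# yields a $150 offset, since 15 percent of $1,000 ($150) is less than the
# $250 by which the benefit exceeds $750.
print(monthly_offset(benefit=1000.0, debt=5000.0))  # 150.0
```

Note that this sketch omits operational details such as the minimum amount below which Treasury will not initiate an offset.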
According to Education officials, borrowers may also make voluntary payments to avoid being subject to other collection actions, such as administrative wage garnishment. In addition to these collections, Education publicly reports recoveries from defaulted loans when they are successfully rehabilitated or consolidated.

Federal Student Loan Debt and Rates of Default, and Offset among Older Americans

Fewer older Americans hold student loan debt, but the rate of increase in the number of older borrowers and the amount of their debt has far outpaced that of younger borrowers. According to Education data for fiscal year 2015, there were about 37.4 million borrowers under age 50 compared to about 6.3 million borrowers age 50 to 64 and 870,000 borrowers age 65 and older. Since fiscal year 2005, these figures represented an increase in the number of borrowers in the age 50 to 64 and 65 and older groups of 119 percent and 385 percent, respectively. In comparison, the growth rate for borrowers age 25 to 49 was 62 percent over this time period. The corresponding increase in the amount of federal student loan debt held by borrowers age 50 to 64 was from about $43 billion to $183 billion over this decade, more than a three-fold increase. Among borrowers age 65 and older, the increase in the amount of federal student loan debt was even larger—it grew from more than $2 billion in fiscal year 2005 to almost $22 billion in fiscal year 2015, about a ten-fold increase. The loans on which older borrowers have defaulted may have either been for their own education or for their children’s education through Education’s Direct PLUS Loan program.

In fiscal year 2015, a greater share of older borrowers than younger borrowers was in default on their student loan debt and became subject to offset from any federal payment, including federal tax refunds and Social Security benefits.
As shown in figure 2, the share of borrowers age 65 and older in default and offset in fiscal year 2015 was 37 percent and 5 percent, respectively. By contrast, the share of borrowers under age 50 in default and offset was 17 percent and 2 percent, respectively.

Growth in Social Security Offsets and Prevalence of Disability Benefits

In addition, our analysis of data we linked from Education, Treasury, and SSA shows that the number of borrowers, especially older borrowers, who have experienced offsets of Social Security benefits to repay defaulted federal student loans has increased over time. From fiscal years 2002 through 2015, the number of defaulted federal student loan borrowers of any age with Social Security offsets increased from about 36,000 to 173,000. For those under age 50, the number of borrowers with Social Security offsets increased from about 15,000 to 59,000 over this time period—a three-fold increase. For those in the age 50 to 64 and 65 and older groups, the increase was greater—about 407 percent and 540 percent, respectively. In total for fiscal year 2015, about 114,000 borrowers age 50 and older had Social Security disability, retirement, or survivor benefits offset to repay defaulted federal student loans.

Among those subject to Social Security offsets, most received disability benefits rather than retirement or survivor benefits. In fiscal year 2015, 69 percent of defaulted borrowers of any age whose Social Security benefits were offset received disability benefits, including 80 percent of those 50 to 64. Since disability benefits are automatically converted to retirement benefits once beneficiaries reach their full retirement age, the vast majority—95 percent—of defaulted borrowers age 65 and older received retirement or survivor benefits in fiscal year 2015. Of these borrowers, about 23 percent had previously received disability benefits.
Older Americans Often Had Held Student Loan Debt for Decades Prior to Offset, and Many Had the Maximum Possible Amount Withheld through Social Security Offset

Among borrowers 50 and older at the time of their initial Social Security offset, about 43 percent had held their student loans for 20 years or more. Three-quarters of older borrowers owed loans only for their own education, and most owed less than $10,000 at the time of their initial offset. The typical monthly offset was slightly more than $140 for older Americans, and almost half of those had the maximum possible reduction, equivalent to 15 percent of their Social Security benefit payment. From 2004 to 2014, the population of older Americans in Social Security offset became increasingly composed of those with Social Security income below the median benefit amount.

Many Older Americans Had Held their Student Loans for 20 Years or More at the Time of Initial Offset

About 43 percent of older student loan borrowers with a Social Security offset had held their student loans for 20 years or more, and about 80 percent had held their loans for 10 years or more. According to linked data from the Treasury Offset Program, Education’s National Student Loan Data System, and the Social Security Administration from fiscal years 2001 through 2015, the length of time borrowers had held student loans that were in default at the time of their first Social Security offset payment varied as shown in figure 3.

These older borrowers generally took out their loans at traditional mid-career working ages, and relatively few of them took out their loans at a traditional college-going age. Across all borrowers 50 or older, 61 percent became subject to offset for loans taken out in their 30s and 40s. For borrowers 50 to 64 at the time of their initial offset, 8.2 percent had outstanding loans that were taken out when they were under 25.
Among borrowers 65 and older, 1.4 percent had outstanding loans that were taken out when they were under 25.

Most Older Americans Subject to Social Security Benefit Offset Took Out Loans for Their Own Education and Owed Less than $10,000 at the Time of Initial Offset

Older borrowers who became subject to Social Security offsets predominantly defaulted on loans for their own education. Among older borrowers subject to offset of their Social Security benefits, more than three-quarters had defaulted on loans they took out for their own education rather than on loans they took out for a child’s education, known as Parent PLUS loans. For borrowers 50 to 64 at the time of initial offset, 82 percent had only ever held loans taken out for their own education. A greater proportion of borrowers 65 or older had Parent PLUS loans, but even among this group, about two-thirds of the borrowers never had Parent PLUS loans.

Total federal student loan debt for most older Americans who became subject to offset was less than $10,000, while a small percentage owed $50,000 or more. Initial balances tended to be slightly higher among borrowers 65 and older at the time of their initial offset compared to those 50 to 64 (see fig. 4).

Many Older Americans Subject to Social Security Offset for Student Loan Debt Have the Maximum Amount Withheld

About 44 percent of borrowers 50 and older at the time of their initial offset saw the maximum possible amount of their Social Security benefit withheld, equal to 15 percent of their benefit payment. The offset for the remaining 56 percent was less than the maximum 15 percent of their benefit payment. Most of these borrowers had between 10 and 15 percent of their benefit payment offset. A small proportion of borrowers (about 5 percent of those 50 to 64 and 4 percent of those 65 and older) were approved for a financial hardship reduction and paid a reduced amount of offset compared to what they would have otherwise.
The typical monthly Social Security benefit offset for older Americans across fiscal years 2001 through 2015 was slightly more than $140. The minimum amount was $25, which is the lowest amount at which Treasury will initiate an offset. For borrowers 65 or older at their initial offset, monthly payments ranged up to about $240 (see fig. 5). At the median, monthly offsets were similar for those 50 to 64 and 65 and older—$142 and $146, respectively.

Older Americans in Social Security Offset Increasingly Have Social Security Income Below the Median Benefit Amount

A growing share of Social Security beneficiaries is potentially subject to offset because the share of beneficiaries who have benefits below the protected threshold of $750 has declined. Because of the offset threshold, those receiving monthly benefits of $750 or less who hold defaulted federal student loans are not subject to offset. However, unlike Social Security benefits, which are increased on an annual basis through cost of living adjustments, the Social Security offset threshold of $750 has not been adjusted. As the relative value of the offset threshold has declined over time, it applies to a smaller share of Social Security beneficiaries. Across all Social Security beneficiaries in 2004, about 42 percent of those receiving Social Security disability benefits and about 33 percent of those receiving retirement benefits had monthly benefits of less than $750 a month and thus could not become subject to offset. By 2014, however, the share of all beneficiaries below the $750 threshold had fallen to 19 percent of disability beneficiaries and 16 percent of retirement beneficiaries. Over time, the population of older Americans in Social Security offset has become increasingly composed of those with Social Security incomes below the median benefit amount.
In 2004, 21 percent of older Americans subject to Social Security offset received benefits that would have placed them in the bottom half of the overall benefits distribution before considering the amount withheld through offset (see table 2). By 2014, about 60 percent of older Americans subject to offset received benefits that, prior to offset, would place them below the median Social Security benefit amount—about $1,070 for disability beneficiaries and $1,320 for retirement beneficiaries in 2014.

Social Security Offsets Were a Small Share of Education’s Collections and Primarily Paid down Fees and Interest as Many Borrowers Remained in Default after 5 Years

A small share of Education’s total collections from the Treasury Offset Program came from Social Security offsets. Nearly three-quarters of the collections through Social Security offset were applied to Treasury Offset Program fees and to interest on the remaining loan balance, rather than to loan principal. With respect to outcomes for older borrowers, about half of borrowers remained in offset for 1 year or less while others remained in offset for multiple years. Over a 5-year period after becoming subject to Social Security offset, nearly one-third of older borrowers were able to pay off their loans or obtain a disability discharge. However, other older borrowers remained in default on their student loans, and some had their loan balances increase over time despite the reductions to their Social Security benefits.

Social Security Offsets Were a Small Share of Education’s Collections through the Treasury Offset Program, but a Relatively Larger Share of These Offsets Went toward Program Fees

Data from Treasury show that Education collected about $171 million in Social Security offsets in fiscal year 2015, which amounted to a small share of the agency’s total collections from the Treasury Offset Program (see fig. 6). In total, Education collected almost $2.3 billion from offsets of any kind.
The $171 million collected from Social Security offsets was equivalent to about 8 percent of this total. The vast majority of offsets for Education debt—nearly $2.1 billion, or about 91 percent—were from federal tax refunds. As shown in figure 6, a relatively larger share of the total amount collected through Social Security offsets went toward Treasury Offset Program fees. In fiscal year 2015, the fee for each Social Security offset was $15 compared to $17 for each federal tax refund offset. Because offset fees are assessed per transaction, a borrower subject to monthly Social Security offsets could pay up to $180 per year in fees compared to $17 for a single federal tax refund transaction. According to data from Treasury, offset fees collected through the Treasury Offset Program amounted to about 11 percent of Social Security offsets collected for Education debt compared to 1 percent for federal tax refund offsets.

More Than 70 Percent of the Amount Collected through Social Security Offset Was Applied to Borrowers’ Fees and Interest

Collections on defaulted student loans through Social Security offset were applied primarily to borrowers’ fees—including Treasury Offset Program fees, as well as other fees charged to defaulted borrowers by Education—and interest. Treasury’s Bureau of the Fiscal Service retains the Treasury Offset Program fee and sends the remainder of the offset to Education. Education officials said that for each student loan, it applies the offset first to any outstanding fee balance, then to accrued interest, and then to principal. Of the approximately $1.1 billion collected through Social Security offsets from fiscal years 2001 through 2015 from borrowers of all ages, about 71 percent was applied to fees and interest—12 percent to fees and 59 percent to interest—compared to 28 percent that was applied to principal.
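Because the fee applies to each transaction, the gap between monthly benefit offsets and a one-time tax refund offset compounds over a year. A minimal sketch of the arithmetic, using the fiscal year 2015 fees cited above (the variable names are illustrative):

```python
# Fiscal year 2015 per-transaction Treasury Offset Program fees (from the text).
SOCIAL_SECURITY_OFFSET_FEE = 15  # dollars per monthly benefit offset
TAX_REFUND_OFFSET_FEE = 17       # dollars per (typically one-time) tax refund offset

# A borrower offset every month pays the fee up to 12 times per year.
annual_social_security_fees = SOCIAL_SECURITY_OFFSET_FEE * 12
print(annual_social_security_fees)  # 180 dollars, versus 17 for a single refund offset
```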
Among borrowers 50 or older at the time of initial offset, 53 percent had no portion of their offset payments applied to principal. This figure was even higher among older borrowers whose monthly benefit was below the poverty guideline prior to offset—68 percent of these borrowers had the full amount of their offset payments applied to fees and interest only. In contrast, 23 percent of borrowers 50 and older had the majority of their offset payments applied to principal. These borrowers came disproportionately from those whose monthly benefit was above the poverty guideline even after offset.

About Half of Older Americans with Defaulted Student Loans Remain Subject to Social Security Offsets for 1 Year or Less, but Those with Larger Balances Tend to Remain Longer

About half of older Americans who had Social Security offsets to repay student loan debt were no longer subject to offset within a year, while slightly more than one-third remained subject to offset for 2 years or more. Looking across the approximately 126,000 borrowers 50 and older whose initial Social Security offset was in fiscal years 2001 through 2010, 45 percent were subject to offset for a year or less, including 8 percent who were only subject to offset for a single Social Security payment. In contrast, 37 percent had their Social Security benefits reduced for multiple years. Specifically, about 25 percent were in offset for 2 to 5 years and 12 percent were in offset for 5 years or more. Results were similar between borrowers 50 to 64 and borrowers 65 and older (see fig. 7).

Older borrowers who were subject to offset for shorter periods tended to owe less than those who remained subject to offset for longer periods of time. Specifically:

- Less than 1 year: These borrowers had a median loan balance of about $6,000 and an average balance of $12,150 at the time of initial offset.
- 2 to less than 5 years: These borrowers had a median balance of $8,000 and an average balance of $15,250.
- 5 or more years: These borrowers had a median balance of $12,800 and an average balance of $22,450.

Almost One-Third of Older Americans in Social Security Offset Paid Off or Discharged their Student Loans, but About 36 Percent Were Still in Default after 5 Years

Many older borrowers had paid off or discharged their debt 5 years after their initial Social Security offset, but others remained in default and offset. Among those 50 and older at the time of their initial offset, about 32 percent had paid off or discharged their debt due to disability or school closure or otherwise closed their loans after 5 years for reasons other than death. An additional 13 percent died while their loans were outstanding (see fig. 8). The remainder—about 55 percent—had loans that were still open 5 years after their initial offset. Most of these borrowers with open loans were in default, but others had emerged from default by rehabilitating or consolidating their loans. Specifically, about 36 percent of those 50 and older at the time of their initial offset were in default after 5 years, including 20 percent who were still in offset. A small share—about 10 percent—was able to rehabilitate or consolidate their loans and was in repayment.

As shown in figure 8, older borrowers who had been in offset but then paid off their loans had substantially smaller outstanding loan balances at the time of their initial offset compared to other older borrowers subject to offset. For example, the median outstanding balance for older borrowers who were in offset but paid off their loans within 5 years was $2,379 compared to a median outstanding balance of $11,838 for borrowers who were still in offset.
Some Older Americans in Offset Have their Student Loan Debt Increase over Time

Among the 55 percent of older Americans who still had student loans open 5 years after their initial Social Security offset, most had made some progress toward paying down their loan balances, but the loan balances of others increased over time. Taking into account Social Security offsets as well as any other source of payment on a borrower’s loans, such as tax refund offsets or voluntary payments, the majority (60 percent) of these borrowers had decreased their loan balances. The loan balances of the remaining 40 percent grew because the payments on their loans from all sources did not keep up with accruing interest.

Borrowers who remained in Social Security offset after 5 years tended to have made more progress in paying down their loan balances compared to other borrowers who still had open loans but were no longer in offset. For example, some borrowers may no longer have been in offset because they rehabilitated or consolidated their loans, but were then in forbearance or deferment and, thus, were not making payments. Specifically:

- Borrowers who remained in offset: Among borrowers 50 and older who still had offsets after 5 years, 32 percent had their loan balances increase after considering all sources of payment on their loans.
- Borrowers with a financial hardship exemption from offset: 48 percent of borrowers 50 and older who still had open loans but had secured a hardship exemption—and thus were no longer making payments through Social Security offset—had a greater loan balance after 5 years.
- Borrowers in forbearance or deferment: Borrowers 50 and older who exited offset by rehabilitating or consolidating their loans but were in forbearance or deferment at the end of 5 years—and thus not making payments—fared particularly poorly, as 67 percent owed more than they did when they entered offset.
More borrowers who were in Social Security offset for several years paid down principal with their offsets than borrowers who were briefly in offset, but some borrowers had not paid any principal after years of offsets. Among borrowers age 50 or older who stayed in offset for less than 1 year, 60 percent paid only fees and interest. For those in offset for more than 5 years, about one-third paid only fees and interest with their offsets, while about two-thirds paid some principal.

Program Design May Impact Retirement Security for Older Borrowers, Including Those Seeking Relief Permitted for Permanent Disability or Financial Hardship

A growing number of older borrowers may experience financial hardship in the years leading up to or during retirement because the Social Security offset threshold has not been adjusted for increases in costs of living since program provisions were implemented by regulation in 1998. In addition, many older Americans subject to offset may be eligible for a TPD discharge but have not applied for one, and Education is taking steps to reduce the number of borrowers who have not applied. Education is also taking steps to automatically suspend offsets for certain disabled borrowers, but these steps could adversely affect borrowers at an older age because Education does not provide these borrowers with information that would help them make more informed choices about applying for a TPD discharge. Further, for those who apply for a TPD discharge, key requirements of the 3-year monitoring period are not clearly communicated. As a result, older borrowers and others with disabilities may not complete required documentation to continue receiving this relief. Finally, Education established a process for granting financial hardship exemptions or reductions from offset, but it does not provide borrowers with information about this option unless requested, nor does it review these exemptions once granted.
A Growing Number of Older Borrowers with Social Security Offsets May Experience Financial Hardship because the Offset Threshold Is Not Adjusted for Increases in Costs of Living

Older borrowers who remain in offset may increasingly experience financial hardship: a growing number have seen their Social Security benefits fall below the poverty guideline because the offset threshold is not adjusted for increases in costs of living. The threshold for Social Security offsets was established to prevent undue financial hardship on borrowers who rely on benefits for a substantial part of their income and who may be unable, rather than unwilling, to repay debts. This is consistent with the policy underlying the Social Security program that benefits are intended to help ensure older Americans have adequate retirement incomes and do not have to depend on welfare. According to SSA, Social Security benefits represented 90 percent or more of total income for about one-third of beneficiaries 65 and older in 2014. At the time it was set, in 1998, the threshold for Social Security offsets was above the poverty guideline—$750 a month represented about 112 percent of the poverty guideline for a single adult that year. However, in the absence of cost of living adjustments, the relative value of the offset threshold has declined over time to well below the poverty guideline. In 2016, the poverty guideline for a single adult equated to a monthly income of about $990, and the $750 threshold represented about 76 percent of this amount. Consequently, an increasing number of older Americans subject to Social Security offsets received benefits below the federal poverty guideline. In fiscal year 2004, about 8,300 borrowers in the 50 and older age category had benefits below the poverty guideline compared to almost 67,300 in fiscal year 2015 (see fig. 9).
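The threshold-erosion arithmetic in this section can be reproduced directly. The $750 threshold and the approximate 2016 monthly guideline ($990) are figures cited above; the 1998 monthly guideline is an approximation implied by the stated 112 percent ratio.

```python
# Figures cited in this section (the 1998 guideline is approximate,
# back-derived from the report's 112 percent figure).
OFFSET_THRESHOLD = 750   # monthly threshold set in 1998, never adjusted
POVERTY_1998 = 671       # approx. monthly guideline, single adult, 1998
POVERTY_2016 = 990       # approx. monthly guideline, single adult, 2016

# In 1998 the fixed threshold sat above the guideline...
print(f"{OFFSET_THRESHOLD / POVERTY_1998:.0%}")  # → 112%
# ...but without cost-of-living adjustment it fell well below it by 2016.
print(f"{OFFSET_THRESHOLD / POVERTY_2016:.0%}")  # → 76%
```

Because Social Security benefits themselves receive cost-of-living adjustments while the threshold does not, a benefit that once sat safely above the protected amount is offset down to a fixed $750 floor whose real value keeps declining.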
As a share of borrowers in the 50 and older age category, this growth was equivalent to an increase from 38 percent in fiscal year 2004 to 64 percent in fiscal year 2015. In addition, as shown in figure 9, a growing number of these older borrowers already received Social Security benefits below the poverty guideline before offsets further reduced their income. Proposals to adjust Social Security offset provisions—such as by indexing the offset threshold—have been made by Education and proposed in legislation. In October 2015, Education proposed that the Social Security offset threshold be indexed to inflation. In its support for this proposal, Education noted that the Debt Collection Improvement Act of 1996 recognizes that Social Security is “a key source of income for many disabled and elderly Americans.” In addition to Education’s proposal, legislation was introduced in October 2015 to index the Social Security offset threshold to inflation. It is also important to recognize that adjusting Social Security offset provisions would reduce Education’s recoveries from Social Security offsets. If the offset limit had been indexed to match the rate of increase in the poverty guideline, 62 percent of all older borrowers whose Social Security benefits were offset for federal student loan debt in fiscal year 2015 would have kept their entire benefit and 13 percent would have had a smaller offset. In total, our analysis found that Education would have collected about 40 percent of the amount collected through Social Security offsets for borrowers of any age in fiscal year 2015.

Many Older Borrowers Subject to Offset May Be Eligible for a Total and Permanent Disability Discharge under Education’s Process but Have Not Applied

Older Americans subject to Social Security offset may be eligible to have their student loan debt discharged because they are severely disabled.
According to our analysis of linked data from Education, Treasury, and SSA, the large majority of older borrowers in offset who had their student loans discharged did so through Education’s total and permanent disability (TPD) discharge process. For those 50 and older who were initially subject to offset between fiscal years 2001-2010, TPD discharges represented about 90 percent of all non-death related discharges within 5 years after the initial offset. In total, after 5 years, about 8 percent of those 50 and older at the time of their initial offset applied for and successfully obtained a TPD discharge of their loans, while about 6 percent more were initially approved, but were still in the conditional 3-year monitoring period. Education’s TPD discharge process permits such discharges for certain recipients of Social Security disability benefits of any age. Effective July 1, 2013, the eligibility criteria for TPD discharges were modified to include borrowers receiving Social Security disability benefits if SSA has determined they have a disability in which medical improvement is not expected. Borrowers who receive disability benefits but are not in this status, as well as other borrowers who do not receive disability benefits, may still apply for a TPD discharge based on a physician’s certification or a determination from the Department of Veterans Affairs. Under Education’s TPD process, when borrowers alert their loan servicer that they are disabled, they are referred to Education’s centralized TPD servicer, which provides them with application materials. According to Education officials, effective April 2016, offsets are suspended on the borrower’s loans during the 120-day application period. Once borrowers submit their completed application, Education determines if they are eligible for a discharge.
As described earlier, borrowers approved for a TPD discharge are generally subject to a 3-year monitoring period during which the discharged loans may be reinstated for several reasons, including that the borrower earned income over a specified threshold. Once the 3-year monitoring period has been completed and the discharge processed, the loans cannot be reinstated. The value of the discharged loans is generally treated as taxable income at the close of the 3-year monitoring period, as described in the sidebar. While TPD discharges represent the largest share of loan discharges for older Americans in offset, data from Education indicate that a considerable number of borrowers of any age, including those age 50 and older, who are eligible for a TPD discharge have not fully completed the application process. Education has taken steps to identify and conduct outreach to such borrowers of all ages, many of whom are certified for offset. Specifically, in December 2015 Education began matching NSLDS data with SSA records to identify borrowers who receive disability benefits and who are eligible for a TPD discharge because SSA had determined that their medical improvement was not expected. Once identified, borrowers were sent a letter explaining they are eligible for a TPD discharge and describing the actions they must take to apply. Based on this data matching, Education reported in April 2016 that it had initially identified approximately 387,000 borrowers of any age eligible for a TPD discharge. Of these, over 100,000 borrowers were in default and had been certified for offset. According to Education, as of July 31, 2016, TPD discharge applications were sent to about 234,000 borrowers based on the data matching and slightly more than 19,000 applications had been submitted and approved. 
Education officials said that nearly 1,800 additional applications had been submitted but not yet approved, generally because the agency was in the process of following up to obtain missing signatures on these applications. Approximately 213,000 borrowers who were sent forms had not yet applied. Education officials said that the agency will do additional follow-up to increase the number of applications submitted and subsequently approved. According to Education officials, as of October 2016, approximately 7,000 additional borrowers were identified as being eligible for a TPD discharge through the quarterly data match and were certified for offset.

New Efforts Education Is Taking to Automatically Suspend Offsets for Certain Disabled Borrowers May Adversely Affect Borrowers at Older Ages

As part of the data-matching effort described above, Education officials said they are immediately suspending offsets for borrowers of any age identified as receiving Social Security disability benefits for a condition in which medical improvement is not expected. According to Education officials, the agency decided in April 2016 to suspend offsets of any federal payments through the Treasury Offset Program for borrowers identified as eligible for a TPD discharge through the data-matching effort, regardless of whether or not the borrower returns the application form. The suspension from offset for those eligible but not approved for a TPD discharge continues for as long as Education identifies through its quarterly data match that the borrower has a disability in which medical improvement is not expected. During the time that offsets are suspended, a borrower’s loan continues to be in defaulted status, and Education officials said that interest would continue to accrue on a borrower’s loan balance.
Education officials also said that once disabled borrowers are converted to Social Security retirement benefits at their full retirement age—currently age 66 for people born in 1943-1954—offsets would resume unless the borrower applied and was approved for a TPD discharge or a financial hardship exemption. Using the linked NSLDS, Treasury Offset Program, and SSA data, we identified about 32,000 borrowers who were 50 and older at the time of their initial offset and who had a disability in which medical improvement is not expected, but who had not applied for a TPD discharge. Although Education is now automatically suspending offset payments for borrowers who are TPD discharge-eligible, it has not taken steps to inform borrowers about key information. For example, Education has not informed borrowers that they are suspended from offset even if they do not apply for the TPD discharge. Further, Education has not provided information to borrowers about the potential consequences from continuing to accrue interest without applying for the TPD discharge, or that their offsets may later resume if their benefits are converted. Education’s Federal Student Aid division’s strategic goals include providing superior service and information to borrowers to support customers’ decision-making. In addition, Standards for Internal Control in the Federal Government state that the agency should externally communicate the necessary quality information to achieve the entity’s objectives, which in this case is to provide offset relief to borrowers eligible to receive it. Education officials said that the agency does not have written guidance on the suspension process and conducts the suspension by using the data match to directly inactivate offsets for borrowers in Fiscal Service’s system.
Without any communication from Education, some borrowers who previously had Social Security offsets suspended while receiving Social Security disability benefits could be surprised by a reduction in their monthly Social Security benefits as their offsets resume once they begin receiving retirement benefits. Providing information to borrowers on the potential for offsets to resume once they begin receiving retirement benefits—and that interest on their loan will have continued to accrue in the interim—will allow borrowers to make a more informed choice about whether to apply for the TPD discharge.

Key Requirements of the TPD Discharge Process are Unclear, and Many Eligible Older Borrowers who Are Initially Approved Have Their Loans Reinstated

Unclear requirements for Education’s TPD discharge program may affect even those older borrowers in offset who do apply for relief, particularly for completing the 3-year monitoring period. Further, the effects of Education’s program design for TPD discharge are not limited to older borrowers but also impact disabled borrowers of any age who apply for the discharge. Using summary data from Education for all borrowers who applied for TPD discharge, we identified that almost 110,000 individuals were approved for TPD discharge in fiscal year 2014, while an additional almost 103,000 were approved in fiscal year 2015. The total loan balances discharged in those years were over $2.7 billion and nearly $2.6 billion, respectively. Education’s data show that a large number of those approved for a TPD discharge had their loans reinstated during the 3-year monitoring period. According to summary data provided by Education’s TPD servicer, in fiscal year 2015, 61,536 borrowers initially approved for a TPD discharge had loans reinstated during the 3-year monitoring period with a total value of about $1.2 billion.
Further, our analysis of NSLDS data showed that about 20 percent of borrowers 50 and older at the time of their initial offset, and who were in the 3-year monitoring period from July 1, 2008 through the end of fiscal year 2012, later had their loans reinstated. As shown in figure 10, the vast majority of borrowers of any age whose loans were reinstated—98 percent in fiscal year 2015—had this occur because they did not submit the annual income verification form. We found that the high number of loans reinstated because the borrower did not provide the annual income verification form results from unclear annual reporting requirements. Specifically, documentation provided by Education to borrowers in the 3-year monitoring period does not clearly and prominently state all requirements to report income annually. Federal agencies are directed to use language that is clear, concise, and well- organized that the public can understand in documents that explain how to comply with requirements the federal government administers or enforces. Moreover, Standards for Internal Control in the Federal Government state that the agency should externally communicate the necessary quality information to achieve the entity’s objective. However, we found that the forms the TPD servicer provides to borrowers may not clearly communicate all requirements to avoid loan reinstatement. In particular, the TPD discharge approval form sent to borrowers states that violating certain requirements will result in loan reinstatement, such as earning income from employment above the poverty guideline amount for a family of two in their state. However, the additional annual reporting requirement may be unclear because the form does not explicitly state that the loan will be reinstated if the borrower does not return the annual income verification form—even if they have no earnings—to document that their earnings from employment are below the poverty guideline for a family of two in their state. 
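The monitoring-period conditions described above reduce to two triggers: failing to return the annual income verification form (even with zero earnings) or reporting employment earnings above the poverty guideline for a family of two in the borrower's state. A minimal sketch of that logic, with a placeholder guideline amount (the actual figure varies by year and state):

```python
# Illustrative sketch of the two reinstatement triggers described above;
# the guideline amount is a placeholder, not an official figure.
POVERTY_GUIDELINE_FAMILY_OF_TWO = 16_000  # hypothetical annual amount

def loan_reinstated(form_returned: bool, reported_earnings: float) -> bool:
    """During the 3-year monitoring period, a discharge is reinstated if the
    borrower fails to return the annual income verification form -- even with
    zero earnings -- or reports employment earnings above the guideline."""
    if not form_returned:
        return True
    return reported_earnings > POVERTY_GUIDELINE_FAMILY_OF_TWO

# Not returning the form triggers reinstatement even with no earnings:
print(loan_reinstated(form_returned=False, reported_earnings=0))      # → True
# Returning it with earnings below the guideline does not:
print(loan_reinstated(form_returned=True, reported_earnings=12_000))  # → False
```

The first branch is the one the report finds poorly communicated: a borrower with no income at all is still reinstated if the form is never returned, which is why 98 percent of fiscal year 2015 reinstatements stemmed from the missing form rather than excess earnings.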
In addition, we found that the annual income verification form sent by Education’s TPD servicer each year during the 3-year monitoring period does not state that failure to submit the form will result in loan reinstatement (see app. IV for a copy of documentation provided by Education’s TPD servicer). According to Education, if a borrower does not return the income verification form, the TPD servicer resends the form two additional times, but the follow-up attempts do not include any additional language to alert the borrower that failure to return the form will result in loan reinstatement. For borrowers who fail to return the annual income verification form, their loans are reinstated and offsets may resume. The number of reinstatements may continue to grow as more borrowers are initially approved through Education’s recent data matching and outreach efforts unless unclear annual reporting requirements are addressed. Borrowers whose loans are reinstated have 1 year to appeal to have their case re-evaluated, but most do not. The notice sent to borrowers informing them of the loan reinstatement provides the reason for the reinstatement and information on how to appeal. The borrower may appeal by mail or online through Education’s TPD servicer’s website. However, most borrowers who have their loans reinstated do not successfully appeal. According to Education’s data for borrowers of any age in the 3-year monitoring period, 20,368 reinstatement appeals were approved in fiscal year 2015, but 62,303 loans were reinstated during the prior fiscal year. Borrowers who do not appeal their loan reinstatement may have offsets resume. In addition, even with improvements to clarify forms sent to borrowers, it may be difficult for disabled borrowers to comply with documentation requirements.
Although Education officials recognize that the population of borrowers approved for a TPD discharge may have difficulty in providing documentation due to the severity of their disabilities, borrowers must take action to manually submit the income verification form each year during the 3-year monitoring period. In contrast to manual activities, Standards for Internal Control in the Federal Government state that automated control activities tend to be more reliable because they are less susceptible to human error and are typically more efficient. While Federal Student Aid’s responsibilities include improving operational efficiency and the quality of service for customers across the entire student aid life cycle, the agency has not taken steps to reduce burdens for disabled borrowers in providing the income verification form. In contrast, for another process that involves income verification, Education has taken steps to improve operational efficiency and the quality of customer service by reducing burdens on borrowers through a tool that automatically identifies income information. Specifically, the application process for income-driven repayment plans includes a feature—the IRS Data Retrieval Tool—to transfer income information from a borrower’s federal tax return. According to Education, this tool has helped simplify and streamline the process for borrowers while improving both speed and accuracy. Because Education has not included such a method to automate and streamline annual income verification for the TPD discharge, it has not made it easy for some borrowers to comply with the requirements of the 3-year monitoring period and avoid having their loans reinstated and offsets resumed. Education’s annual income verification process may also result in under-reporting of earnings from some borrowers.
For example, Education does not independently verify earnings for borrowers who submit the form and report they have no earnings. The borrower is required to sign and return the form certifying they had no earned income during the specified time period and is notified that there can be a penalty for making false statements or misrepresentations. However, Education officials said that they do not take action to verify this reported information. Borrowers who do report earnings are required to provide supporting documentation, such as their W-2. Education’s current verification process also does not address instances in which a borrower with income from multiple employers only includes income and supporting documentation from some, but not all, of them. This could result in borrowers receiving a TPD discharge when their loan should have been reinstated. Automating verification could also improve Education’s internal controls by addressing situations where people may fail to report some or all income.

Education Neither Provides Older Borrowers Information about the Financial Hardship Exemption Unless Requested Nor Does it Review These Exemptions after Initial Approval

Aside from a loan discharge for disability, older borrowers who are subject to offset may obtain relief by requesting a financial hardship exemption or reduction through a process established by Education, subject to certain requirements. This relief option is also available to borrowers under age 50. Borrowers who initiate contact with Education and state that they are experiencing a financial hardship due to Social Security offset are sent an application package to document their income and expenses. Borrowers with Social Security benefits below the poverty guideline as well as borrowers with higher benefits may apply. As shown in figure 11, Education determines eligibility based on a comparison of an individual’s documented income and qualified expenses, rather than a specific income threshold.
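The income-versus-expenses comparison described above can be sketched as follows. This is illustrative only: the expense categories follow the section's examples, all dollar amounts are hypothetical, and the actual determination applies IRS collection standards to cap allowable expenses rather than accepting them at face value.

```python
# Illustrative sketch of the determination described above: eligibility turns
# on documented income vs. qualified expenses, not a fixed income threshold.
# Category names follow the section's examples; all amounts are hypothetical.
def hardship_decision(monthly_income: float, qualified_expenses: dict) -> str:
    """Hypothetical outcome: full exemption when qualified expenses meet or
    exceed income; otherwise a reduced offset bounded by the remainder."""
    allowed = sum(qualified_expenses.values())
    discretionary = monthly_income - allowed
    if discretionary <= 0:
        return "exempt from offset"
    return f"offset reduced to at most ${discretionary:.2f}/month"

expenses = {"housing": 700, "utilities": 150, "health care": 200, "transportation": 120}
print(hardship_decision(1_100, expenses))  # expenses ($1,170) exceed income
# → exempt from offset
```

This structure explains why both borrowers below and above the poverty guideline may apply: the outcome depends on the gap between documented income and qualified expenses, not on income alone.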
Education officials said they immediately suspend or reduce Social Security offsets for borrowers making a first-time request while their application is processed. However, for borrowers who have previously applied but had offsets restarted, offsets continue during the subsequent application submission and review process. Once a borrower submits the required documentation on income and qualified expenses—such as housing, utilities, health care, and transportation—Education then compares a borrower’s income and expenses and generally uses IRS collection standards to determine eligibility. If the application is approved, the borrower will either be exempted from offset or have a reduced offset. These borrowers’ loans are still considered to be in defaulted status, and interest continues to accrue on them. According to Education’s data, more borrowers request a financial hardship exemption rather than a reduction in offset. Education reported that in fiscal year 2015, it received 12,573 requests for a financial hardship exemption from Social Security offsets from borrowers of all ages. During that same time frame, Education also received 2,631 requests for a reduction in Social Security offsets. While Education officials said they do not formally track the volume of hardship exemption and reduction approvals, they estimated that the majority of materially complete applications are approved. According to our analysis of NSLDS and Treasury Offset Program data, among borrowers 50 and older at the time of their initial Social Security offset, 16 percent obtained a hardship exemption and 5 percent obtained a hardship reduction at some point while they were in offset from fiscal years 2001 through 2015. However, we found that Education does not make information about the existence of the financial hardship exemption option or application process generally available to borrowers subject to Social Security offset.
Although borrowers who are subject to offset may request a financial hardship exemption or reduction, information about this option is not generally available because neither Education’s website nor forms sent to borrowers regarding offset inform them about these options. If borrowers are aware of this option—for example, as a result of information from an advocacy group—and contact Education or their servicer and state they are experiencing a financial hardship due to offset, then Education or the servicer would provide information to borrowers about the option and application process, including the Financial Status Statement to document their income and expenses. Standards for Internal Control in the Federal Government state that agencies should externally communicate necessary information to achieve the agency’s objectives. Although Education has established a financial hardship exemption option, officials acknowledged that the agency has not taken steps to proactively inform borrowers about this option or process. For example, Education’s website does not include information on the hardship exemption process in relation to the Treasury Offset Program. In addition, the financial hardship exemption application form is not included under “Offset Forms” in the website’s section on “Forced Collections.” Education officials said that borrowers may use the Financial Disclosure Statement for Wage Garnishment Hearings available on the website—instead of the Financial Status Statement sent by Education to those who initially contact and inform the agency of their financial hardship—to submit their income and expenses for the financial hardship exemption review. However, this option is not noted on the website. Moreover, the offset notice sent to borrowers does not provide information on the financial hardship exemption option or process.
This notice is sent to borrowers 65 days prior to the start of an offset and informs them of options for objecting to collection of the debt. In contrast, the notice IRS sends to individuals with delinquent federal tax debt subject to the Federal Payment Levy Program provides information on how to avoid having Social Security income withheld in cases of financial hardship. By providing information about the existence of the hardship exemption option and its application process, Education could better serve borrowers who have little or no discretionary income—including those receiving benefits below the poverty guideline—and who may be eligible for permitted relief. Borrowers who apply can be approved for a financial hardship exemption or reduction for a 1-year period, but these exemptions, once granted, are not subject to further review and remain in place indefinitely because there is no annual review process. According to Education officials, financial hardship exemptions and reductions are intended to be subject to an annual review. The approval letter sent to borrowers who are granted a financial hardship exemption or reduction states that their financial status will be reviewed again at the end of 12 months and that the borrower will need to submit a new Financial Status Statement or their offsets will resume. Standards for Internal Control in the Federal Government also state that the entity determines an oversight structure to fulfill responsibilities set forth by applicable guidance. However, according to Education officials, borrowers are not prompted to resubmit the form and, as a result, may continue to receive the hardship exemption or reduction indefinitely. Education officials said their Debt Management and Collection System requires enhancements in order to perform an annual review and there are no definitive plans to implement this feature. 
In the absence of an annual review, some borrowers who no longer qualify for a financial hardship exemption or reduction based on their income and expenses may continue to avoid offset or pay a reduced amount.

Conclusions

A growing number of individuals who are 50 and older have defaulted on their student loan debt and become subject to Social Security offset, which can have serious financial consequences for those in or approaching retirement, especially for those at lower income levels or who are unable to work and make up for lost income. Our analysis shows that more than half of this population was disabled and receiving Social Security disability benefits that were reduced through offset, while others may have been retired but relying on Social Security for most of their income. While the federal student aid program is designed to hold borrowers accountable for repaying their debt, policy makers have sought to balance this goal with preserving the income security of those who cannot do so due to disability or old age. To that end, the Social Security offset threshold was implemented in 1998 to reflect this balance by establishing a minimum monthly benefit payment above the poverty guideline at that time. Although Social Security benefits are adjusted for increased costs of living, the effect is diminished because the offset threshold is not comparably adjusted. The result is that thousands of individuals subject to offset are left with benefits below the poverty guideline. By allowing the offset to reduce benefits below the poverty guideline, the balance of policy goals the Congress sought when enacting the Debt Collection Improvement Act of 1996 has not been maintained. Similarly, total and permanent disability discharge provisions were established to assure that those who cannot work because they are disabled are relieved from having to repay their loans.
Yet, our findings indicate that Education’s program design may make it difficult for borrowers of any age with total and permanent disabilities to avail themselves of this option. Education has taken important steps to identify and conduct outreach with these borrowers, but unless a better method is developed to allow them to verify their annual income during the 3-year monitoring period, the majority of loans may be reinstated after initially being approved. In addition, clearer communication from Education about related changes in policy to borrowers subject to offset who are identified as being eligible for a TPD discharge, including the consequences of failing to apply for such a discharge, would help borrowers make better decisions about managing their debt. Lastly, unless Education takes steps to inform borrowers facing financial hardship that they may be eligible for relief, those with little or no discretionary income may continue to have their Social Security benefits reduced. Improving outreach for this option while establishing an annual review process for those who receive it will help ensure that only eligible borrowers are exempted from offset on an ongoing basis. In establishing such an annual review process, Education may identify lessons learned from addressing challenges with such requirements in the TPD discharge process in order to streamline information needed from borrowers.

Matter for Congressional Consideration

To preserve the balance between the importance of repaying federal student loan debt and protecting a minimum level of Social Security benefits put in place by the Debt Collection Improvement Act of 1996, Congress should consider modifying Social Security administrative offset provisions, such as by authorizing the Department of the Treasury to annually index the amount of Social Security benefits exempted from administrative offset to reflect changes in the cost of living over time.
Recommendations for Executive Action

We are making five recommendations to the Secretary of Education. To improve program design for Social Security offsets and related relief options, we recommend the following actions:

- Inform affected borrowers of the suspension of offset and potential consequences if the borrower does not take action to apply for a TPD discharge. Such information could include notification that interest continues to accrue and that offsets may resume once their disability benefits are converted to retirement benefits.
- Revise forms sent to borrowers already approved for a TPD discharge to clearly and prominently state that failure to provide annual income verification documentation during the 3-year monitoring period will result in loan reinstatement.
- Evaluate the feasibility and benefits of implementing an automated income verification process, including determining whether the agency has the necessary legal authority to implement such a process.
- Inform borrowers about the financial hardship exemption option and application process on the agency’s website, as well as in the notice of offset sent to borrowers.
- Implement an annual review process to ensure that only eligible borrowers are exempted from offset for financial hardship on an ongoing basis.

Agency Comments and Our Evaluation

We provided a draft of this product to the Department of Education, the Social Security Administration, and the Department of the Treasury for review and comment. Treasury provided technical comments only, which we have incorporated where appropriate. SSA and Education generally agreed with the findings, conclusions, and recommendations of this report, and provided written comments that are reproduced in appendixes V and VI. Education also provided technical comments, which we have incorporated where appropriate.
In its written comments, Education noted that there are a growing number of older Americans with student loan debt and that this population experiences higher rates of default. Moreover, Education recognized the importance of improving and streamlining communications with these borrowers. To better inform borrowers whose offsets are suspended without applying for a TPD discharge, Education said it will implement a process to notify borrowers about the suspension. In addition, Education said it will take steps to improve the clarity of the TPD discharge forms and determine if the department has the legal authority and operational capability to automate the income verification process during the 3-year monitoring period. We agree that Education should take these steps to better inform borrowers and streamline the income verification process. We also appreciate Education’s willingness to review the TPD discharge forms. In doing so it is important to ensure that the forms clearly and prominently state that unless the income verification form is completed and returned—even if a borrower has no income to report—their loans will be reinstated. Regarding the financial hardship exemption option, Education generally agreed with both recommendations and said it will take steps to inform borrowers about the option and application process. Education said it plans to revise its website to include such information but noted that the notice of offset is sent by Treasury. While Treasury sends a notice of offset to borrowers, Education sends borrowers a separate notice of offset that provides details on the loans eligible for offset and informs borrowers of their options to avoid offset. In addition to the options already described on Education’s form, including the option for borrowers to apply for a financial hardship exemption from Social Security offset would provide borrowers more complete information. 
We appreciate Education's willingness to coordinate with Treasury to improve communication with borrowers about the availability of this relief option, but we continue to believe that Education should provide comprehensive information on relief options in the form Education sends borrowers. Education also noted that it plans to automate the process for tracking financial hardship exemptions, which will allow for regular reviews to ensure that only eligible borrowers are exempted. However, Education said that, due to funding limitations, implementation of these improvements has not been scheduled. We encourage Education to take the necessary steps to implement an annual review process. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Education, the Secretary of the Treasury, and the Acting Commissioner of the Social Security Administration. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or bawdena@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. Appendix I: Objectives, Scope, and Methodology Our objectives for this review were to examine: (1) characteristics of student loan debt held by older borrowers subject to offset and the effect on their Social Security benefit; (2) the amount of debt collected by the Department of Education (Education) through offsets and the typical outcomes for older borrowers; and (3) effects on older borrowers as a result of program design for offsets and related relief options. 
To answer our research objectives, we analyzed administrative data from Education, the Department of the Treasury’s (Treasury) Bureau of the Fiscal Service (Fiscal Service), and the Social Security Administration (SSA). Specifically, we obtained and linked administrative data from Fiscal Service’s Treasury Offset Program to Education’s National Student Loan Data System (NSLDS) and SSA’s Master Beneficiary Record (MBR) and Disability Control File (DCF) in order to examine the student loan history and outcomes for older Americans with Social Security offsets. We also examined aggregated data provided by Education on the total number of borrowers in default and offset by age and the total outstanding federal student loan balance by age. In addition, we reviewed relevant federal laws, regulations, and documentation. We also interviewed agency officials to obtain information about offsets of Social Security benefits, as well as Education’s processes for discharging student loan debt in cases of disability and claiming an exemption or reduction from offset due to financial hardship. We obtained documentation from agency officials, including forms sent to borrowers during the application and approval process for both Total and Permanent Disability (TPD) discharges and financial hardship exemptions. To further examine the TPD discharge process, we analyzed aggregated data provided by Education’s TPD servicer on TPD discharge applications, approvals, and reinstatements, including the total volume and dollar value. To identify the amount Education collected on defaulted student loans through offsets and other payment mechanisms, we analyzed data provided by Education’s Default Resolution Group, including aggregated data from the Debt Management and Collection System and information reported by guaranty agencies. 
We also analyzed aggregated data provided by Fiscal Service on fees assessed by the Treasury Offset Program by type of offset for Education and other federal agencies. Finally, we analyzed published data from the 2013 wave of the Federal Reserve Board's Survey of Consumer Finances (SCF) to update figures that we previously reported on the overall share of households with student loan debt, using the most recent data available. These updated data are available in appendix III. The SCF gathers various economic and financial data at the household level, such as student loan, mortgage, and credit card debt, and is conducted once every 3 years. Because survey responses are based on the financial situation of an entire household, not just the head of household, it is possible that the reported student loan debt for some households headed by older Americans is held by children or other dependents who are still members of the household.

Treasury Offset Program data The Treasury Offset Program carries out transactions for offsetting federal payments for delinquent nontax debt. We obtained record-level data for 480,097 individuals who were subject to offset of Social Security benefit payments due to a defaulted debt owed to Education from fiscal year 2001 to 2015, including information on the amount and date of offsets. NSLDS is Education's central database for information on federal financial aid for higher education, including student loans. We obtained record-level data on borrowers' loan histories from NSLDS on 477,867 individuals identified from the Treasury Offset Program data as having been subject to offset of Social Security benefit payments due to defaulted federal student loans at some time from fiscal year 2001 to 2015. A small proportion of the individuals—about 0.46 percent—who were identified through the Treasury Offset Program data did not match to student loan records in NSLDS. 
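The record linkage described above can be sketched in outline. This is an illustrative example, not GAO's actual code; the identifier sets and toy data below are hypothetical.

```python
# Illustrative sketch of matching individuals from Treasury Offset Program
# (TOP) extracts to NSLDS loan records by a shared identifier.
# All identifiers below are hypothetical toy data.

def link_records(top_ids, nslds_ids):
    """Return the matched IDs and the share of TOP individuals unmatched."""
    matched = top_ids & nslds_ids
    unmatched_share = 1 - len(matched) / len(top_ids)
    return matched, unmatched_share

# Four hypothetical TOP individuals; three have NSLDS loan records. In the
# actual analysis, the unmatched share was far smaller (about 0.46 percent).
top = {"A", "B", "C", "D"}
nslds = {"A", "B", "C", "E"}
matched, share = link_records(top, nslds)
print(sorted(matched), share)  # ['A', 'B', 'C'] 0.25
```

In practice the unmatched records corresponded to non-loan debts, such as grant overpayments, rather than linkage failures.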
According to Education officials, debt owed to Education can also result from overpayments of federal education grants such as Pell grants, and grant recipients who do not repay this debt in a timely manner are submitted to Treasury for offset. We examined a variety of information from NSLDS for borrowers with Social Security benefit offsets, including loan disbursement dates, default dates, loan statuses, and loan balances over time. Analyses that used NSLDS data on loan balances were restricted to fiscal year 2006 forward because NSLDS did not consistently retain detailed loan balance history prior to 2006. Analyses of borrower outcomes over time were restricted to borrowers who entered offset early enough that sufficient time had elapsed to observe the outcomes over the entire length of time, as noted in the report. SSA's MBR contains administrative records of Social Security beneficiaries. We obtained record-level data on the ages of borrowers certified for offset for Education debt from fiscal year 2001 to 2015, as well as the types and amounts of benefits they received. SSA's DCF contains additional administrative data on Social Security Disability Insurance beneficiaries. From the DCF, we obtained information on the continuing disability review category (e.g., the determination that medical improvement is not expected) for the population of borrowers identified from the Treasury Offset Program data. Data Reliability We conducted a data reliability assessment of the summary and record-level administrative data we used by reviewing documentation on the datasets, requesting and reviewing the queries used to generate the data extracts, and interviewing officials about how the data are collected and their appropriate uses. When possible, we compared our summary-level data to publicly reported information to ensure completeness and accuracy. 
Additionally, we performed electronic testing of the record-level linked administrative data to identify missing or unreliable data and to resolve discrepancies across the separate data sources. In particular:

- Date of birth: NSLDS was missing date of birth information for some borrowers. To ensure we had a reliable date of birth variable, we also obtained date of birth information in the MBR data. We compared these two sources to identify and resolve discrepancies, including using dates of loan disbursement to ensure that date of birth was consistent with plausible borrower ages at the time of loan disbursement.
- Consolidation loans: In determining the length of time borrowers held loans prior to offset, we matched consolidation loans to the underlying closed loans that were paid off via consolidation and included information on these loans. We excluded all other loans that were paid off prior to offset.
- Application of offset payments: NSLDS did not always contain complete information on the application of offset payments across loan principal, interest on the outstanding balance, and program fees due to data reporting issues. We measured the total amount of offset payments and the number of individual offset transactions per borrower using Treasury Offset Program data. We only included a borrower in our analysis if we were able to identify the application of the offset payment for a majority of the borrower's individual offset transactions.
- Death discharges: Loan discharges due to death may not occur immediately after a borrower's death because Education requires documentation of the death. When we identified a borrower as deceased through the MBR data, we considered the borrower to have received a death discharge as of the month of the borrower's death, regardless of whether a death discharge was indicated in the NSLDS data. 
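The date-of-birth check described above can be sketched as follows. This is an illustrative sketch, not GAO's actual procedure; the 15-year minimum age at disbursement is an assumed plausibility floor used only for this example.

```python
# Illustrative sketch (not GAO's actual code) of reconciling date of birth
# (DOB) across NSLDS and MBR: a DOB is treated as plausible only if the
# borrower was at least a minimum age at first loan disbursement. The
# 15-year floor is an assumption for illustration.
from datetime import date

MIN_AGE_AT_DISBURSEMENT = 15  # assumed plausibility floor

def age_at(dob, when):
    """Whole years elapsed between dob and when."""
    return when.year - dob.year - ((when.month, when.day) < (dob.month, dob.day))

def resolve_dob(nslds_dob, mbr_dob, first_disbursement):
    """Prefer a DOB consistent with a plausible age at loan disbursement."""
    for dob in (nslds_dob, mbr_dob):
        if dob and age_at(dob, first_disbursement) >= MIN_AGE_AT_DISBURSEMENT:
            return dob
    return None  # no plausible DOB; such borrowers would be excluded

# An implausible NSLDS DOB (after the loan was disbursed) is rejected in
# favor of the MBR value.
resolved = resolve_dob(date(2010, 1, 1), date(1950, 5, 5), date(1980, 9, 1))
print(resolved)  # 1950-05-05
```

Borrowers for whom no plausible date of birth could be determined from either source would fall into the small excluded group noted in the report.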
Through our testing, we identified some data elements that were not sufficiently reliable, and we did not use these data elements in our analysis. Additionally, in certain analyses we were unable to calculate measures for a small proportion of borrowers, and we note these limitations in the report. We excluded 13 borrowers for whom no date of birth could be determined from all analyses, and we excluded 33 borrowers for whom MBR data was unavailable from analyses that required MBR data elements. For purposes of our analysis, we found the data elements we ultimately reported on to be sufficiently reliable based on this assessment. Appendix II: Additional Data Analysis of Student Loan Debt for Older Americans Trends in Federal Student Loan Debt, Default, and Offset The total amount of federal student loan debt held by borrowers age 50 and older is considerably less than for younger age groups. Figure 12 shows the total dollar amount of federal student loan debt held by borrowers of all ages and older Americans in particular, since fiscal year 2005. While fewer older Americans hold student loan debt, the rate of increase in the number of older borrowers and the amount of their debt far outpaced younger borrowers (see fig. 13). As the number of federal student loan borrowers and the amount of debt have increased over time, so have the number of borrowers who are in default and subject to an offset of any federal payment. Data from Education show that, from fiscal years 2005 to 2015, the total number of borrowers of all ages in default increased from about 4.3 million to 8.6 million. The corresponding increase in the number of borrowers subject to any offset was from about 387,000 to slightly more than 1 million. Over the same time period, borrowers were increasingly likely to default on their loans and become subject to offset. 
The share of borrowers of all ages in default increased from about 15 percent in fiscal year 2005 to 19 percent in fiscal year 2015, while the share of borrowers in offset increased from 1.4 percent in fiscal year 2005 to 2.3 percent in fiscal year 2015. The number of borrowers, especially older borrowers, who have experienced offsets of Social Security benefits to repay defaulted federal student loans has increased over time. As shown in figure 14, more borrowers subject to Social Security offsets received disability benefits than retirement or survivor benefits. Once disabled beneficiaries reach their full retirement age (currently age 66 for people born in 1943-1954), disability benefits are automatically converted to retirement benefits. Thus, for defaulted borrowers age 65 and older, the vast majority—95 percent—received retirement or survivor benefits. Older Borrowers with Student and Parent PLUS Loans The majority of older borrowers hold loans taken out for their own education rather than for their children's education. According to data from Education, about 31 percent of borrowers age 50 to 64 had loans taken out for their children's education in fiscal year 2015. Among borrowers age 65 and older, this figure was lower, about 24 percent. Instances of default are substantially lower for older Americans with loans for their children's education than for those with student loans for their own education. In fiscal year 2015, the share of borrowers age 50 to 64 with Parent PLUS loans in default was 10 percent, compared to 35 percent for those with student loans. Likewise, the share of borrowers age 50 to 64 with Parent PLUS loans in offset was about 1 percent, compared to 3 percent for those with student loans. Length of Time in Default Prior to Offset Many older borrowers who held student loans for an extended period of time had also been in default for a decade or more before becoming subject to Social Security offset. 
About 68 percent of borrowers who eventually became subject to Social Security offset were not yet receiving Social Security retirement, disability, or survivor benefits when they defaulted on their student loans. Among these borrowers, most had been in default for a decade or more before becoming subject to Social Security offset, and just over 20 percent of these borrowers had been in default for 20 or more years at the time they became subject to offset. Other older Americans who were already receiving Social Security benefits at the time of default became subject to offset within shorter time frames. About 71 percent of borrowers who were already receiving Social Security benefits when they defaulted on their student loans became subject to offset within 3 years after their default. We previously reported that under Education's annual process for certifying defaulted debt for offset, Education sends the defaulted debt to Fiscal Service between 17 and 29 months after the last payment on the debt. Not all defaulted borrowers receiving Social Security benefits are immediately subject to offset: For example, borrowers whose benefit amount is below $750 per month are not subject to offset but may later become subject to offset if cost-of-living adjustments increase their benefits above the $750 threshold. About 20 percent of older borrowers who were receiving Social Security benefits at the time of their default became subject to offset 5 or more years after their default. Student Loan Balances of Older Borrowers with Social Security Offsets Compared to Other Older Borrowers Older borrowers who became subject to Social Security offsets tended to have relatively lower outstanding student loan balances compared to other older borrowers who were not in default. As shown in table 3, the average amount of debt held by these older borrowers with Social Security offsets was less than the average for older borrowers who were not in default. 
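The interaction between the maximum 15 percent offset and the $750 monthly benefit threshold can be illustrated with a short sketch. The lesser-of rule below is our reading of the offset provisions as described in this report, stated as an assumption rather than the precise regulatory formula.

```python
# Sketch of the Social Security offset rule as described in this report:
# up to 15 percent of the monthly benefit may be withheld, no offset is
# taken if the benefit is at or below $750 per month, and an offset may
# not reduce the payment below the $750 floor. The lesser-of structure
# is an assumption for illustration.

OFFSET_RATE = 0.15
PROTECTED_MONTHLY_BENEFIT = 750.0

def monthly_offset(benefit, debt_remaining):
    """Return the amount withheld from one monthly benefit payment."""
    if benefit <= PROTECTED_MONTHLY_BENEFIT:
        return 0.0
    return min(OFFSET_RATE * benefit,
               benefit - PROTECTED_MONTHLY_BENEFIT,
               debt_remaining)

print(round(monthly_offset(940, 5000), 2))  # 141.0 (full 15 percent)
print(round(monthly_offset(760, 5000), 2))  # 10.0 (limited by the $750 floor)
print(round(monthly_offset(700, 5000), 2))  # 0.0 (benefit below threshold)
```

Under this reading, a benefit of $940 yields an offset of about $141, consistent with the typical monthly offset of slightly more than $140 that GAO reports, while a borrower whose cost-of-living adjustment pushes the benefit just above $750 sees only the excess withheld.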
Looking only at borrowers in default, older borrowers who became subject to Social Security offsets owed more than the average for all defaulted borrowers of their age. Appendix III: Supplemental Data Analysis Tables for Older Americans with Student Loan Debt [The supplemental data tables, including counts by age group through 75 and older, TPD discharge and loan rehabilitation/consolidation volumes, and total collections of $175,923,528, could not be reproduced in this format.] Appendix IV: Copy of Education's Total and Permanent Disability Servicer's Form for Annual Income Verification Appendix V: Comments from the Department of Education Appendix VI: Comments from the Social Security Administration Appendix VII: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Michael Collins (Assistant Director), Sharon Hermes (Analyst in Charge), Christopher Zbrozek, and John Mingus made key contributions to this report. Also contributing to this report were Deborah Bland, Ben Bolitzer, Helen Desaulniers, Charles Jeszeck, Theresa Lo, Ying Long, Sheila McCoy, Kevin Metcalfe, Mimi Nguyen, Dae Park, Kenneth Rupar, Kathleen van Gelder, Walter Vance, and Adam Wendel.
An increasing number of older Americans have defaulted on their federal student loans, which are administered by Education, and have a portion of their Social Security retirement or disability benefits withheld above a minimum benefit threshold to repay this debt. Given that Social Security is the primary source of income for many older Americans, GAO was asked to review these withholdings, known as offsets. GAO examined: (1) characteristics of student loan debt held by older borrowers subject to offset and the effect on their Social Security benefit; (2) the amount of debt collected by Education through offsets and the typical outcomes for older borrowers; and (3) effects on older borrowers resulting from the program design of relief options. GAO examined data from fiscal years 2001 through 2015 from Education's National Student Loan Data System and other administrative data from Treasury and SSA. GAO also examined aggregated data provided by Education and Treasury, reviewed documentation, and interviewed agency officials about Education's processes for providing relief from offset. Older borrowers (age 50 and older) who default on federal student loans and must repay that debt with a portion of their Social Security benefits often have held their loans for decades and had about 15 percent of their benefit payment withheld. This withholding is called an offset. GAO's analysis of characteristics of student loan debt using data from the Departments of Education (Education), Treasury, and the Social Security Administration (SSA) from fiscal years 2001-2015 showed that for older borrowers subject to offset for the first time, about 43 percent had held their student loans for 20 years or more. In addition, three-quarters of these older borrowers had taken loans only for their own education, and most owed less than $10,000 at the time of their initial offset. 
Older borrowers had a typical monthly offset that was slightly more than $140, and almost half of them were subject to the maximum possible reduction, equivalent to 15 percent of their Social Security benefit. In fiscal year 2015, more than half of the almost 114,000 older borrowers who had such offsets were receiving Social Security disability benefits rather than Social Security retirement income. In fiscal year 2015, Education collected about $4.5 billion on defaulted student loan debt, of which about $171 million—less than 10 percent—was collected through Social Security offsets. More than one-third of older borrowers remained in default 5 years after becoming subject to offset, and some saw their loan balances increase over time despite offsets. However, nearly one-third of older borrowers were able to pay off their loans or cancel their debt by obtaining relief through a process known as a total and permanent disability (TPD) discharge, which is available to borrowers with a disability that is not expected to improve. GAO identified a number of effects on older borrowers resulting from the design of the offset program and associated options for relief from offset. First, older borrowers subject to offsets increasingly receive benefits below the federal poverty guideline. Specifically, many older borrowers subject to offset have their Social Security benefits reduced below the federal poverty guideline because the threshold to protect benefits—implemented by regulation in 1998—is not adjusted for costs of living (see figure below). In addition, borrowers who have a total and permanent disability may be eligible for a TPD discharge, but they must comply with annual documentation requirements that are not clearly and prominently stated. If annual documentation to verify income is not submitted, a loan initially approved for a TPD discharge can be reinstated and offsets resume.
Background Global emissions of greenhouse gases, such as CO2, have been linked to climate change, whose projected effects include reduced production of some crops and livestock productivity, and a decrease in the availability of fresh water in certain parts of the world. A portion of the electricity a plant generates, often referred to as the energy penalty or parasitic power, is required for CO2 capture and compression. Oxyfuel combustion: Oxyfuel combustion is a technology in its developmental stages for using coal to generate electricity while reducing CO2 emissions. According to DOE, oxyfuel combustion could be applied to existing pulverized coal-fired plants. Oxyfuel combustion burns coal using pure oxygen diluted with recycled CO2 and water vapor, with some excess oxygen, facilitating the capture of the CO2 because separating CO2 from nitrogen is not necessary under this approach. However, depending on the level of excess oxygen and other trace components, some additional gas cleanup may be required to make the CO2 suitable for storage. The captured CO2 would be transported, likely via pipeline, to a storage site and injected at depths of over 800 meters (or about 2,600 feet) into underground geologic formations (such as depleted oil reservoirs and saline formations), thought to be conducive for isolating the CO2. The injected CO2 must be monitored to ensure it does not escape into the environment, and questions remain about liability for CO2 leakage and the ownership of injected CO2. On February 27, 2003, the President announced FutureGen as a cost-shared project between DOE and industry to create the world's first coal-fired, zero emissions electricity and hydrogen production power plant. The production of hydrogen was to support the President's Hydrogen Fuel Initiative to create a hydrogen economy for transportation. 
The original FutureGen plant was planned to operate at a commercial scale as a 275 megawatt IGCC facility that would capture and store at least 1 million metric tons of CO2. Pursuant to the agreement, the Alliance was to design, construct, and operate the FutureGen plant, and DOE was to provide project oversight, conduct the environmental analyses required by NEPA, and coordinate the participation of foreign governments. The project was to run through November 2017 and operate as the cleanest fossil fuel-fired power plant in the world. After completion of the formal project, the FutureGen plant was expected to continue operating for the typical lifespan of a power plant—usually 30 to 50 years—generating electricity and providing a platform for energy research. On January 30, 2008, DOE announced that it had decided to take the FutureGen program in a different direction. DOE stated that it would demonstrate CCS at multiple commercial-scale power plants, including retaining the integration of CCS and IGCC. DOE referred to this new approach as the restructured FutureGen program. In June 2008, in a funding announcement for the restructured program, DOE stated that it expected it would have about $290 million available through fiscal year 2009 for its share of funding for the program. (See app. II for an overview of DOE budget authority and obligations for FutureGen.) The Goals of the Original and Restructured FutureGen Programs Are Largely Similar, but the Programs' Different Approaches May Lead to Different Results The overall goals of the original and restructured FutureGen programs are largely similar in that both programs seek to produce electricity from coal with near-zero emissions by using CCS, and to make that process economically viable for the electric power industry. However, the programs outline different approaches for achieving their goals, which could affect the commercial advancement of CCS differently in several ways. 
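The energy penalty described in the background can be illustrated with simple arithmetic. The 20 percent penalty below is an assumed figure for illustration only; the report does not specify FutureGen's expected penalty.

```python
# Hypothetical illustration of the CCS "energy penalty": capture and
# compression consume a share of gross generation, reducing the net
# electricity available for sale. The 275 MW gross figure matches the
# planned FutureGen capacity; the 20 percent penalty is an assumed
# value, not taken from this report.

GROSS_OUTPUT_MW = 275
ASSUMED_ENERGY_PENALTY = 0.20  # assumed parasitic share for capture/compression

net_output_mw = GROSS_OUTPUT_MW * (1 - ASSUMED_ENERGY_PENALTY)
print(net_output_mw)  # 220.0
```

Under these assumed numbers, 55 MW of the plant's gross output would go to capture and compression rather than to the grid, which is why commenters argued that a 90 percent capture goal was too restrictive for industry participants.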
With a Few Key Exceptions, the Goals of the Original and Restructured Programs Are Largely Similar Both the original and restructured programs aim to establish the feasibility and economic viability of producing electricity from coal with near-zero emissions by employing CCS. The programs' goals for storing CO2 and limiting other emissions, such as mercury and sulfur, are also similar, except that the requirement for the amount of carbon to be captured has been reduced from 90 percent in the original program to 81 percent in the restructured program (see table 1). Knowledgeable stakeholders told us that this decrease in carbon capture is of modest significance and that a goal of 81 percent is still very ambitious and costly. DOE received similar feedback in responses to its request for information from the public about its plan to restructure FutureGen. Eighteen of the 49 respondents indicated that the 90 percent goal would be too restrictive for industry participants because of the additional energy required to capture and compress CO2. However, the restructured program, a DOE commercial demonstration project, seeks to accelerate the commercial deployment of CCS (that is, generating and selling electricity to earn profits) by implementing CCS at one or more commercial facilities by 2015—approximately five years earlier than the original program's commercial operations could begin. The original program, a DOE research and development project, would begin generating electricity in 2012, a few years earlier than the restructured FutureGen; but it could not begin operating as a profit-seeking commercial facility until after the nonprofit Alliance sells it, which is currently anticipated to occur in 2020. Knowledgeable stakeholders told us that the restructured program's time line for the commercial deployment of its project(s) might be ambitious because legal and environmental issues related to siting and permitting, in particular for CCS, could slow implementation. 
They also stated that the required NEPA analyses, which must be completed prior to beginning construction, could take up to 3 years. In contrast, DOE had completed its NEPA analyses for the original FutureGen. Moreover, the governments of the two states—Texas and Illinois—where the four finalist sites for the original FutureGen were located, had agreed to assume liability for the injected CO2. DOE officials told us that, unlike the original program, a primary goal of the restructured FutureGen was to facilitate the siting and permitting process for CCS by implementing multiple projects in different locations. The Different Approaches for Achieving Goals Could Have Different Impacts on the Commercial Advancement of CCS Because of the different approaches for achieving their goals, the original and restructured FutureGen programs could have different impacts on the commercial advancement of CCS (see table 2). The type of information gained from the programs may vary. First, the original program would have developed knowledge about CCS at IGCC plants, while the restructured program could allow for opportunities to learn about CCS at both IGCC and other types of coal plants. Knowledgeable stakeholders whom we contacted stated that DOE could benefit by taking advantage of the opportunity under the restructured FutureGen program to learn about CCS at multiple types of plants. They explained that opportunities to learn from multiple plant sites in different regions with various technologies would provide a wide range of knowledge about the implementation of CCS in various contexts. Similarly, 30 of 49 respondents to DOE's request for information about the restructured program indicated that it would be beneficial if the restructured program were to include both IGCC and other types of coal plants. 
In addition to other organizations, such as the National Academy of Sciences, we have noted that the benefits of learning about CCS technologies are also applicable to existing pulverized coal-fired plants, since they account for an overwhelming share (about 99 percent) of the world's coal-fired power plants. However, one of the intended benefits of the restructured program—providing opportunities to learn from multiple plants about various technologies—may not be fully realized since DOE received only a small number of applications. If an application for IGCC has not been received or is not selected, the loss of an IGCC plant with integrated CCS capability is significant because, according to the draft strategic planning document for the restructured program, demonstrating this technology is a key solution for reducing atmospheric CO2 emissions from coal-fired power plants. Comments submitted to DOE and knowledgeable stakeholders we interviewed indicated that the carbon capture goal for the restructured program was too restrictive for commercial facilities. One stakeholder stated that the restructured program goals might be overly optimistic about what commercial projects are willing to do. As a result of receiving only a small number of applications, the restructured program is not as likely to develop as broad a base of knowledge as it could have if more applications were received. Second, it is unclear whether the original FutureGen program or the restructured program could have advanced the broader rollout of CCS more quickly across all of industry. According to DOE documents, the restructured program is intended to begin deploying CCS at one or more commercial facilities in 2015, approximately five years earlier than the original program's commercial operations (that is, generating and selling electricity) could begin. 
The original program, a DOE research and development project, would have begun generating electricity in 2012, a few years earlier than the restructured FutureGen, but it could not have begun operating as a profit-seeking commercial facility until after the nonprofit Alliance sold it, which was anticipated to occur in 2020. Moreover, unlike the restructured program, the original FutureGen would have included a wide variety of industry partners (including foreign government partners, which are absent from the restructured program). In addition, more industry partners could have joined the Alliance and its 11 members over the course of the original program. As a result of its wider participation, the original FutureGen could potentially have advanced the broader rollout of CCS across all of industry and internationally, instead of at only a few commercial facilities, more quickly than the restructured program. DOE officials told us that the original program would likely improve the global advancement of CCS more quickly than the restructured program due to its various international partnerships. They stated that DOE is developing an approach to recoup the loss of international involvement that resulted from restructuring FutureGen. Third, the restructured program will not serve as a living laboratory host facility for technologies emerging from energy research and development programs aimed at the goal of near-zero emissions and for gaining broad industry acceptance for these technologies. The original FutureGen plant was to be designed with the ability to test various technologies that are scalable to full size, such as fuel cells, advanced gasification, and membrane air separation systems. Without the opportunity to test these emerging research and development technologies, the restructured FutureGen might result in a slower advancement of CCS than the original program may have yielded. 
According to the cooperative agreement between DOE and the Alliance, emerging technologies, such as fuel cells, could have been tested at the original program's living laboratory host facility. In a September 2007 presentation to DOE's Deputy Secretary, NETL noted the impact of removing the living laboratory, saying it would "significantly delay the availability of the technology for commercial deployment" and have a "significant programmatic impact." DOE officials told us that they have not yet determined where these technologies will be tested. The Restructured FutureGen Differs from Most of the Other DOE Carbon Capture and Storage Programs, but It Is Similar to CCPI in Several Ways DOE manages a portfolio of clean coal programs that research and develop CCS technology or demonstrate its application. Focusing on commercial coal-fired power plants, the restructured FutureGen would integrate key components of CCS, such as CO2 capture and the monitoring of CO2 at the storage location. The restructured FutureGen is similar in some ways to Round III of CCPI, but CCPI's goals are more modest than those of the restructured FutureGen and, hence, may be more achievable for industry partners. The other CCS programs include the (1) Regional Carbon Sequestration Partnerships, (2) Innovations for Existing Plants Program, (3) Advanced Turbines Program, (4) Advanced Integrated Gasification Combined Cycle Program, and (5) Round III of the Title 17 Incentives for Innovative Technologies Loan Guarantee Program (Loan Guarantee Program). Four of these five CCS programs do not integrate all key components of CCS and instead concentrate on developing one or two related components, such as CO2 capture or CO2 storage. Round III of CCPI seeks to demonstrate, at a commercial scale, advanced coal-based technologies that capture and store carbon or put CO2 emissions to beneficial reuse, such as to enhance oil recovery. 
The proposals for Round III of CCPI were due to DOE by January 15, 2009, and DOE expects to announce its selections in July 2009. In enhanced oil recovery (EOR), CO2 is added to the reservoir after secondary recovery in order to increase production; the purpose of EOR is to increase oil production, primarily through an increase in temperature, pressure, or an enhancement of the oil’s ability to flow through the reservoir. Because CCPI’s goals may be more realistic or attainable for commercial partners, more proposals may be submitted to CCPI than to the restructured FutureGen. For example, two officials from electric utility companies said that, despite the potentially greater amount of funding available through the restructured FutureGen ($1.3 billion, subject to future appropriations) than CCPI ($440 million, subject to future appropriations), their companies would apply for CCPI over the restructured program because they could meet CCPI’s goals. The Restructured FutureGen Differs from Most Other DOE CCS Programs The Regional Carbon Sequestration Partnerships program focuses on CO2 storage in deep oil-, gas-, coal-, and saline-bearing formations. Phase II, the Validation Phase (2005-2009), is implementing 25 small-scale geologic storage tests. Phase III, the Deployment Phase (2008-2017), is a continuation of the Phase II small-scale tests, but at a much larger scale, including possibly capturing 500,000 metric tons of CO2 from coal-fired power plants or from other sources, such as ethanol production plants. The Innovations for Existing Plants Program develops CO2 capture and compression technologies to assist existing coal-fired power plants. Through this program, DOE is providing $36 million in funding for 15 projects to develop new and cost-effective approaches to CO2 capture: membranes, solvents, sorbents, oxyfuel combustion, and chemical looping. The Advanced Turbines Program focuses on creating the technology base for turbines that will permit the design of IGCC plants with CCS that can operate at near-zero emissions, thereby facilitating CO2 capture.
The program aims to develop advanced gasification technologies to enable CO2 separation, capture, and sequestration in near-zero atmospheric emissions configurations that can, ultimately, provide electricity with less than a 10 percent increase in cost. Finally, Round III of the Title 17 Incentives for Innovative Technologies Loan Guarantee Program will provide up to $8 billion in loan guarantees for energy projects that satisfy three criteria: avoid, reduce, or sequester air pollutants or greenhouse gases; employ new or significantly improved technologies, compared with commercial technologies in service at the time the guarantee is issued; and provide a reasonable prospect of repayment. Initial applications for Round III of the program were due to DOE in December 2008. We recently reported on DOE’s progress in (1) issuing final regulations to govern this program, (2) taking actions to help ensure that the program is managed effectively and to maintain accountability, and (3) determining whether there were inherent risks due to the nature and characteristics of this program that may affect DOE’s ability to make the program pay for itself and support a broad spectrum of innovative energy technologies. Table 3 summarizes the comparison of DOE programs supporting CCS. DOE Did Not Support Its Decision to Restructure FutureGen with Sufficient Information on Costs, Benefits, or Risks According to our recent work and best practices, a decision to terminate or significantly restructure an ongoing program should typically be informed by timely and sufficient information on the costs, benefits, and risks of such a decision. While DOE had reason to be concerned about the escalating costs of the original FutureGen, it made its decision to cancel that program and replace it with the restructured FutureGen based, in large part, on a comparison of cost estimates that were not comparable.
That is, it compared one estimate that was in current dollars with one that was in constant dollars. In restructuring FutureGen, DOE did not sufficiently analyze the costs, benefits, and risks of canceling the original FutureGen and replacing it with a significantly restructured program. A comprehensive analysis could have helped DOE determine how the costs, benefits, and risks of the restructured FutureGen compared with those of the original FutureGen. Because it did not conduct such an analysis, DOE cannot be assured that the restructured program is the best option for advancing the widespread commercial deployment of CCS more quickly than the original program. Options short of dramatically restructuring the program could have preserved some of the original program’s benefits, including the integration of IGCC and CCS at the FutureGen facility. For example, FE identified and analyzed 13 other options for incremental, cost-saving changes to the original program, such as reducing the amount of CO2 to be captured. The March 2007 renewed cooperative agreement listed approximately $1.8 billion as the current estimated cost of the project, although such estimates are uncertain because costs vary from location to location, and technology costs and designs, such as for turbines, vary depending on the specific manufacturer and vendor. However, senior DOE officials soon began to express concerns about escalating program costs, and they directed FE officials to develop recommendations for controlling costs. In September 2007, FE officials presented several recommendations for incremental changes to control costs to the Deputy Secretary of Energy; they also noted various measures already in place for controlling costs, such as monthly progress reports and a risk management program.
Importantly, none of the recommendations indicated that DOE should cancel the original program and restructure FutureGen; moreover, FE officials told us that they did not prepare any analysis or recommendations for senior DOE officials that resembled what was to become the restructured program. According to DOE, following this presentation, senior DOE officials directed FE to negotiate with the Alliance new cost sharing arrangements under the cooperative agreement, which was scheduled for continuation in June 2008. The Alliance agreed to meet to renegotiate the terms of the cooperative agreement. Over the course of several meetings, the parties discussed various funding scenarios and exchanged proposed term sheets. Subsequently, however, the Alliance and DOE did not reach agreement. In December 2007, the Alliance sent a letter to DOE stating that it preferred to proceed under the existing cooperative agreement until FutureGen’s costs and risks could be assessed with input from the preliminary design report and cost estimate that were due by June 2008. Also in December 2007, the Secretary of Energy briefed senior presidential advisers that the estimated cost of FutureGen had nearly doubled—from $950 million to $1.8 billion—and that costs were expected to continue rising. In addition, according to the briefing documents, DOE planned to end its partnership with the Alliance and was developing a new strategy for FutureGen that would cap the government’s financial exposure. The briefing documents explained that DOE’s new approach for FutureGen would fund only the CCS-related technology associated with multiple commercial IGCC plants, rather than the entire construction of a single plant with CCS. Around this time, according to DOE officials, senior DOE officials directed FE to develop the restructured FutureGen program. 
In response, these officials told us, many high-level offices within DOE collaborated on developing a draft strategic planning document for the restructured program. According to these officials, the draft strategic planning document that they finalized in January 2008 was the first complete document about the restructured FutureGen. On January 30, 2008, DOE publicly announced that it was restructuring FutureGen to provide a ceiling on federal contributions and that the restructured program was a more cost-effective approach. On this same day, DOE notified the Alliance that it was restructuring FutureGen and would not continue its cooperative agreement with the Alliance. DOE informed the Alliance that it was restructuring FutureGen in response to serious concerns over substantial escalation in projected costs, including what the agency concluded would be the likely continued escalation of the costs. DOE officials also stated that they disapproved of the Alliance’s decision to announce the selection of a project site before DOE issued its NEPA Record of Decision. According to DOE, prior to the site selection announcement and without knowledge of the Alliance’s choice of site, DOE had asked the Alliance not to go forward with the announcement and further advised the Alliance against making an announcement until the Record of Decision had been issued. DOE officials also said that, in their negotiations on measures that could limit DOE’s financial exposure, they lost confidence in the ability of the Alliance to fund its share of the project cost. Although comparing cost estimates can provide valuable insight about the impact of escalating costs on a project, DOE based its decision to restructure FutureGen, in large part, on a comparison of cost estimates that were not actually comparable. That is, in 2004, DOE had estimated that the cost of the original FutureGen would be $950 million in constant 2004 dollars. 
In contrast, the Alliance’s 2007 estimate of about $1.8 billion was in current dollars, which reflected inflation over the course of the program from 2005 through 2017. In explaining his decision to restructure FutureGen to senior presidential advisers, the Secretary of Energy indicated that the projected program costs had “nearly doubled,” from $950 million to $1.8 billion. However, comparing constant dollars, which exclude inflation, with current dollars, which reflect inflation, is misleading. Our calculations show that the Alliance’s current dollar estimate of roughly $1.8 billion is equivalent to approximately $1.3 billion in constant 2005 dollars—an increase in total program costs of about $370 million, or about 39 percent—not a near doubling of costs. In addition, the cost estimates by DOE and the Alliance were prepared early in the project and, as a result, were based on conceptual designs for FutureGen, including power plant case studies and a blanket 10 percent increase incorporated into the Alliance’s estimate to allow for the first-of-a-kind nature of some of the plant’s components and integration issues. However, neither estimate considered costs for specific types of technology or a specific location. If DOE had waited approximately 6 months for the Alliance’s technology-specific and site-specific cost estimate, due by June 2008 as part of its preliminary design report, before deciding whether to restructure the program, it would have had the benefit of more current and complete information, including the latest information on escalating costs. In addition, regarding FutureGen’s total cost, the March 2007 cooperative agreement stated that DOE and the Alliance recognized that many uncertainties—such as plant design, site selection, and market conditions—still existed in developing a firm cost estimate.
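The dollar arithmetic above can be checked directly. The following sketch (in Python; the dollar figures are the report’s, the rounding is ours) contrasts the misleading mixed-dollar comparison with the constant-dollar one:

```python
# Figures as reported: DOE's 2004 estimate (constant 2004 dollars), the
# Alliance's 2007 estimate (current dollars), and GAO's deflated equivalent
# of the Alliance estimate (constant 2005 dollars).
original_estimate = 0.95e9   # constant 2004 dollars
alliance_current = 1.80e9    # current (inflated) dollars
alliance_constant = 1.32e9   # constant 2005 dollars

# Misleading comparison: current dollars measured against constant dollars.
naive_pct = (alliance_current - original_estimate) / original_estimate * 100

# Apples-to-apples comparison in constant dollars.
real_increase = alliance_constant - original_estimate
real_pct = real_increase / original_estimate * 100

print(f"mixed-dollar increase: {naive_pct:.0f}%")
print(f"constant-dollar increase: ${real_increase / 1e9:.2f} billion ({real_pct:.0f}%)")
```

Run as written, the mixed comparison shows an increase of about 89 percent (the apparent “near doubling”), while the constant-dollar comparison yields the roughly $370 million (about 39 percent) increase described above.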
In May 2008, the Secretary of Energy testified before Congress that FutureGen was conceived as a $950 million venture and that its estimated cost had increased to roughly $1.8 billion; however, the Secretary’s prepared statement did not indicate that the first estimate was in constant dollars, while the second was in current dollars. The Secretary also testified that DOE believed its costs would continue to escalate. We requested that DOE provide us with the analysis that supported this anticipated escalation. In October 2008, DOE officials told us that the ongoing cost escalations were unprecedented and that they had looked internally across various indexes, including those published by the Bureau of Labor Statistics, to get a sense of prospective escalation. However, they stated that they did not have any written or comprehensive analysis. They added that they did not prepare a position paper or study, or generate any analysis, examining current or future escalation for the decision to restructure FutureGen. Moreover, economic forecasting organizations, such as DOE’s Energy Information Administration, have found that significant cost escalations, such as those for building power plants over the past several years, do not typically continue in the long run. A Comprehensive Analysis Could Have Helped DOE Determine How the Costs, Benefits, and Risks of the Restructured FutureGen Compared with Those of Other Options DOE did not prepare a comprehensive analysis comparing the relative costs, benefits, and risks of the original and restructured FutureGen programs before making the decision to replace the original program with the restructured FutureGen. On two different occasions, DOE officials told us that the agency did not prepare such an analysis. These officials told us that the Secretary of Energy’s May 2008 congressional testimony included the agency’s official explanation for why it decided to restructure FutureGen.
In September 2008, we asked DOE to provide us with additional information, including the agency’s official position on why it decided to restructure FutureGen, all the factors upon which DOE based the decision, the extent to which the decision was based on documented supporting analysis, and a copy of any such analysis. In January 2009, after we sent a draft of this report to DOE for review and comment, DOE responded to our request for additional information, stating that the detailed analysis supporting its decision to restructure FutureGen could be found in the draft strategic planning document for the restructured program and that this document discussed the factors considered by DOE in making the decision to restructure FutureGen. However, as previously discussed in our findings, the draft strategic planning document was not completed in time to inform the decision to restructure FutureGen. In addition, we do not consider the draft strategic planning document to be comprehensive because it did not assess:
1. whether costs for the original FutureGen would escalate substantially;
2. the relative costs, benefits, and risks for all of the types of plants for which the restructured FutureGen was eligible to receive proposals, such as conventional pulverized-coal and oxyfuel combustion plants (the document contemplated proposals only for IGCC plants);
3. the risk that industry respondents might not propose an IGCC plant for the restructured FutureGen;
4. the risk that industry respondents might not propose enough viable projects for the restructured FutureGen;
5. the costs, benefits, and risks of making incremental changes to the original FutureGen alongside the relative costs, benefits, and risks of the restructured FutureGen; and
6. any potential overlap between the restructured FutureGen and other DOE programs.
A comprehensive analysis could have supported DOE’s decision making in several ways.
First, it could have helped DOE assess the risk that industry respondents to DOE’s request for applications under the restructured FutureGen might not propose an IGCC plant. DOE received public comments indicating that such an outcome was possible because IGCC is not yet prevalent in the industry—only two commercial IGCC plants currently operate in the United States—and other technologies may provide better opportunities to meet the restructured program’s requirements, among other reasons. Applying CCS at existing, conventional pulverized coal-fired plants is important because those plants make up almost all operating coal-fired plants in the United States and abroad. However, according to DOE, IGCC plants integrated with CCS are important for reducing CO2 emissions in the future. Both DOE’s press release announcing the restructured program and the updated draft strategic planning document, dated July 1, 2008, that DOE provided Congress indicated that the restructured program would include IGCC. The funding announcement for the restructured FutureGen highlighted the important contribution that an IGCC plant integrated with CCS would make toward the nation’s energy needs, such as providing continued fuel diversity for generating electricity and mitigating dependence on more expensive and less secure sources of energy. As late as May 2008, the Secretary of Energy indicated in congressional testimony that the restructured program would likely include IGCC, stating that advances in technology and the market, in addition to regulatory uncertainty, would provide incentives for industry to begin deploying commercial-scale IGCC plants with CCS. In addition, a comprehensive analysis could have helped DOE assess the risk that industry respondents might not propose enough viable projects from which DOE could then assess and make multiple selections.
Such an analysis could also have helped DOE assess whether the new cost-share arrangement would provide sufficient incentive to attract enough proposals to be selective. In the draft planning documents and press release announcing the restructured program, DOE stated that it restructured FutureGen, in part, because market conditions had changed in such a way that DOE could fund multiple industry projects and accomplish even more widespread commercialization of CCS and related information sharing across the industry than would have been accomplished by the Alliance’s consortium of 11 coal producers and electric power companies. However, DOE received only a small number of applications, and some proposed projects were outside the restructured FutureGen’s scope. As a result, widespread commercialization and information sharing seem less likely than under the original program. DOE also asserted that the restructured program would hasten the time frame for full-scale commercial operation of CCS. However, even if DOE accepts all applicable applications, the restructured program could implement CCS sooner than the original program at only a few commercial sites, rather than on the more widespread and international scale discussed earlier. Finally, DOE also could have used a comprehensive analysis to help compare the relative costs, benefits, and risks of the restructured FutureGen with those of making incremental and other changes to the original program in order to control or offset costs. For example, prior to the decision to restructure FutureGen, FE identified and analyzed 13 options for changes to the original program, such as reducing the amount of CO2 to be captured. FE recommended or noted that DOE should be willing to consider several options, with potential savings of $30 million to $55 million each. Some of these scenarios were broached during negotiations with the Alliance in the fall of 2007.
Conclusions According to DOE, electric power industry, academic, and other officials and experts, for the foreseeable future, coal, which is abundant and relatively inexpensive, will remain a significant fuel for the generation of electric power in the United States and the world. However, coal-fired power plants are also a significant source of CO2 and other emissions responsible for climate change. Hence, for at least the near term, any government policies that seriously address climate change will need to have a goal of significantly reducing CO2 emissions from coal-fired power plants. The restructured FutureGen would accept technologies other than IGCC, leaving open the possibility of successfully applying CCS technology to existing conventional, pulverized coal-fired power plants—an important goal in its own right, since those plants account for almost all of the coal-fired generating capacity in the United States and abroad. However, there are already existing programs to address CCS at existing plants, and the decision to remove the FutureGen program’s specific focus on cutting-edge technology (IGCC) at new plants was not well explained. In at least two ways, DOE’s decision, which affected potentially up to $1.3 billion in federal funding, was not well considered. First, the decision was made on the basis of a flawed comparison of life-cycle costs for the original FutureGen, in that DOE compared an estimate in constant dollars with an estimate in inflated (current) dollars. Second, the decision was not based on a systematic and comprehensive comparison of the costs, benefits, and risks of the original FutureGen versus the restructured FutureGen. An expanded analysis of the costs, benefits, and risks of the original FutureGen compared with a range of modifications to the program could have included incremental changes to the original FutureGen program that could have preserved some of its original goals and benefits while mitigating costs.
Such an analysis might also have detailed the risk that DOE would receive only a small number of applications and that those applications might not include IGCC. The analysis could also have considered whether DOE’s $1.3 billion contribution for total program funding presents the best option for advancing the overall goals of CCS in both existing and future plants. Recommendations for Executive Action To help ensure that important decisions about the FutureGen program reflect an adequate knowledge of the potential costs, benefits, and risks of viable options, and to promote the attainment of the goals of the program while protecting taxpayer interests, we are making the following two recommendations to the Secretary of Energy: 1. Before implementing significant changes to FutureGen or before obligating additional funds for such purposes, the Secretary of Energy should direct DOE staff to prepare a comprehensive analysis that compares the relative costs, benefits, and risks of a range of options that includes (1) the original FutureGen program, (2) incremental changes to the original program, and (3) the restructured FutureGen program. 2. In addition, the Secretary should consider the results of the comprehensive analysis and base any decisions that would alter the original FutureGen on the most advantageous mix of costs, benefits, and risks resulting from implementing a combination of the options that have been evaluated. Agency Comments and Our Evaluation We provided a draft of this report to the Secretary of Energy for review and comment. DOE did not comment on the recommendations and conclusions of the report; however, it provided technical and clarifying comments, most of which we have incorporated, as appropriate. 
For example, we revised the report to reflect DOE’s comment that it had reached its decision to restructure FutureGen, based on concerns about increasing costs associated with constructing the original FutureGen project and that it had attempted to negotiate a more favorable cost sharing agreement with the Alliance. However, DOE added that it had stopped those negotiations because it believed that the Alliance would not be able to financially partner with DOE to complete the project. We also revised the report to reflect information provided by DOE about the role of IGCC in the original and restructured FutureGen efforts, the type of knowledge likely to be disseminated by the original and restructured FutureGen efforts, and budget and appropriation data for FutureGen, beginning in fiscal year 2004. DOE’s comments are reprinted in appendix III, along with our responses. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Energy, the DOE Office of the Inspector General, and interested congressional committees. This report also will be available at no charge at GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or gaffiganm@gao.gov. Contact points for our Office of Congressional Relations and our Office of Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. 
Appendix I: Scope and Methodology We examined (1) the goals of the original and restructured FutureGen programs, (2) the similarities and differences between the restructured FutureGen and other Department of Energy (DOE) carbon capture and storage programs, and (3) the extent to which DOE used sufficient information to support its decision to restructure the FutureGen program. To examine the goals of the original and restructured FutureGen programs, including the results of the different approaches for meeting these goals, we reviewed relevant appropriations and agency documents, including budget justifications from fiscal years 2005 through 2009; the program plan for FutureGen that DOE submitted to Congress in 2004; the cooperative agreement between DOE and the FutureGen Industrial Alliance (Alliance), and its subsequent renewals; DOE’s draft strategic planning documents and funding announcement for the restructured program; and public responses to DOE’s request for information about the restructured FutureGen and its funding announcement. We also reviewed congressional testimony about FutureGen and related topics by officials from the Alliance; DOE, including the Secretary of Energy; and other knowledgeable stakeholders, such as academic and industry researchers. In addition, we met with and reviewed documents provided by officials and researchers from DOE, the Alliance, industry, nonprofit research organizations, and academia. In particular, we interviewed DOE officials from the Office of Fossil Energy’s (FE) National Energy Technology Laboratory (NETL) and Office of Clean Coal. Finally, we conducted semi-structured interviews with knowledgeable stakeholders from the electric power and coal industries, nonprofit research organizations, and academia, among others. During the interviews, we discussed the goals, approaches, and anticipated results of the original and restructured FutureGen programs.
Our method for conducting these interviews, including how we selected the knowledgeable stakeholders, appears in the next paragraph. We conducted semi-structured interviews with 14 knowledgeable stakeholders from the electric power and coal industries, nonprofit research organizations, and academia, among others. We selected a nonprobability sample of stakeholders and stakeholder organizations using a “snowball sampling” technique, whereby each stakeholder we interviewed identified additional stakeholders and stakeholder organizations for us to contact. Specifically, we identified the first three stakeholders to interview from previous, related GAO work and a group of contributors toward key scientific assessments of climate change and clean coal technology (Massachusetts Institute of Technology, The Future of Coal: Options for a Carbon-Constrained World (Cambridge, MA, 2007), and IPCC, IPCC Special Report on Carbon Dioxide Capture and Storage (Montreal, Canada, Sept. 2005)). To examine the similarities and differences between the restructured FutureGen and DOE’s other carbon capture and storage programs, we reviewed relevant agency documents, including the program plan for FutureGen that DOE submitted to Congress in 2004; DOE’s draft strategic planning documents and funding announcement for the restructured program; and relevant laws. We met with and discussed these programs with officials from NETL and FE’s Office of Clean Coal. We also conducted semi-structured interviews with knowledgeable stakeholders from the electric power and coal industries, nonprofit research organizations, and academia, among others. During these interviews, we discussed the relationship between the restructured FutureGen and DOE’s other CCS programs. Finally, we reviewed public responses to DOE’s request for information about the restructured FutureGen and DOE’s funding announcements for the restructured FutureGen and Round III of the Clean Coal Power Initiative.
To examine the extent to which DOE used sufficient information to support its decision to restructure the FutureGen program, we reviewed documents from DOE and the Alliance, including cost estimates; the cooperative agreement and subsequent updates to it; letters, presentations, and proposals documenting the renegotiation of terms for the cooperative agreement; proposed incremental changes for controlling costs; and the draft strategic planning documents and funding announcement for the restructured program. We also reviewed congressional testimony about FutureGen and related topics by officials from the Alliance; DOE, including the Secretary of Energy; and other knowledgeable stakeholders, such as academic and industry researchers. We met with and discussed the information used to support the decision to restructure FutureGen with officials from NETL and FE’s Office of Clean Coal. In addition, we discussed and reviewed analyses of the costs for building coal-fired, electric power plants with officials and researchers from industry, academia, and government, including DOE’s Energy Information Administration. Moreover, we discussed these costs during semi-structured interviews with knowledgeable stakeholders from the electric power and coal industries, nonprofit research organizations, and academia, among others. We also reviewed public responses to DOE’s request for information about the restructured FutureGen and its funding announcement. Finally, we reviewed our recent work and guidance on best practices for cost estimation, program management, and programmatic decision making, as well as guidance from DOE and the Office of Management and Budget. We conducted this performance audit from June 2008 to February 2009, in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Budget Authority and Obligations for FutureGen

Appendix III: Comments from the Department of Energy

The following are GAO’s comments on the Department of Energy’s letter dated February 4, 2009.

GAO’s Comments

1. We modified our report to address DOE’s concerns about our discussion of the 13 options for incremental changes.

2. We modified our report to add clarifying information on Round III of the Clean Coal Power Initiative.

3. We added clarifying information about the timing of the site selection announcement and the release of DOE’s NEPA Record of Decision.

4. We revised the footnote to state that, according to DOE, not less than a 50 percent nonfederal cost share will be required for all of the restructured FutureGen’s stages.

5. We have revised the report to remove the referenced discussion.

6. We made DOE’s editorial correction.

7. The report does not state or imply that the location of the site was the reason for the program’s restructuring, but rather states that DOE’s restructuring decision was based on a desire to contain costs in a time of increasing cost pressures. However, we revised the report to clarify that the Alliance announced its site selection decision before DOE’s Record of Decision was released—which has not happened, as of the date of this report.

8. DOE clarifies that under the restructured program, IGCC plants are expected to have a nominal capacity of 300 megawatts, but non-IGCC projects need only be at a scale sufficient to prove commercial viability and be designed to produce and capture 1 million tons of CO2 per year. We revised the report to reflect this information.

9. As suggested by DOE, we revised table 2 to reflect that the original FutureGen program focused exclusively on IGCC.

10. As indicated in our response to comment 5, we have revised the report to remove the referenced text.

11. We made appropriate revisions to the report to reflect that Round III of the Clean Coal Power Initiative focuses on carbon sequestration.

12. We revised table note “d” to table 3 to state that DOE’s cost sharing is generally capped at 80 percent.

13. We revised the report to clarify that senior DOE officials directed FE to negotiate new cost sharing arrangements under the cooperative agreement with the Alliance.

14. We revised the report to use the word “continue” in place of “renew” wherever it would more accurately reflect the various stages of the cooperative agreement.

15. Our draft report did not insinuate or state that DOE did not favor the Mattoon, Illinois, site or that DOE’s restructuring decision was based on disapproval of the Alliance’s site selection announcement. Rather, our report states that DOE’s decision was based on a desire to limit its exposure to increased costs. However, as suggested by DOE, we clarified the report by adding that DOE had instructed the Alliance not to announce the site selection before DOE could release the Record of Decision.

16. We agree with DOE and revised the report to clarify that DOE lost confidence in the ability of the Alliance to fund its share of the project cost, rather than that DOE lost confidence in the Alliance members or their representatives.

17. We revised the report to clarify that both DOE’s press release announcing the restructured program and the updated draft strategic document, dated July 1, 2008, that DOE provided Congress indicated that the restructured program would include IGCC.

18. DOE provided information regarding the official basis for restructuring the FutureGen program and its budget authority, obligations, and expenditures, which we incorporated into our report, including table 4 and its table notes. We also included in the report an additional assessment of documents to which DOE referred as providing the basis for its decision to restructure FutureGen.

19. Regarding the tables in appendix II of the draft report, DOE provided updated FutureGen appropriations information for fiscal years 2004 and 2007 and certain calculations for the fiscal year 2006 FutureGen budget. In response, we merged tables 4 and 5 from the draft to create one table in the final report, and we adjusted the figures and calculations for the data that DOE provided.

Appendix IV: GAO Contact and Staff Acknowledgments

Staff Acknowledgments

In addition to the contact named above, Ernie Hazera (Assistant Director), Nancy Crothers, Cindy Gilbert, Chad M. Gorman, Angela Miles, Karen Richey, Michael Sagalow, and Jeanette M. Soares made key contributions to this report. Harold Brumm, Jr., and Timothy Persons also made important contributions.
Coal-fired power plants generate about one-half of the nation's electricity and about one-third of its carbon dioxide (CO2) emissions, which contribute to climate change. In 2003, the Department of Energy (DOE) initiated FutureGen--a commercial-scale, coal-fired power plant to incorporate integrated gasification combined cycle (IGCC), an advanced generating technology, with carbon capture and storage (CCS). The plant was to capture and store underground about 90 percent of its CO2 emissions. DOE's cost share was 74 percent, and industry partners agreed to fund the rest. Concerned about escalating costs, DOE restructured FutureGen.

GAO was asked to examine (1) the original and restructured programs' goals, (2) similarities and differences between the new FutureGen and other DOE CCS programs, and (3) whether the restructuring decision was based on sufficient information. GAO reviewed best practices for making programmatic decisions, FutureGen plans and budgets, and documents on the restructuring of FutureGen. GAO contacted DOE, industry partners, and experts.

The original FutureGen program and the new restructured FutureGen program attempt to use CCS at coal-fired power plants to achieve near-zero CO2 emissions and to make CCS economically viable. However, they take different approaches that could affect CCS's commercial advancement. First, the original program aimed at developing knowledge about the integration of IGCC and CCS at one plant; in contrast, the new program could provide opportunities to learn about CCS at different plants, such as conventional ones that use pulverized coal generating technology. Second, the original program was operated by a nonprofit consortium of energy companies at one plant, while the new program called for CCS projects at multiple commercial plants. The new, restructured FutureGen differs from most DOE CCS programs.
The new FutureGen would develop and integrate multiple CCS components at coal-fired plants (including CO2 capture, transportation, and storage underground). Other programs concentrate on only one CCS component and/or a related component (e.g., capture or capture and compression). However, Round III of DOE's Clean Coal Power Initiative (CCPI) is a cost-shared partnership with industry that funds commercial CCS demonstrations at new and existing coal-fired plants. The new FutureGen is most like CCPI in that both fund CCS commercial demonstrations at several plants to accelerate CCS deployment and require that participants bear 50 percent of the costs, but DOE expects the new FutureGen to have more funding for commercial demonstrations than CCPI. Moreover, the new FutureGen targets a higher amount of CO2 to be captured and stored (at least 1 million metric tons of CO2 annually per plant) than CCPI (300,000 metric tons).

Contrary to best practices, DOE did not base its decision to restructure FutureGen on a comprehensive analysis of factors, such as the associated costs, benefits, and risks. DOE made its decision, largely, on the conclusion that costs for the original FutureGen had doubled and would escalate substantially. However, in its decision, DOE compared two cost estimates for the original FutureGen that were not comparable because DOE's $950 million estimate was in constant 2004 dollars and the $1.8 billion estimate of DOE's industry partners was inflated through 2017. As its restructuring decision did not consider a comprehensive analysis of costs, benefits, and risks, DOE has no assurance that the restructured FutureGen is the best option to advance CCS. In contrast to the restructuring decision, DOE's Office of Fossil Energy had identified and analyzed 13 options for incremental, cost-saving changes to the original program, such as reducing the CO2 capture requirement.
While the Office of Fossil Energy did not consider all of these options to be viable, it either recommended or noted several of them for consideration, with potential savings ranging from $30 million to $55 million each.
Background Federal agencies, including DOD, are responsible for ensuring that their funds are expended in accordance with the purposes and limitations specified by the Congress. DOD Directive 7200.1 specifies the requirements for accounting and fund control systems that DOD activities are to follow to comply with congressional purposes and limitations. For example, the Directive states that these systems are to ensure that funds are used only for congressionally authorized purposes and that payments are not to exceed amounts available. In order to comply with legal and regulatory requirements, DOD organizations’ accounting and fund control systems must be able to record disbursements as expenditures of appropriations and as reductions of previously recorded obligations. Proper matching of disbursements with related obligations is necessary to ensure that the agency has reliable information on the amount of funds available for obligation and expenditure. DOD’s administrative control procedures are intended to prevent unauthorized disbursements and purchases and to ensure that DOD organizations do not obligate or spend more funds than the Congress has appropriated. These control procedures require DOD organizations to (1) commit or administratively reserve funds based on firm procurement directives, orders, requisitions, or requests, (2) record obligations in appropriation account(s) when they place an order, award a contract, or receive a service, and (3) match disbursements with the related obligations in the accounting records as payments are made. Certain DOD organizations commit and obligate funds, and other DOD offices generally disburse the funds. For example, during fiscal year 1993, the Columbus Center of the Defense Finance and Accounting Service (DFAS), one of DOD’s major contract paying activities, administered over 370,000 contracts valued at $482 billion and paid over 1 million invoices totaling $64 billion. 
Disbursing offices, such as DFAS-Columbus, are required to ensure that (1) payments are made only for goods and services authorized by purchase orders, contracts, or other authorizing documents, (2) the government received and accepted the goods and services, and (3) payment amounts are accurately computed. They are also responsible for ensuring that accounting data on payment supporting documents are complete and accurate. In order to match disbursements with the corresponding obligations and record the transactions into the accounting records, the disbursement data must flow from the activity making the disbursements (paying activity) to the activity responsible for matching the payment to its corresponding obligation and recording the transaction in the accounting records (accountable activity). The flow of data from the paying activity to the accountable activity varies by military service. For example, Air Force transactions generally flow from the paying activities through the DFAS-Denver Center to the Air Force accountable activities. Navy disbursements flow essentially the same way, except the data flows through the DFAS-Cleveland Center to the accountable activities. The Army uses a more simplified flow: its disbursements generally flow from the paying activity directly to the Army accountable activity if the paying activity is an Army activity or the DFAS-Columbus Center. Disbursements made for the Army by the Air Force, Navy, and non-DOD organizations flow through the DFAS-Indianapolis Center to the Army accountable activities. 
Scope and Methodology To measure DOD’s progress in resolving problem disbursements and evaluate DOD’s criteria for identifying and reporting on disbursements that could not be properly matched to obligations, we met with the DFAS officials responsible for identifying the universe of transactions to discuss and assess how they determined if all disbursements not properly matched with obligations were included and correctly reported. We also analyzed the criteria DOD used to establish the $19.1 billion as its benchmark for reporting progress in correcting disbursements not properly matched to obligations. We obtained the dollar values of disbursements discussed in this report from agency reports or compiled them from agency records. We did not verify the accuracy of disbursement data included in agency reports or records because the data consisted of hundreds of thousands of disbursement transactions. Consequently, we cannot provide any assurance that the $24.8 billion of problem disbursements that had not been properly matched to obligations as of June 30, 1994, are correct and that total problem disbursements will not prove to be greater when all the facts are known. To identify systemic control weaknesses that keep DOD from solving its disbursement problems, we reviewed audit reports and the Secretary of Defense’s fiscal year 1993 Annual Statement of Assurance under the Federal Managers’ Financial Integrity Act. To assess the DOD team’s progress in addressing these weaknesses, we spoke with team officials at DFAS centers and headquarters and reviewed various progress reports and other internal documents on disbursement problems and corrective actions taken or planned. 
We performed our work at the offices of the DOD Comptroller, Washington, D.C.; the Air Force Materiel Command, Dayton, Ohio; the Army Materiel Command, Alexandria, Virginia; Headquarters, DFAS, Arlington, Virginia; and the following DFAS Centers: Denver, Colorado; Indianapolis, Indiana; Kansas City, Missouri; Columbus, Ohio; and Cleveland, Ohio. Our work was performed between November 1993 and September 1994 in accordance with generally accepted government auditing standards. As agreed with your offices, we did not obtain written DOD comments on a draft of this report. However, we discussed the results of our review with officials from the DOD Comptroller’s Office and DFAS and have incorporated their views where appropriate. These officials generally agreed with our findings. DOD Has Made Some Progress in Reducing Problem Disbursement Transactions DOD reported that it had reduced its problem disbursements from the benchmark figure of $19.1 billion to $9.7 billion—a $9.4 billion reduction—as of June 30, 1994. Although we agree that DOD has made some progress, DOD’s reduction was actually $5.8 billion. We found that $3.6 billion of the reported reduction was for problem disbursements that the Navy reclassified as negative ULOs—which still must be resolved. According to the DOD team leader, prior to December 1993, one of the Navy’s accounting systems would not accept a payment transaction if it would result in a negative ULO. Instead, the Navy’s system would reject such transactions to a suspense file that included disbursements that had not been matched to obligations. When DOD established the June 1993 benchmark for Navy transactions, it included the amounts in the suspense file and—therefore, the Navy negative ULOs—in the $19.1 billion benchmark. In December 1993, the Navy changed its system procedures and began processing the rejected transactions as negative ULOs. 
At that time, DOD reclassified these transactions as negative ULOs and took them off the “problem” list. We informed DOD on several occasions that while these transactions were properly reclassified as negative ULOs, they should not have been removed from the “problem” list simply because they were reclassified from one category of problem transaction to another. Although the officials agreed that the negative ULOs were problem disbursements that still needed to be resolved, they continued to report them as resolved in measuring and reporting on DOD’s progress to the Senate Committee on Governmental Affairs and the House Committee on Government Operations, Legislation and National Security Subcommittee. For example, on April 12, 1994, the DOD Comptroller testified before the Senate Committee that as of February 1, 1994, DOD had reduced the $19.1 billion benchmark figure by $7.1 billion. However, our analysis showed that as of that date, $3.2 billion of unresolved Navy negative ULOs were included in the $7.1 billion reduction. DOD Team Did Not Initially Identify the Full Extent of Disbursement Problems Compounding DOD’s disbursement problem is the fact that the June 1993 benchmark, which is used to measure and report DOD’s progress, did not include all types of problem transactions. As a result, DOD significantly understated the magnitude of problem disbursements by billions of dollars. After we brought this to DOD officials’ attention, DOD began, in March 1994, to collect data on the problem disbursement transactions that were not originally included under the criteria DOD used to establish its benchmark. Using the revised criteria, our analysis showed that DOD’s records contained at least $24.8 billion in problem disbursement transactions as of June 30, 1994. DOD Comptroller officials stated that they plan to update the original benchmark figure to show DOD’s total disbursement problem but could not tell us when it would be reported. 
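The adjustment GAO applied to DOD's reported progress is simple arithmetic: any amount that was merely reclassified from one problem category to another must be backed out of the claimed reduction. A minimal sketch using the report's June 30, 1994, figures (the function name is illustrative, not from the report):

```python
def actual_reduction(benchmark, reported_remaining, reclassified_unresolved):
    """Back unresolved reclassifications out of a reported reduction.

    benchmark: original problem-disbursement benchmark
    reported_remaining: problem disbursements still reported outstanding
    reclassified_unresolved: amounts moved to another problem category
        (e.g., Navy negative ULOs) but not actually resolved
    """
    reported_reduction = benchmark - reported_remaining
    return reported_reduction - reclassified_unresolved

# Figures from the report, in billions of dollars:
# DOD claimed a $9.4 billion reduction ($19.1B benchmark less $9.7B
# remaining); GAO's adjusted figure removes the $3.6 billion of
# reclassified-but-unresolved Navy negative ULOs, leaving $5.8 billion.
adjusted = actual_reduction(19.1, 9.7, 3.6)
```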
The original criteria DOD used to define problem disbursements and the revised criteria DOD began using in March 1994 differed in several ways. According to DOD Comptroller officials, this was DOD’s first effort to quantify its overall disbursement problem. In order to establish a benchmark amount in a reasonable period of time, DOD officials stated, they initially focused on known problems to establish their criteria for defining problem transactions. As a result, DOD did not identify certain categories of problem disbursements which should have been included to more fully disclose the extent of the disbursement problem. Specifically, as discussed in the following sections, DOD (1) used net rather than gross disbursement balances, which significantly lowered the amount reported, (2) did not include disbursement problems for certain activities in its figures, and (3) did not consider certain disbursement transactions if initial matching efforts had not been attempted. Netting Positive and Negative Balances Lowered Reported Totals We found that some DOD activities had been offsetting positive and negative balances, thus reporting lower amounts. For example, if a contract had $10 million of positive obligation balances and $15 million of negative obligation balances, the activity would net the two balances and report the $5 million of negative balances as problem disbursements. DOD’s progress report on disbursements showed that as of March 31, 1994, DFAS-Denver had $630 million of negative ULOs that were determined by netting positive and negative balances. At our request, DFAS-Denver developed a computer program to identify and summarize its negative ULO transactions without netting them with the positive amounts. Using this program, DFAS-Denver reran the March 1994 data and found that its negative ULO balances totaled about $7 billion, which was more than 11 times the $630 million of net balances reported to DFAS headquarters. 
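The understatement caused by netting can be reproduced with the report's own contract example: $10 million of positive obligation balances offset against $15 million of negative balances yields a reported $5 million, while the gross negative exposure is the full $15 million. A minimal sketch of the two approaches (function names are illustrative):

```python
def net_negative_balance(balances):
    """Netting approach: offset positive and negative balances and
    report only the resulting shortfall, if any."""
    total = sum(balances)
    return -total if total < 0 else 0.0

def gross_negative_balance(balances):
    """Gross approach: report the absolute sum of the negative
    balances without offsetting them against positive balances."""
    return -sum(b for b in balances if b < 0)

# Contract example from the report, in millions of dollars
balances = [10.0, -15.0]
net_negative_balance(balances)    # 5.0 reported under netting
gross_negative_balance(balances)  # 15.0 actual negative ULO exposure
```

The same mechanism explains the DFAS-Denver rerun: summarizing negative ULOs gross rather than net raised the reported figure from $630 million to about $7 billion.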
Since DFAS-Denver did not previously maintain gross data on negative ULO balances, it could not provide us with any trend data prior to March 1994. However, DFAS-Denver officials acknowledged that they previously had always used net values in their reports to DFAS headquarters. After we brought this to the DFAS officials’ attention, DFAS-Denver included the gross negative ULO amounts in its June 1994 and subsequent progress reports. In April 1994, DFAS-Denver began to assign staff to review and correct the $7 billion of negative ULO balances. In total, 23 staff were eventually assigned to this task. As a result of this effort, DFAS-Denver reported that it had reduced its negative ULOs to about $2.1 billion as of June 30, 1994. Not All Activities Were Included When the team calculated the amount of problem disbursements, it did not include all DOD activities. According to the DOD team leader, the team focused its efforts only on the military services because it believed that the other activities did not have material disbursement problems comparable to those of the military services. Our review of April 30, 1994, disbursement data at the DFAS-Indianapolis Center identified 23,999 disbursements totaling about $2.4 billion at 36 activities other than the military services that had not been matched to an obligation. These amounts included 11,239 disbursements totaling about $1.5 billion that had remained unmatched for over a year. After we brought this to the DOD team leader’s attention, he agreed that other DOD activities should be included but did not know when or if DOD would begin to do so. Some Transactions Were Excluded DOD did not include disbursement transactions if no matching efforts had yet been attempted, regardless of how old the transactions were. The DOD team leader told us that they did not initially include these types of transactions since aging data were not readily available to show how long the transactions had remained outstanding. 
After our meeting with the DOD team leader, the DFAS Deputy Director for General Accounting issued a memorandum on April 21, 1994, that required the Centers to maintain aging schedules for disbursements. He stated that it should never take over 60 days for disbursement transactions to be forwarded to and received by an accountable station for matching. In response to the DFAS Deputy Director’s guidance, the DFAS Centers reported to DFAS headquarters in June 1994 that their records included at least $14.8 billion of disbursements as of April 30, 1994, that were over 60 days old and had not yet been received by an accountable station for matching with an obligation. Change in Fund Control Requirements Demands Accurate Records of Obligations and Disbursements An important facet of DOD’s disbursement problem is its inability to accurately account for and report on the amount of obligations and disbursements for old appropriation accounts. Of the $24.8 billion in problem disbursement transactions as of June 30, 1994, about $5 billion was related to canceled appropriations, originally called the “M” accounts. Currently, the DOD Comptroller’s office has not provided DOD components with any guidance on how to correct these problem disbursements in their accounting records. In 1990, the Congress changed the law for reporting on old appropriation accounts because it found that the controls over them were not working as intended. Specifically, DOD (which had most of the “M” and merged surplus authority accounts) had been expending funds from these accounts without sufficient assurance that authority for such expenditures existed or in ways that the Congress did not intend. 
The Congress was particularly concerned about (1) the large balances available to DOD in the “M” and merged surplus accounts, which totaled a reported $50 billion at the time of the new law, (2) DOD’s access to and routine use of hundreds of millions of dollars from the “M” accounts and merged surplus authority to cover contract cost increases, and (3) the lack of congressional oversight over these accounts. The Congress passed Public Law 101-510 to strengthen its oversight and control over expired appropriations. The law, enacted on November 5, 1990, canceled the budget authority associated with obligations recorded in “M” accounts in stages, with the final cancellation occurring on September 30, 1993. The new law also required agencies to maintain records for each expired appropriation account, reflecting obligated and unobligated balances by year, for 5 years, and required cancellation of the obligated and unobligated balances of other appropriation accounts 5 years after the budget authority expires, regardless of whether goods or services contracted for have been provided or paid for. Accordingly, an additional year of appropriation accounts was canceled on September 30, 1994, and other appropriation accounts will be canceled each fiscal year thereafter as each rolling 5-year period ends. In order to ensure that obligations and expenditures do not exceed the amounts appropriated, agencies will have to maintain adequate records by fiscal year of obligated, unobligated, and expended balances of current, expired, and canceled budget authority. The Congress, aware of particular problems with DOD’s handling of its “M” accounts, also included in Public Law 101-510 a requirement that DOD audit all its “M” accounts by December 31, 1991. One purpose of the audit was to establish the total balances of “M” accounts. Therefore, to achieve this purpose, DOD would need to properly match disbursements with obligations associated with those accounts.
In April 1993, we reported that as of September 30, 1992—or 9 months after the audit of its “M” accounts was to be completed and nearly 2 years after the passage of Public Law 101-510—DOD had still not finished its review of “M” account balances. At that time, DOD stated that it would complete its review by September 30, 1993—the date that the final “M” account balances were to be canceled. As that date approached, the Department of the Treasury sought our opinion on whether it was authorized to continue accepting agency adjustments to canceled DOD “M” account balances after the September 30, 1993, cutoff in order to properly record disbursements associated with those accounts. We stated that the Department of the Treasury could make these adjustments to reflect the completion of accounting transactions that occurred before the accounts were closed. In response to our opinion, the Department of the Treasury granted agencies until May 31, 1994, to post disbursements that had not been previously matched to obligations. In February 1994, we advised the Secretary of Defense that DOD still had billions of dollars of disbursements that had not been properly matched to obligations, many of which were in the “M” accounts, and asked how DOD planned to deal with the situation. In May 1994, the DOD Deputy Comptroller for Financial Systems responded that it was unlikely that DOD could identify and correct all disbursement problems by the May 31, 1994, deadline established by the Department of the Treasury. He also asserted that there was no avenue for correcting errors attributable to closed accounts after the deadline. We disagree with DOD’s assertion that it no longer has an avenue to resolve problem disbursements associated with its canceled “M” accounts. 
As a result of accounts being canceled under Public Law 101-510, agencies must find a source of funds to pay otherwise valid obligations that did not become payable until after the originally chargeable account was canceled. The law provides that, with certain limitations, agencies may pay such obligations out of currently available appropriations. One limitation on this authority is that agencies must be able to demonstrate that there would have been enough unexpended funds in the canceled account to make the payment. Agencies could do this by showing that there is a sufficient unexpended balance in the Department of the Treasury account for the appropriation to make the payment. However, an account maintained by the Department of the Treasury ceases to exist once it is canceled under Public Law 101-510. Therefore, agencies will have to use their own accounting records as a basis for demonstrating that the payment could have been made from the canceled account. To do this, DOD must maintain current and accurate records of disbursements attributable to both the canceled “M” accounts and any subsequent canceled accounts in order to justify using current appropriations to make payments attributable to these previously closed accounts. Accordingly, DOD must reconcile the approximately $5 billion of problem disbursements related to canceled “M” accounts that had not been properly matched to obligations as of June 30, 1994. This procedure is necessary to ensure that payments do not exceed the originally appropriated amounts and result in Antideficiency Act violations. DOD will also need to retain records of the transactions related to the canceled appropriation accounts until such time as it is determined that there are no longer any outstanding claims against the accounts.
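The record-keeping burden described above can be expressed as a simple test: before charging a current appropriation for an obligation of a canceled account, the agency's own records must show that the canceled account would have had sufficient unexpended funds to cover the payment. A minimal sketch of that one check, with illustrative function and parameter names; the statute's other limitations are deliberately not modeled, and the dollar figures are hypothetical:

```python
def may_pay_from_current_funds(payment, appropriated, disbursed):
    """Check the one limitation discussed in the report: the canceled
    account must have had enough unexpended balance (amount appropriated
    minus properly matched disbursements) to cover the payment.
    Other statutory limits in Public Law 101-510 are omitted here."""
    unexpended = appropriated - disbursed
    return payment <= unexpended

# Hypothetical figures, in millions of dollars: a $2 million claim
# against a canceled account that had $100 million appropriated and
# $99 million properly recorded as disbursed.
may_pay_from_current_funds(2.0, 100.0, 99.0)   # False: only $1M unexpended
may_pay_from_current_funds(0.5, 100.0, 99.0)   # True
```

The sketch also shows why unmatched disbursements matter: if the `disbursed` figure is understated because payments were never matched to obligations, the computed unexpended balance is overstated, and a payment could be approved that exceeds the originally appropriated amount.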
Correcting Internal Control and System Weaknesses Is Key to Resolving Long-standing Problems We have previously reported that long-standing system problems hinder DOD’s ability to properly match disbursements with obligations. Our current review confirmed that DOD continues to experience serious control weaknesses over its disbursement process. Continued management emphasis is essential if DOD is to correct the material weaknesses that cause negative ULOs and other disbursement problems. These weaknesses are cited in the Secretary of Defense’s fiscal year 1993 Annual Statement of Assurance under the Federal Managers’ Financial Integrity Act. In March 1994, the DOD Comptroller issued guidance aimed at reducing disbursements made in excess of recorded obligations. Over the long term, DOD is relying on its Corporate Information Management (CIM) initiative, which to date has had only limited success, to improve the existing procedures and systems involving DOD disbursements as well as the consistency of data in DOD’s financial management systems. DOD Comptroller Guidance on Disbursements in Excess of Obligations In that guidance, the Comptroller stated that “the Department routinely writes checks on accounts that are in the red, under the assumption that these accounts are in the red because of innocent accounting errors...Even when accounts have been in a deficit status for some time, Department procedures permit continued expenditure of funds against those negative balances...Such practices are clearly contradictory to the Antideficiency Act and flatly violate minimum standards of sound financial management.” The Comptroller also noted that DOD had accepted the idea that negative balances were caused by errors and that few people felt responsible for correcting the problem. He said that DOD had 23 appropriation accounts that were in the red as of December 31, 1993, and that he had asked the DOD Inspector General to initiate investigations for 10 potential Antideficiency Act violations.
According to a DOD Inspector General official, three of the investigations had been completed as of August 1994 and, in all three cases, the reported deficiencies had been caused by accounting errors which, when corrected, had not resulted in Antideficiency Act violations. The official advised us that the Inspector General was completing the remaining investigations but could not tell us when the investigations would be done. We believe that the Comptroller’s guidance is a first step in addressing DOD’s long-standing disbursement problems. DOD must make every effort to ensure that this guidance is properly implemented. However, this will not be an easy task given DOD’s years of sloppy bookkeeping, the failure to observe and enforce rudimentary control features over its disbursement process, and serious deficiencies in DOD’s contract pay and accounting systems. For example, we found that DOD continues to experience serious internal control weaknesses over one of its computer-based disbursement systems, the Mechanization of Contract Administration Services (MOCAS), which is used to administer contracts and pay invoices at DFAS-Columbus. MOCAS records showed that as of June 30, 1994, 2,551 contracts were overdisbursed by a total of $1 billion. This was a $612 million increase in overdisbursed contracts at that location since July 1993. DFAS-Columbus officials told us that they did not have any reports to show how long the contracts had been in a negative status. They also stated that most of the overdisbursements were probably caused by errors in recording the disbursements or obligations. They noted that until detailed reviews of the 2,551 contracts were completed, they would not know how much, if any, of the $1 billion of recorded overdisbursements resulted from overpayments or whether they were caused by accounting errors. We agree that each of the 2,551 contracts will have to be reviewed to determine if contractors were overpaid. 
We also believe that there is a good possibility that some overpayments may have occurred. For example, we recently reported that during a 6-month period, hundreds of contractors returned about 4,000 checks totaling $751 million to DFAS-Columbus. The $751 million included $305 million of overpayments, virtually all of which were voluntarily returned by the contractors. Systems Weaknesses Will Not Be Corrected for Years DOD has cited the CIM initiative as the long-term solution to its systems problems. However, CIM system improvements will not be implemented for years and DOD has not yet established a milestone date for completing the CIM work related to its contract payment and accounting systems. According to DFAS officials and the June 10, 1994, Financial Systems Plan (which is DFAS’s overall plan to reduce the number of financial systems), work being performed under the overall CIM concept is expected to help resolve DOD’s disbursement problems. The officials stated that DOD’s goal is to have one overall system which would support both the contract pay and accounting functions. These officials also stated that completing the work successfully in several areas will be key to achieving the overall system goal. These areas include the following: Reducing the number of computer data elements for finance and accounting. Currently, there are over 100,000 data elements in over 250 DOD finance and accounting systems. As of July 1994, DFAS determined that the finance and accounting community needed only 778 data elements, of which 256 have already been approved for use. Eliminating duplication of work through single source data entry. Since the data will be entered only once to satisfy the needs of all functional areas, including the acquisition and financial areas, this should help reduce errors, such as transposing numbers. 
We agree with DOD that standardizing accounting information and related procedures should make processing disbursement transactions less complex and will help eliminate DOD’s disbursement problems. However, this will be a difficult task since DOD’s disbursing process is very complex and decentralized. For example, in the DOD Comptroller’s April 12, 1994, testimony before the Senate Committee on Governmental Affairs, he noted that the purchase of an F-18 aircraft can take as many as 105 paper transactions and involve different functional areas, such as budgeting, contracting, and accounting. According to the Comptroller, the purchasing process involves separate chain-of-command organizations that must work together to accomplish the tasks, and an honest mistake in any one of the 105 paper transactions can produce inconsistencies that require extensive manual research to resolve. In light of the problems DOD has encountered in developing and improving its systems over the years, accomplishing CIM’s objectives will be a long-term effort. Therefore, DOD will have to continue to rely on existing systems to provide information on the amount of funds appropriated, obligated, and disbursed and to report this information to the appropriate congressional committees. Thus, DOD needs to pursue short-term efforts to improve the quality of the information in its systems. This can be as simple as complying with existing guidance and procedural requirements for (1) recording obligations prior to making contract payments, (2) detecting and correcting errors in the disbursement process, and (3) posting accurate and complete accounting information in systems that support the disbursement process. Otherwise, the problems DOD has encountered in properly matching disbursements with obligations will continue. 
Conclusions Despite numerous audit reports over the last 14 years that repeatedly identified DOD’s internal control weaknesses, DOD continues to experience serious problems in accounting for disbursements. Not being able to properly match a disbursement to an obligation is a serious, fundamental breakdown in internal controls and DOD’s fund control systems. Although DOD has taken some initial steps to reduce its disbursement problems, serious weaknesses still exist in DOD’s systems, as evidenced by the $24.8 billion of problem disbursement transactions identified as of June 30, 1994. Intensified and sustained top-level management commitment, as called for by the DOD Comptroller, will be needed to resolve the disbursement problem. In the short term, DOD’s efforts, including its manual research of problem disbursement transactions to correct errors, will likely reduce the amount of disbursements not properly matched to obligations. However, the disbursement problem will not be adequately resolved until (1) weaknesses in control procedures that allow problem disbursements to occur are corrected and (2) improvements are made to DOD’s contract pay and accounting systems. Recommendations We recommend that the DOD Comptroller establish a new benchmark to measure progress in reducing DOD’s disbursement problems. At a minimum, the new benchmark figure should (1) include all DOD activities, (2) consist of gross values, and (3) include all disbursements that have not been received by an accountable activity for matching with an obligation within 60 days after a payment is made. We also recommend that the DOD Comptroller enforce existing regulations requiring DOD activities to record obligations and disbursements in an accurate and timely manner and require DOD activities to reconcile disbursements that have not been properly matched to obligations and correct their accounting records for all appropriation accounts, including those accounts that have been closed. 
We recommend that the DFAS Director require DFAS-Columbus to resolve the $1 billion of negative balances for the 2,551 contracts in MOCAS and report monthly to DFAS headquarters on the number of contracts that have expenditures in excess of obligations. The report should include (1) an aging schedule to show specific periods of time that the contracts and the related dollar values have been in a negative status, (2) the number and dollar amounts of new overdisbursed contracts added during the period, and (3) the number and dollar amounts of overdisbursed contracts resolved during the period. We are sending copies of this report to the Secretary of Defense, the Director of the Office of Management and Budget, and other interested parties. We will make copies available to others upon request. Please contact me at (202) 512-6240 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix I. Major Contributors to This Report Accounting and Information Management Division, Washington, D.C. Cincinnati Regional Office Related GAO Products DOD Procurement: Overpayments and Underpayments at Selected Contractors Show Major Problem (GAO/NSIAD-94-245, August 5, 1994). DOD Procurement: Millions in Overpayments Returned by DOD Contractors (GAO/NSIAD-94-106, March 14, 1994). Financial Management: Strong Leadership Needed to Improve Army’s Financial Accountability (GAO/AIMD-94-12, December 22, 1993). Financial Audit: Examination of the Army’s Financial Statements for Fiscal Years 1992 and 1991 (GAO/AIMD-93-1, June 30, 1993). Financial Management: Navy Records Contain Billions of Dollars in Unmatched Disbursements (GAO/AFMD-93-21, June 9, 1993). Financial Management: Air Force Systems Command Is Unaware of Status of Negative Unliquidated Obligations (GAO/AFMD-91-42, August 29, 1991). Financial Management: Problems in Accounting for DOD Disbursements (GAO/AFMD-91-9, November 9, 1990). 
Financial Management: Army Records Contain Millions of Dollars in Negative Unliquidated Obligations (GAO/AFMD-90-41, May 2, 1990). Financial Audit: Air Force Does Not Effectively Account for Billions of Dollars of Resources (GAO/AFMD-90-23, February 23, 1990). Financial Management: Air Force Records Contain $512 Million in Negative Unliquidated Obligations (GAO/AFMD-89-78, June 30, 1989). Management Review: Progress and Challenges at the Defense Logistics Agency (GAO/NSIAD-86-64, April 7, 1986). Defense’s Accounting For Its Contracts Has Too Many Errors—Standardized Accounting Procedures Are Needed (FGMSD-80-10, January 9, 1980).
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) efforts to properly match disbursements with corresponding obligations, focusing on: (1) DOD progress in resolving its disbursement problems; (2) DOD criteria for identifying and reporting improperly matched disbursements; and (3) systemic control weaknesses that prevent DOD from resolving disbursement problems. GAO found that: (1) as of June 30, 1994, DOD reduced its $19.1 billion of reported problem disbursements by $9.4 billion; (2) the $3.6 billion that DOD reclassified as negative unliquidated obligations still need to be resolved; (3) the DOD $19.1-billion benchmark for problem disbursements was understated because DOD criteria for identifying and reporting disbursements did not include all types of problem disbursements; (4) as of June 30, 1994, DOD records contained at least $24.8 billion in problem disbursements, including $5 billion in unreconciled "M" account balances; (5) DOD plans to revise its benchmark figure to reflect its disbursement problems more accurately, but it has not set a date for reporting the revised data; (6) DOD needs to maintain current and accurate disbursement records to justify using current appropriations to pay for obligations from closed accounts; (7) in March 1994, the DOD Inspector General issued guidance to stop DOD from disbursing funds in excess of recorded obligations and account balances; (8) DOD initiatives to resolve its disbursement problems include identifying the extent of the problem and improving its contract pay and accounting systems; and (9) DOD needs to emphasize and enforce sound accounting principles and internal controls to properly match disbursements with obligations.
Fewer Small Employers Claimed the Credit Than Were Thought to Be Eligible Claims of the small employer health tax credit have continued to fall well below government agency and small business group estimates of the number of eligible employers, limiting the effect of the credit on expanding health insurance coverage through small employers. In 2014, about 181,000 employers claimed the credit, down somewhat from 2010 (see figure 1). These numbers are relatively low compared to the number of employers thought eligible for the credit. In 2012, we reported that selected estimates of the number of employers eligible ranged from about 1.4 million to 4 million. The Council of Economic Advisors estimated 4 million and the Small Business Administration (SBA) estimated 2.6 million. Estimates made by small business groups included the Small Business Majority and the National Federation of Independent Businesses. Their estimates were 4 million and 1.4 million, respectively. A similar outcome is seen when the dollar value of credits claimed is compared to initial estimates. In 2010, claims totaled $468 million compared to initial estimates of $2 billion by the Congressional Budget Office (CBO) and the Joint Committee on Taxation (JCT). In March 2012, CBO and JCT estimated that the credit would cost $1 billion in 2012 and $21 billion from 2012 to 2021, down considerably from the original estimates of $5 billion and $40 billion, respectively. The revised estimates appear overstated as well given that actual claims for the credit in 2013 and 2014 were about $511 million and $541 million, respectively. 
Small Employers Have Been Unlikely to Claim the Health Tax Credit for Various Reasons Maximum Small Employer Credit Amount is Too Small Based on our interviews, discussion groups, and literature review conducted for the 2012 report, we found the small employer health tax credit has not provided a strong enough incentive for employers to begin to offer health insurance for various reasons, as discussed below. The maximum amount of the credit does not appear to be a large enough incentive to get employers to offer or maintain insurance. For example, the maximum amount is available to small businesses with 10 or fewer FTE employees that pay an average of $25,900 or less in wages in tax year 2016 (adjusted for inflation in future years). Such an employer could be eligible for a credit worth up to 50 percent of the premiums paid. Employers generally did not consider the maximum credit amount to be high enough, and the amount claimed tended to be less than the maximum, as discussed below. Few Small Employers Qualify for Maximum Small Employer Credit Amount Most small employer credit claims are likely to be for less than the maximum credit percentage. To illustrate, our 2012 report analyzed how many of the approximately 170,300 small employers making claims for tax year 2010 could claim the full credit. As figure 2 shows, only 28,100—17 percent—could use the full credit percentage. Usually, employers could not meet the average wage requirement to claim the full percentage, as 115,500—68 percent—did not qualify based on wages, but did meet the FTE requirement. To the extent that a small employer qualifies to claim the credit, the employer may not be able to fully claim the credit amount for the tax year. For tax-exempt employers, the credit amount claimed cannot exceed the total amount of the employer’s payroll taxes for the calendar year. For other small employers such as small businesses, the credit is not refundable but is limited to the actual income tax liability. 
If a small business had a year in which it ended up paying no taxes (i.e., it had no taxable income after accounting for all its other deductions and credits), then the small business tax credit could not be used for that year as there would be no income tax for the credit to reduce. Certain Credit Design Features Reduce the Amount of Credit That Can Be Claimed Credit Amount is “Phased Out” The credit amount that can be claimed “phases out” to zero as employers employ up to 25 FTE employees at higher wages—up to an average of $51,800 for 2016. Table 1 shows the phasing out of the tax credit amount we calculated for a tax-exempt employer’s contribution to health insurance in 2016. Table 2 shows the phasing out for other small employers in 2016. The amount of the credit is also reduced if premiums paid by an employer are more than the average premiums for the small group market in the state in which the employer offers insurance. The credit percentage is multiplied by the allowable premium to calculate the dollar amount of credit claimed. For example, if the state average premium is $4,441 for a single employee, but a small employer in that state paid $5,000 for an employee’s health premium, the credit would be calculated using the state average premium of $4,441 rather than the $5,000. According to IRS data, this cap reduced the credit for around 30 percent of employer claims as of 2012. Credit is Temporary Regardless of the allowable credit amount, small employers can claim the credit for just two consecutive years after 2013, which detracts from the incentive for small employers to begin offering coverage. Employers are reluctant to provide a benefit to employees that would be at risk of being taken away later when the credit is no longer available. As of 2014, the two consecutive tax years for credit claims starts with the first year a qualified employer obtains coverage through a SHOP exchange. 
In other words, if a qualified employer first obtains coverage through a SHOP exchange in 2016, the credit would only be available to the employer in 2016 and in 2017. From 2010 through 2013, the credit was available to qualifying employers that purchased coverage in the small group market outside of SHOP exchanges, which were first established in 2014. Receipt of the credit for any years between 2010 and 2013 does not disqualify an employer from receiving the credit in 2014 and in subsequent years. Costs and Complexity Deter Credit Claims Small employers have not viewed the credit as a sufficient incentive to begin offering health insurance because the credit amount may not offset enough of the cost of health insurance premiums to justify offering coverage. In addition, our 2012 report described how small business owners generally do not want to spend the time or money to gather the necessary information to calculate the credit, given that the credit will likely be insubstantial. Tax preparers told us it could take 2 to 8 hours or possibly longer to gather the necessary information to calculate the credit and that the tax preparers spent, in general, 3 to 5 hours calculating the credit. To the extent that preparers did these tasks, small employers would generally incur additional cost for these services. For example, a major complaint we heard in discussion groups with employers, tax preparers, and insurance brokers centered on gathering information on FTEs and the related health insurance premiums. Eligible employers reportedly did not have the number of hours worked for each employee readily available to calculate FTEs and their associated average annual wages nor did they have the required health insurance information for each employee readily available. Our 2012 report also noted that the complexity involved in claiming the tax credit was significant, deterring small employers from claiming it. 
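To give a sense of that arithmetic, the phase-out and state-average-premium rules described above can be sketched in code. This is a minimal illustration assuming a simple pro-rata phase-out between the 2016 thresholds cited earlier; the function name and structure are hypothetical, and the actual Form 8941 worksheets compute the reduction differently.

```python
# Illustrative sketch of the 2016 small-business credit arithmetic.
# Assumes a simple pro-rata phase-out; the exact Form 8941 worksheet
# calculation differs in detail.

MAX_RATE = 0.50            # 50% for small businesses (35% for tax-exempt employers)
FULL_CREDIT_FTES = 10      # full credit at 10 or fewer FTEs
PHASEOUT_FTES = 25         # credit falls to zero at 25 FTEs
FULL_CREDIT_WAGE = 25_900  # 2016 average-wage limit for the full credit
PHASEOUT_WAGE = 51_800     # credit falls to zero at this 2016 average wage

def estimated_credit(ftes, avg_wage, premiums_paid, state_avg_premium):
    """Rough estimate of the credit for a taxable small employer."""
    if ftes >= PHASEOUT_FTES or avg_wage >= PHASEOUT_WAGE:
        return 0.0
    # Premiums above the state's small-group average do not count.
    base = min(premiums_paid, state_avg_premium)
    # Pro-rata reductions for FTEs over 10 and wages over $25,900.
    fte_cut = max(0.0, ftes - FULL_CREDIT_FTES) / (PHASEOUT_FTES - FULL_CREDIT_FTES)
    wage_cut = max(0.0, avg_wage - FULL_CREDIT_WAGE) / (PHASEOUT_WAGE - FULL_CREDIT_WAGE)
    rate = MAX_RATE * max(0.0, 1.0 - fte_cut - wage_cut)
    return round(rate * base, 2)

# The example from the text: $5,000 paid, but the $4,441 state average caps the base.
print(estimated_credit(10, 25_000, 5_000, 4_441))  # 2220.5, i.e., 50% of $4,441
```

Note how the cap binds before the rate applies: the credit percentage is multiplied by the allowable (capped) premium, so paying more than the state average earns no additional credit.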
The complexity arises not only from the various data that must be recorded and collected (as just described), but also from the various eligibility requirements in the design of the credit and number of worksheets to be completed. In determining eligibility, exclusions from the definition of “employee” and other special rules make the calculations complex. For calculating the number of FTEs and their wages, workers excluded from the definition of employee are seasonal workers (an employee who works no more than 120 days during the year), a self-employed individual, a 2 percent shareholder in an S-corporation, a 5 percent owner of an eligible small business, or someone who is related to or a dependent of these people. While seasonal workers are excluded from FTE counts, insurance premiums paid on their behalf count toward the tax credit. In determining premiums paid by the employer, the rules exclude employer contributions to health reimbursement arrangements, health flexible spending accounts, or health savings accounts. Similarly, an employer’s premium payments exclude tobacco surcharges if an issuer charges higher premiums for tobacco users. As for the complexity of the worksheets and paperwork to be completed to claim the credit, in 2012, tax preparers told us that they thought that IRS did the best it could with the Form 8941 given the credit’s complexity. IRS officials said they did not receive criticism about Form 8941 itself but did hear that the instructions and its seven worksheets were too long and cumbersome for some claimants and tax preparers. On its website, as of 2012, IRS tried to reduce the burden on taxpayers by offering “3 Simple Steps” as a screening tool to help taxpayers determine whether they might be eligible for the credit. 
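The worker-exclusion rules above account for much of the data-gathering burden. A minimal sketch follows, assuming hypothetical worker records and the conventional 2,080-hour work year; Form 8941 applies additional rounding and aggregation rules, and relatives or dependents of owners (also excluded under the actual rules) are omitted here for brevity.

```python
# Illustrative sketch of FTE counting under the exclusion rules above.
# Worker records and role labels are hypothetical.

EXCLUDED_ROLES = {"self_employed", "2pct_s_corp_shareholder", "5pct_owner"}

def ftes_and_countable_premiums(workers):
    """Return (FTE count, premiums that count toward the credit)."""
    hours = 0
    premiums = 0.0
    for w in workers:
        if w.get("days_worked", 365) <= 120:   # seasonal worker
            premiums += w["premiums"]          # premiums still count...
            continue                           # ...but hours and wages do not
        if w.get("role") in EXCLUDED_ROLES:
            continue                           # owners excluded entirely
        hours += w["hours"]
        premiums += w["premiums"]
    return hours // 2080, premiums             # one FTE = 2,080 hours/year

workers = [
    {"hours": 2080, "premiums": 4000.0},
    {"hours": 2080, "premiums": 4000.0},
    {"hours": 1040, "premiums": 2000.0, "days_worked": 100},  # seasonal
    {"hours": 2080, "premiums": 4000.0, "role": "5pct_owner"},
]
print(ftes_and_countable_premiums(workers))  # (2, 10000.0)
```

Even in this toy form, an employer needs per-worker hours, wages, premiums, and ownership status, which illustrates why claimants reported spending hours just assembling the inputs.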
However, to calculate the actual dollars that can be claimed, we found in 2012 that the three steps become 15 calculations, 11 of which are based on seven worksheets, some of which require multiple columns of information. Given the effort involved to make a claim and the uncertainty about the credit amounts, our 2012 report discussed the view that having a way to quickly estimate employers’ eligibility for the credit and the amount they might receive would help them decide whether the credit would be worth the effort. However, we also noted in 2012 that this would not reduce the complication of finding all the documentation needed to file Form 8941. Further, some employers may believe they are eligible based on a calculator, but then turn out to be ineligible, or find they are eligible for a smaller credit amount when they complete Form 8941 with all the required information. IRS’s Taxpayer Advocate Service developed a calculator in 2012 to quickly estimate an employer’s eligibility, but this still requires gathering information such as wages, FTEs, and insurance plans. Our analysis showed that use of this tool peaked in March 2014 with 5,383 uses, and has declined since then, falling to less than 1,000 uses by February 2016. The Centers for Medicare & Medicaid Services officials said they launched a SHOP Small Business Health Care Tax Credit Estimator on the federal exchange website in early 2014 to help employers determine if they qualify for the tax credit as well as the size of the credit they might receive. Lack of Awareness May Contribute to Low Credit Claims, Although IRS Engaged in Significant Outreach Many small businesses reported that they were unaware of the credit, as discussed in our 2012 report. The National Federation of Independent Businesses Research Foundation and the Kaiser Family Foundation both estimated that about half of small businesses were aware of the credit as of May 2011. 
The extent to which the lack of awareness prevented eligible employers from claiming the credit is unknown, particularly given other reasons for not claiming the credit. Further, a number of small business employers would not be eligible for the credit regardless of their awareness. Even if employers were unaware, their accountants or tax preparers may have been aware, but did not inform their clients because they did not believe their clients would qualify or because the credit amount would be very small. To raise initial awareness of the credit, IRS conducted significant outreach, as discussed in our 2012 report. First, IRS developed a communication and outreach plan, written materials on the credit, a video, and a website. Second, IRS officials reached out to interest groups about the credit and developed a list of target audiences and presentation topics. IRS officials began speaking at events in April 2010 to discuss the credit and attended more than 1,500 in-person or web-based events from April 2010 to February 2012. Discussion of the credit at the events varied from being a portion of a presentation covering many topics to some events that focused on the credit. When we issued our 2012 report, IRS did not know whether its outreach efforts increased awareness of the credit or were otherwise cost effective. It would be challenging, however, to rigorously estimate the impact of IRS’s outreach efforts on awareness. As we reported in 2012, based on feedback they received, IRS officials told us they believe their efforts have been worthwhile, and IRS used this feedback to expand its outreach to include insurance brokers in 2012. IRS also issued a press release in 2014 to urge small employers to consider claiming the tax credit. Addressing Factors and Expanding Credit Use Could Require Substantive Design Changes Our 2012 report discussed ways that the design of the credit could be altered to spur use of the tax credit. 
Given that most small employers do not offer insurance and that the credit may be too small an incentive to convince employers to provide health insurance, we found that it may not be possible to significantly expand use of the credit without changing its design. Amending the eligibility requirements or increasing the amount of the credit may allow more businesses to claim the credit, but as we noted in 2012, these changes would increase its cost to the federal government. Options for changing the design of the credit include the following: increasing the amount of the full credit, the partial credit, or both; increasing the amount of the credit for some by eliminating state premium averages; expanding eligibility requirements by increasing the eligible number of FTEs and wage limit for employers to claim the partial credit, the full credit, or both; or simplifying the credit calculation by (1) using the number of employees and wage information already reported on the employer’s tax return, which could reduce the amount of data gathering as well as credit calculations because eligibility would be based on the number of employees rather than FTEs; and (2) offering a flat credit amount per FTE (or per employee) rather than a percentage. A tradeoff inherent in these changes would be to reduce the precision in targeting the credit. Administration and Legislative Proposals to Change the Design and Status of the Credit The administration has offered proposals to alter the small employer health tax credit. The most recent proposal, as of February 2016, would (1) expand eligible employers to include those with up to 50 FTEs; (2) begin the phase-out at 20 FTEs; (3) provide for a more gradual phase-out based on average wage and number of employees; (4) eliminate the requirement that an employer make a uniform contribution for each employee (although nondiscrimination laws will still apply); and (5) eliminate the limit imposed by the area average premium. 
Between 2011 and 2015, Congress considered more than 20 bills on the small employer health tax credit. Many offered ways to expand usage of the credit. For example, bills sought to increase the number of eligible small employers (e.g., by allowing an employer to have 50 FTEs); change the phase-out formula; allow the credit to be claimed in more than two consecutive years; increase the average annual wage limitation; eliminate the requirement that employers contribute the same percentage of the cost of each employee’s health insurance; eliminate the cap limiting the credit amount to average premiums paid to a state health insurance exchange; and allow a partial credit for health insurance purchased outside of SHOP exchanges. Some of these proposed bills restricted the use of the credit for abortion coverage. At least one would have eliminated the credit and a few offered alternatives to the credit. In closing, the Small Employer Health Insurance Tax Credit was intended to offer an incentive for small, low-wage employers to provide health insurance. However, utilization of the credit has been lower than expected, with the available evidence suggesting that the design of the credit is a large part of the reason why. While the credit could be redesigned, such changes come with trade-offs. Changing the credit to expand eligibility or make it more generous would increase the revenue loss to the federal government. Chairman Huelskamp, Ranking Member Chu, and members of the Subcommittee, this concludes my prepared remarks. I look forward to answering any questions that you may have at this time. GAO Contact and Staff Acknowledgments If you or your staff have any questions about this testimony, please contact James R. McTigue, Jr., Director, Tax Issues, Strategic Issues, (202) 512-9110 or mctiguej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
GAO staff who made key contributions to this testimony, or previous related work, are Tom Short, Assistant Director; Anna Bonelli, Amy Bowser, Leia Dickerson, Ed Nannenhorn, Robert Robinson, Cynthia Saunders, Lindsay Swenson, and Jason Vassilicos.
Many small employers do not offer health insurance. The Small Employer Health Insurance Tax Credit was established as part of the Patient Protection and Affordable Care Act to help eligible small employers—businesses or tax-exempt entities—provide health insurance for employees. The base of the credit is premiums paid or the average premium for an employer's state if premiums paid were higher. In 2016, for small businesses, the credit is 50 percent of the base unless the business had more than 10 FTE employees or paid average annual wages over $25,900. This statement summarizes and updates GAO's prior work in May 2012, November 2014, and March 2015 on the extent to which the credit is claimed, any reasons that limit claims, and changes to the credit proposed by Congress and the administration. To conduct the updates, GAO reviewed 2013 and 2014 IRS data on credit claims and academic and government studies, and summarized proposed legislation related to the credit. Claims of the small employer health tax credit have continued to fall well below government agency and small business group estimates of the number of eligible employers, limiting the effect of the credit on expanding health insurance coverage through small employers. In 2014, about 181,000 employers claimed the credit, down somewhat from 2010 (see figure). These numbers are relatively low compared to the number of employers eligible for the credit. In 2012, GAO reported that selected estimates of the number of employers eligible ranged from about 1.4 million to 4 million. In 2010, claims totaled $468 million compared to initial estimates of $2 billion by the Congressional Budget Office and the Joint Committee on Taxation. Actual claims for the credit in 2013 and 2014 increased slightly to about $511 million and $541 million, respectively. The small employer health tax credit has not been widely claimed for a variety of reasons, as GAO reported in May 2012. 
The maximum amount of the credit does not appear to be a large enough incentive for employers to offer or maintain insurance. Also, few small employers qualify for the maximum credit amount. For those employers who do claim the credit, the credit amount “phases out” to zero as employers employ up to 25 full time equivalent (FTE) employees at higher wages. The amount of the credit is also limited if premiums paid by an employer are more than the average premiums for the small group market in the employer's state. Furthermore, the credit can only be claimed for two consecutive years after 2013. GAO also found that the cost and complexity involved in claiming the tax credit was significant, deterring small employers from claiming it. Many small businesses have also reported that they were unaware of the credit. Even so, the Internal Revenue Service (IRS) had been taking steps since April 2010 to raise awareness about the credit and reduce the burden on taxpayers by offering tools to help taxpayers determine eligibility for the credit. Congress and the administration have proposed a number of changes to the credit. These include expanding the size of eligible employers, altering the phase out rules, and allowing the credit to be claimed in more than two consecutive years. Amending the eligibility requirements or increasing the amount of the credit may allow more businesses to claim the credit. However, these changes would increase its cost to the federal government.
Background DOD defines infrastructure as activities that generally operate from fixed locations to support missions like those carried out by combat forces. Infrastructure includes installation support; central training; central medical; central logistics; force management; acquisition infrastructure; central personnel; and central command, control, and communications. DOD recognizes that its support structure is inefficient and that its costs continue to absorb a large share of the defense budget and divert funding that could be used for modernization. DOD has implemented various reform initiatives in the past to achieve efficiencies and reduce costs. The Defense Management Review (DMR), base realignment and closure (BRAC) process, National Performance Review, the bottom-up review, and other efforts proposed various actions intended to achieve these objectives. More recently, the Commission on Roles and Missions (CORM) and the Defense Science Board (DSB) have identified similar problems with DOD’s support structure and processes, but have made outsourcing and privatization the centerpiece of their reforms to reduce infrastructure and support costs. DOD defines outsourcing as the transfer of functions performed in-house to outside providers and privatization as the transfer or sale of government assets to the private sector. Between fiscal year 1997 and 2002, DOD plans to increase procurement funding from $44.1 billion to $68.3 billion, primarily to buy new weapon systems and upgrade existing systems. DOD hopes that initiatives to reduce infrastructure costs will provide much of the increased procurement funding. Initiatives to achieve infrastructure and support savings include outsourcing and privatization, acquisition reforms, organizational streamlining and consolidation, management process reengineering, base and facility closures, personnel reductions, and inventory reductions. 
DOD’s quadrennial review is likely to identify additional plans and initiatives for reducing infrastructure costs. If savings from these initiatives are not achieved and the defense budget remains relatively constant, planned weapon systems procurements may have to be delayed, stretched out, or canceled; the force structure may have to be further reduced; and/or compromises may have to be made in military readiness. DOD Initiatives Have Achieved Less Savings Than Projected “Our infrastructure was reduced by less than 21 percent after four BRAC rounds, while the force structure fell by 40 percent. This disparity has introduced organizational inefficiencies that drive up O&M costs, making it more difficult for us to give the taxpayers best value for the dollars that we invest in national security . . . . Funding for military construction and real property maintenance by contract has been cut to the bone, and perhaps beyond. We are being pressed hard to find the resources to maintain our mission essential facilities.” Our work has shown that several factors have limited DOD’s success in implementing prior initiatives and achieving the expected savings. DOD officials have repeatedly recognized the importance of using resources for the highest priority operational and investment needs rather than maintaining unneeded property, facilities, and overhead. However, DOD has found infrastructure reductions to be difficult and painful because they require up-front investments, the closure of installations, and the elimination of military and civilian jobs. Service parochialism, a cultural resistance to change, and congressional and public concern about the economic effects on local communities, as well as the fairness of closure decisions, have historically hindered DOD’s ability to close or realign bases. DOD has also recognized that streamlining and reengineering its business practices could result in savings, but it has made limited progress in doing so. 
Many opportunities exist for consolidating and streamlining the services’ support functions and activities. Unfortunately, DOD has eliminated people and reduced funding without ensuring that the initiatives have achieved the intended efficiencies. In some cases, people were reduced without redesigning the function or activity to reduce staffing needs. While DOD gained some efficiencies through this approach, it could have done better by thoroughly and thoughtfully analyzing what work had to be done and where and by whom the work could most cost-effectively be done.

DMR Savings Are Less Than Projected

The 1989 DMR proposed a program of consolidations and management improvements estimated to save tens of billions of dollars in support and overhead programs and eliminate an estimated 42,900 civilian and military positions over fiscal years 1991-95. The review resulted in 250 decisions to implement consolidations, improve information systems, enhance management, and employ better business practices. The projected savings from each decision ranged from a few million dollars to over $10 billion. Early in the program, DOD made several adjustments that included reducing savings projections, extending the savings period by 2 years, and identifying additional savings associated with new initiatives. As a result, in April 1992, DOD projected DMR savings to be $71.1 billion for fiscal years 1991-97 (Air Force, $22.5 billion; Navy, $21.6 billion; Army, $20.9 billion; and DOD agencies, $6.1 billion). In early 1993, DOD Comptroller officials estimated that changes to the future years defense program (FYDP), force reductions, and workload reductions could result in total savings of about $62.8 billion rather than $71.1 billion. The savings reductions, however, could not be tracked to specific DMR initiatives. In reviewing initial savings estimates, we found that the estimates were not always based on cost analyses supported by historical facts or empirical cost data.
During our 1993 review of the DMR, we found it difficult to validate and track savings to specific initiatives. Moreover, we could not easily determine the extent to which savings resulted from the initiatives or from other factors such as reduced workloads, changes in force structure, or defense downsizing. For example, one initiative claimed savings of $5.6 billion for the possible consolidation of supply depots, inventory control points, maintenance depots, automatic data processing design centers and operations, accounting operations and finance centers, and research and development laboratories and test facilities. However, most of these consolidations did not occur. Likewise, an initiative to develop standard and consolidated automated data processing systems throughout DOD was not accomplished.

Finally, a DSB task force known as the Odeen panel evaluated the DMR savings projections and concluded that there could be a $12.6-billion to $16.7-billion shortfall in DOD’s 1992 budget projections for fiscal years 1994-97. The panel also projected additional potential budget shortfalls of $7.4 billion to $9.8 billion in fiscal years 1998 and 1999. Overall, it projected that the estimated shortfall for the 1994-99 FYDP could be between $20 billion and $26.5 billion and that a shortfall existed in operation and maintenance (O&M) funding. We reported in 1994 that the panel’s estimated budget shortfall was generally understated.

BRAC Savings Are Less Than Projected

Through the BRAC process initiated in 1988, DOD has closed or is closing 97 domestic bases. About 50 percent of the planned closures have been completed. DOD reported that last year, for the first time, the savings from closures exceeded the costs, and savings from closures will continue to accumulate each year. DOD expected its base closures to reduce annual base support costs from $41 billion in 1988 to $29.5 billion in 1997.
Our analysis of base support costs in the FYDP and at nine closing installations indicates that BRAC savings should be substantial. However, the total amount of actual savings is uncertain because DOD’s systems do not provide this information. If BRAC savings are less than estimated, DOD’s ability to fund future programs at planned levels will be affected. DOD has submitted annual BRAC cost and savings estimates, but their usefulness is limited. For example, the Air Force’s savings estimates were not based on budget-quality data, and the Army’s estimates excluded reduced military personnel costs that the Navy and the Air Force included in their estimates. Further, BRAC cost estimates excluded more than $781 million in economic assistance to local communities as well as other costs. Consequently, the Congress does not have a complete picture of projected BRAC savings.

Outsourcing Using OMB Circular A-76 Procedures

For the past three decades, federal agencies have been encouraged to expand their procurements of goods and services from the private sector. For years, DOD has been contracting out functions, activities, and services it formerly accomplished using DOD civilian or military personnel. OMB established procedures in Circular A-76 for determining whether commercial activities should be outsourced. These procedures, which have been the primary vehicle used to make outsourcing decisions, include a handbook for performing cost-effectiveness evaluations. However, these procedures and various provisions of legislation have, to some extent, limited DOD outsourcing and the resulting savings.

Using the A-76 process and procedures, DOD conducted over 2,100 public-private competitions between 1978 and 1994. However, starting in 1988, the number of A-76 studies undertaken each year began to decline substantially. Several legislative provisions limited DOD’s outsourcing efforts.
For example, the first provision, contained in the National Defense Authorization Act for fiscal years 1988-89 (P.L. 100-180), gave installation commanders the authority to determine whether to study activities for potential outsourcing. Various officials told us that commanders often chose not to pursue outsourcing because of disruptions to their workforces, the cost of conducting studies, and a desire for more direct control of their workforces. This law, which was known as the “Nichols Amendment” and codified at 10 U.S.C. 2468, was effective through September 30, 1995. Another provision, contained in the DOD Appropriations Act for Fiscal Year 1991 (P.L. 101-511) and subsequent DOD appropriations acts, prohibited funding for lengthy A-76 studies. Finally, a provision contained in the Department of Defense Authorization Acts for Fiscal Years 1993 and 1994 prohibited DOD from entering into contracts resulting from cost studies done under OMB Circular A-76. In response, DOD imposed a moratorium on A-76 studies and canceled about three-quarters of the ongoing studies. The prohibition expired on April 1, 1994, and DOD subsequently lifted the moratorium. These provisions, along with the Nichols Amendment, had the effect of limiting outsourcing in most of the services until 1996. In 1996, OMB revised its supplemental handbook in an effort to streamline and improve the outsourcing process.

Fundamental to determining whether to outsource is the identification of core functions and activities that DOD should continue to perform. OMB Circular A-76 characterizes core activities as those that are “inherently governmental.” Under 10 U.S.C. 2464, the Secretary of Defense is required to define DOD’s core functions, which are not to be contracted out. DOD has proposed a risk assessment process to be used in identifying core depot maintenance requirements, and each of the services is in the process of determining its core requirements using its own procedures.
DOD has not defined a core process or identified core requirements for other logistics functions. It is not clear how the process will be used as DOD increases outsourcing under OMB Circular A-76 and other procedures. The A-76 competitions done through 1994 mostly involved low-skilled work such as commissary operations, family housing and grounds maintenance, administrative and custodial services, and food and guard services. These activities generally involved low capital investment and unskilled labor and could be defined by relatively simple and straightforward requirements statements. The competitions generally elicited vigorous competition. About 50 percent of the 2,100 competitions were won by the public sector.

Concerns About Projected Savings From A-76 Studies

For various reasons, we are concerned that the projected savings of 20 to 40 percent reported from prior A-76 competitions are not reliable. For example, our 1990 evaluation of DOD savings data showed that neither DOD nor OMB had reliable data on which to assess the soundness of savings estimates. Also, DOD and OMB did not know the extent to which expected savings were realized because DOD did not routinely collect and analyze cost information to track savings after a cost study had been done. Further, we have reported that (1) savings estimates represent projected, rather than realized, savings; (2) the costs of the competitions were not included; (3) baseline cost estimates are lost over time; (4) actual savings have not been tracked; (5) where audited, projected savings have not been achieved; and (6) in some cases, work contracted out was more expensive than estimated before privatization.

One commercial activity—depot maintenance—has in most cases been exempted from the A-76 process by 10 U.S.C. 2469. However, DOD implemented a similar program for depot maintenance activities beginning in 1985, when the Congress authorized, in the DOD Appropriations Act for Fiscal Year 1985 (P.L.
98-473), a test program to allow public and private shipyards to compete for the overhaul of selected ships on the basis of cost comparisons. The program was later expanded to cover Naval aviation, Air Force, and Army depot maintenance. In 1994, DOD terminated its public-private competition program for depot maintenance. We reported that the program had resulted in savings, although these savings were difficult to quantify, and that the program should be reinstated. The statement of managers in the conference report on the DOD Appropriations Act for Fiscal Year 1995 provided for the reinstitution of the program, which was accomplished in 1996.

Large Personnel Reductions Have Not Resulted in Commensurate Operations and Maintenance Budget Reductions

Since about 40 percent of the O&M budget funds civilian salaries, DOD expects that initiatives to reduce civilian personnel will result in substantial O&M budget reductions. Between 1990 and 1997, DOD reduced its civilian workforce by 275,000 people, or about 26 percent. Over the same period, the number of active duty service members was reduced by 29 percent and the number of defense-related private sector employees by 34 percent. Table 1 shows personnel reductions for selected civilian occupational categories. Secretaries and depot maintenance personnel took the largest percentage reductions, at 52 percent and 48 percent, respectively, while educators were reduced by only 5 percent and scientists by 6 percent. Fire and police personnel were reduced by 17 percent and installation maintenance personnel by 20 percent.

During our review of outsourcing base support operations, various installation commanders told us that one way of achieving the across-the-board personnel reductions mandated by the Office of the Secretary of Defense is to outsource, which would free remaining civilian authorizations for use in other activities.
One senior command official in the Army stated that the need to reduce civilian positions is greater than the need to save money. This view was reinforced by the DOD Inspector General’s 1995 report on cost growth, which noted that “the goal of downsizing the Federal workforce is widely perceived as placing DOD in a position of having to contract for services regardless of what is more desirable and cost effective.”

Despite a reduction of 275,000 civilian personnel since 1990, and plans for further substantial reductions over the next several years, our analysis of the 1997-2001 FYDP shows that infrastructure costs are expected to increase. In May 1996, we reported that the infrastructure portion of the 1997 FYDP is projected to increase about $9 billion, from $146 billion in 1997 to $155 billion in 2001 (see table 2). Despite this increase, infrastructure costs as a proportion of the total budget are projected to decrease slightly, from about 60 percent in 1997 to about 57 percent in 2001. The decrease results primarily because DOD’s total budget is projected to increase at a faster rate than the infrastructure part of the budget.

The installation support portion of DOD’s infrastructure budget is projected to decline during the 1997 to 2001 period, in part due to savings generated from the BRAC process. However, other infrastructure categories, including acquisition infrastructure; force management; central logistics; central medical; central training; central personnel; and central command, control, and communications, are projected to increase, although individual accounts within these areas, such as military construction and real property maintenance, are declining. The combination of O&M and military personnel appropriations funds about 80 percent of the infrastructure activities that can be clearly identified in the FYDP. Thus, DOD must look to these appropriations if it intends to spend less for infrastructure activities.
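The relationship between the dollar figures and budget shares above can be checked with simple arithmetic. The sketch below is our own illustration, not a DOD or GAO model; the implied total-budget figures are back-of-the-envelope derivations from the rounded percentages in the text.

```python
# Illustrative arithmetic (not a DOD model): infrastructure spending rises
# from $146 billion to $155 billion while its share of the total budget
# falls from about 60 percent to about 57 percent.
infra_1997, infra_2001 = 146.0, 155.0   # $ billions, per the 1997 FYDP
share_1997, share_2001 = 0.60, 0.57     # infrastructure share of total budget

# Back out the total budget that each share implies.
total_1997 = infra_1997 / share_1997    # ~$243B
total_2001 = infra_2001 / share_2001    # ~$272B

infra_growth = infra_2001 / infra_1997 - 1   # ~6.2%
total_growth = total_2001 / total_1997 - 1   # ~11.8%

# The share can fall even as infrastructure grows, because the total
# budget is projected to grow faster.
assert total_growth > infra_growth
print(f"implied totals: ${total_1997:.0f}B -> ${total_2001:.0f}B")
print(f"infrastructure growth {infra_growth:.1%} vs total growth {total_growth:.1%}")
```

The derivation shows why a shrinking infrastructure share is consistent with rising infrastructure spending: the denominator grows faster than the numerator.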
Shortfalls in Operation and Support Cost Reductions Limit Planned Procurement Fund Increases

Planned reductions in DOD’s O&M costs have not achieved the expected decreases in the O&M budget. For example, as illustrated in figure 1, although the fiscal year 1995 FYDP projected that O&M funding would be about $88 billion in fiscal year 1997, the 1997 FYDP estimated that O&M expenditures for fiscal year 1997 would be $1.2 billion more than projected 2 years earlier. Moreover, the most recent 1998 FYDP estimate shows that about $92.9 billion will be required to support operations funded by the O&M account during 1997—$5 billion more than projected by the 1995 FYDP.

Conversely, as shown in figure 2, the procurement account had to be reduced over that same period to offset increases in O&M costs. Thus, planned DOD increases in procurement funding had to be put aside because of the realities of funding day-to-day operational support costs. For example, the 1995 FYDP projected that DOD would spend $49.8 billion for procurement in fiscal year 1997. However, DOD actually budgeted only $38.9 billion for procurement—over $10 billion less than projected 2 years earlier. Although the 1998 FYDP indicates that $5.2 billion more was spent for procurement during 1997 than was budgeted the previous year, the $44.1 billion expenditure was still $5.7 billion less than the amount projected in 1995 to be spent for procurement in 1997.

Opportunities for Achieving Future Infrastructure Savings

As we recently reported, despite DOD’s initiatives, it is critical for DOD to further reduce infrastructure and support costs. After 10 years of effort, billions of dollars are still being wasted annually on inefficient and unneeded activities. For fiscal year 1997, DOD estimates that about $146 billion, or almost two-thirds of its budget, will be for operation and support activities.
These activities, which DOD generally refers to as support infrastructure, include research, development, and procurement of major weapon systems; buying and managing spare parts and repairing equipment; maintaining installation facilities; providing non-unit training to the force; and providing health care to military personnel and their families. Significant excess capacity exists in these activities. While there are many opportunities to reduce this excess capacity and improve the cost-effectiveness of DOD support operations, DOD faces many challenges in doing so. I will briefly highlight some of the key areas where costly excess capacity and infrastructure remain and have the greatest potential for savings—particularly through reengineering and consolidating functions and activities among the services. Acquisition infrastructure, which includes activities and personnel that support the research, production, and procurement of weapon systems and other critical defense items, accounts for about $10.2 billion. We have noted problems in achieving consolidations in testing and evaluation areas and stated that DOD should consider consolidations in two areas—Air Force and Navy electronic warfare threat testing capabilities and high performance fixed-wing aircraft testing capabilities. No major consolidations or reductions have occurred. Likewise, although DOD’s laboratories and logistics centers have excess capacity of about 35 percent, prior reform initiatives have generally focused on management efficiencies rather than infrastructure reductions. Central logistics, which includes maintenance activities, the management of materials, operation of supply systems, communications, and minor construction, accounts for as much as $51 billion, including funding from the military revolving funds. We have identified long-standing problems and opportunities to reduce infrastructure costs in the key area of inventory management. 
While the Defense Logistics Agency has taken steps to reengineer its logistics practices and reduce consumable inventories, it could do more to achieve substantial savings. Further, although DOD has made progress in reducing the value of its secondary inventory, in part by adopting leading-edge practices, including the use of prime vendor delivery, our analysis of inventory valued at $67 billion showed that $41.2 billion of the inventory was not needed. About $14.6 billion of the unneeded inventory did not have projected demands and will likely never be used. Additionally, DOD is currently reassessing the issue of streamlining and consolidating the management of the inventory control points, which are responsible for material management. We have also reported that BRAC recommendations for depot maintenance closures during the 1995 round did little to eliminate excess capacity and that excess capacity in the depot system remains at about 50 percent, with the Air Force and the Army having the greatest problems in this area.

Installation support, which includes personnel and activities that fund, equip, and maintain facilities from which defense forces operate, will consume about $30 billion, or about 17 percent, of projected fiscal year 1997 infrastructure expenditures. The central issue is that after four BRAC rounds, the services have reduced their facilities infrastructure at a much smaller rate than their force structure. Despite the recognized potential to reduce base operating support costs through greater reliance on interservice-type arrangements, the services have not taken sufficient advantage of available opportunities. Differing service traditions and cultures and concern over losing direct control of support assets have often caused commanders to resist interservicing. Additionally, because DOD has too much infrastructure to support, available military construction and repair dollars are spread thinly.
Central training infrastructure, which includes basic training for new personnel, aviation and flight training, military academies, officer training corps, other college commissioning programs, and officer and enlisted training schools, will account for about $19 billion, or 13 percent, of projected 1997 infrastructure expenditures. We have identified several training-related installations with relatively low military value that were not proposed for closure, despite the long-term savings potential. We have also pointed out interservicing opportunities that remain.

Central medical, which includes personnel and funding for medical care provided to military personnel, dependents, and retirees, will account for about $16 billion of the projected 1997 infrastructure expenditures. Activities include medical training, management of the military health care system, and support of medical installations. Each of the military departments operates its own health care system, even though these systems have many of the same administrative, management, and operational functions. Since 1949, over 22 studies have reviewed the feasibility of creating a health care entity within DOD to centralize management and administration of the three systems—most of them encouraging some form of organizational consolidation.

Challenges DOD Faces as It Implements and Considers Proposals to Achieve Infrastructure Savings

Over the last several years, DOD has renewed its efforts to achieve infrastructure savings through the use of the A-76 process. At the same time, studies provided by the CORM and the DSB over the last 2 years have offered increasingly aggressive outsourcing proposals and predictions of significant savings. Key elements of the DOD effort and the two studies are as follows: DOD anticipates that by 2003 it can achieve over $2 billion in savings annually from outsourcing activities that involve about 130,000 civilian personnel.
Further, DOD has reportedly programmed the savings into its fiscal year 1998 FYDP. The CORM report issued in May 1995 recommended that DOD outsource or privatize all current and newly established commercial-type support services. The report estimated that taking this action could save over $3 billion a year. The DSB report issued in November 1996 recommended a dramatic restructuring of DOD’s support structure by maximizing the use of the private sector for almost all support functions. From this and other proposed changes, such as the use of better business practices, the DSB estimated that over $30 billion could be saved annually from defense infrastructure accounts by the year 2002.

We agree that substantial savings can be achieved by outsourcing and privatizing; consolidating similar functions to reduce excess capacity; reengineering remaining functions, processes, and organizations; more effectively using technology innovations; and pursuing other initiatives. Our recent report on downsizing the defense infrastructure provides 13 options that could result in savings of about $11.8 billion from fiscal years 1997 to 2001. However, based on the lessons learned from past initiatives, and principally our work on depot maintenance and base support operations, we are concerned that savings of the magnitude projected by DOD, the CORM, and the DSB may not be achievable. Last, as noted by DOD, the CORM, and the DSB, a number of legislative requirements restrict or affect implementation of these proposals. The extent to which these requirements change or remain the same will also affect savings estimates.

Outsourcing Savings Expectations May Not Be Achievable in the Magnitude Projected

In 1993, the National Performance Review endorsed outsourcing, noting that DOD should implement a comprehensive program for outsourcing non-core functions.
According to its report, Creating a Government That Works Better & Costs Less, DOD had identified 50 broad area candidates for outsourcing, such as base operations support, housing, health services, maintenance and repair, training, labs, security, and transportation. The report noted that outsourcing should take place when it makes economic and operational sense, based on accomplishing the following steps: clearly describe the function in objective terms of what gets done and how it gets done, but not who does it; categorize the function as either core or non-core; establish detailed, specific performance requirements for each function based on the commander/manager’s mission and customer requirements; analyze legal, supplier, and performance requirements for each function to determine the source that best balances economic benefits with operational risks; produce a detailed performance agreement and associated documents for the function (e.g., a performance work statement and request for proposals if the function is to be outsourced); and introduce cost competition.

We recently reported that DOD is significantly increasing its emphasis on outsourcing base operations support and other activities through the A-76 process. Our work shows that while opportunities for savings exist, it is questionable whether they will be of the magnitude currently being projected. From October 1995 to January 1997, the services announced plans to begin studies during fiscal years 1996 and 1997 that involve over 34,000 positions, most of which were associated with base support activities. Further studies involving an additional 100,000 positions will be started over the next 6 years. We recognized that outsourcing is cost-effective because the competitions generate savings—usually through a reduction in personnel—whether the competition is won by the government or the private sector. However, we questioned some of the services’ savings projections.
After a somewhat slow start in beginning A-76 cost studies following the reinstitution of the program within DOD, the services are now expanding the A-76 program. One reason for the slow start is that the services had lost, during recent personnel downsizing, much of the expertise required to define requirements and conduct cost evaluations. The Air Force, which has led the way in initiating new studies, plans to study up to 60,000 positions for potential outsourcing from 1998 through 2003—the majority of which are in base support services. The Army plans to study about 11,000 mostly civilian positions in fiscal year 1997 and another 5,000 from fiscal years 1998 to 2003. The Navy plans to begin studies of about 80,000 positions for potential outsourcing over the next several years—about 50,000 civilian and 30,000 military. Although the Marine Corps estimates it will study about 5,000 positions, it does not have a firm timetable for initiating or completing these studies.

The Air Force projects a 20-percent cost saving of up to $1.26 billion initially from outsourcing mostly base support functions between 1998 and 2003. The Army projected a 10-percent saving but recently increased its projection to 20 percent. The Marine Corps projects initial savings of about $10 million per year beginning in 1998, increasing to $110 million per year by fiscal year 2004. The Navy projects a 30-percent net cost saving. As previously discussed, we have concerns about whether the 20- to 30-percent savings assumed by the services will be achieved. The savings projections are based on unverified estimates rather than on actual A-76 results, and, where audited, estimated savings fell short of projections, even though the costs of the competitions were not taken into consideration.
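The percentage projections above can be translated into rough dollar terms. The sketch below is illustrative arithmetic only; the roughly $6.3 billion baseline is implied by the Air Force figures in the text, and the lower "realized" rates are hypothetical, reflecting the audit findings that projected A-76 savings were often not achieved.

```python
# Illustrative arithmetic only: back out the cost baseline that a
# percentage-savings projection implies, then vary the realized rate.
# Figures other than the Air Force's stated projection are hypothetical.
def implied_baseline(projected_savings_b, savings_rate):
    """Cost base, in $ billions, implied by a percentage-savings projection."""
    return projected_savings_b / savings_rate

# A 20-percent saving of up to $1.26 billion implies roughly a $6.3 billion
# cost base for the Air Force functions under study.
af_base = implied_baseline(1.26, 0.20)
print(f"implied Air Force cost baseline: ${af_base:.1f}B")

# Audited A-76 results often fell short of projections; a lower realized
# rate shrinks the payoff proportionally (rates below are hypothetical).
for realized in (0.20, 0.10, 0.05):
    print(f"realized rate {realized:.0%}: ${af_base * realized:.2f}B saved")
```

The point of the sensitivity loop is simply that halving the realized savings rate halves the dollar payoff, which is why unverified percentage assumptions matter so much to the projections.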
Additionally, we recently found that many installation officials expressed concern that personnel downsizing had already eliminated much of the potential for outsourcing to achieve additional personnel savings. Also, potential outsourcing savings may be reduced by increases in the scope of work done under outsourcing. Such increases can occur when funding becomes available to restore a level of service, such as maintenance and repair activities, that had previously been reduced due to resource constraints.

Savings in the Magnitude Projected by the CORM Are Questionable

In its report, Directions for Defense (May 24, 1995), the CORM recommended that DOD outsource all current and newly established commercial-type support services. According to the report, outsourcing candidates should range from routine commercial support services widely available in the private sector to highly specialized support of military weapons. For example, janitorial companies might perform facilities maintenance, replacing government custodians; and commercial software engineering firms might upgrade computer programs for sophisticated aircraft electronic countermeasures equipment, replacing government software specialists.

CORM’s report recommended the following actions: Outsource new support requirements, particularly the depot-level logistics support of new and future weapon systems. Have OMB withdraw Circular A-76, have the Congress repeal or amend legislative restrictions, and have DOD extend to all commercial-type activities a policy of avoiding public-private competition where adequate private sector competition exists. Move to a time-phased plan to privatize essentially all existing depot-level maintenance. Outsource selected material management activities. Increase access to private sector medical care, require users of DOD care to enroll, and have DOD set a fee structure and institute a medical allowance for active duty service members’ families.
Outsource family housing, finance and accounting, data center operations, education and training, and base infrastructure. The report stated that its recommendations for greater use of private market competition would lower DOD’s support costs and improve performance—noting that a 20-percent saving from outsourcing DOD’s commercial-type workload would free over $3 billion per year for higher priority defense needs.

DOD’s Response

In response to the CORM report, in 1995 the Deputy Secretary of Defense established integrated policy teams to explore outsourcing opportunities for base support, depot maintenance, material management, education and training, finance and accounting, data processing, and family housing. The teams were expected to identify potential outsourcing candidates, analyze obstacles to outsourcing, and develop solutions to facilitate implementation. In addition, the Secretary established cross-functional teams to recommend changes to OMB Circular A-76 and various legislative provisions that could otherwise delay or impede implementation of newly identified outsourcing initiatives.

The material management team developed business case analyses for outsourcing parts of the Defense Reutilization and Marketing Service and Defense Logistics Agency storage and warehousing operations. The team also identified the Defense Logistics Agency and military inventory control point cataloging function for possible outsourcing. The finance and accounting team identified several potential outsourcing candidates, including claims management, defense commissary bill paying, nonappropriated fund accounting, and payroll. One of the cross-functional teams developed a proposed legislative package that called for the Congress to rescind all legislative and administrative provisions constraining outsourcing. This proposal was not adopted.
While some outsourcing team meetings are still being held, this effort could be subsumed by DOD’s current quadrennial defense review, which is likely to identify outsourcing as one of its key initiatives.

Savings Projections Are Not Well Supported

While recognizing the potential savings from outsourcing, we question the savings projections cited by the CORM. As we reported in July 1996, the CORM’s data did not support its depot privatization savings assumption. Half of the competitions were won by the public sector. Further, the assumptions were based primarily on reported savings from public-private competitions for commercial activities under OMB Circular A-76. Many private sector firms made offers for this work because of the highly competitive nature of the private sector market for these activities, and estimated savings were generally greater when there were a large number of competitors. We and the defense audit agencies have also reported that projected savings were often not achieved or were less than expected due to cost growth and other factors. Further, the savings resulted from competition rather than from privatization. Finally, we also noted that outsourcing in the absence of a highly competitive market would not likely achieve expected savings and could increase the cost of depot maintenance operations. Moreover, our data show that outsourcing risks are higher when privatizing unique, highly diverse, and complex work where requirements are difficult to define, large capital investments are required, extensive technical data are involved, and highly skilled and trained personnel are required. CORM assumed that meaningful competition would be generated for most of the work it recommended be privatized. Yet, for depot maintenance, 76 percent of the 240 depot maintenance contracts we reviewed were awarded on a sole-source basis.
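The public-private comparisons discussed throughout this section reduce, at their core, to a cost comparison with a minimum savings threshold before work is converted. The sketch below is a highly simplified, hypothetical illustration; it is not the actual OMB Circular A-76 methodology, which prescribes many more cost elements and rules, and the 10-percent differential and all dollar figures are assumptions for illustration only.

```python
# Highly simplified, hypothetical sketch of an A-76-style cost comparison.
# The real OMB Circular A-76 methodology has many more cost elements and
# rules; the 10-percent differential and all dollar figures here are
# assumptions for illustration.
def a76_decision(in_house_cost, contract_cost, personnel_cost,
                 min_differential_rate=0.10):
    """Convert to contract only if the bid beats the in-house offer by more
    than a minimum differential, so that marginal savings do not force a
    disruptive conversion."""
    threshold = min_differential_rate * personnel_cost
    savings = in_house_cost - contract_cost
    return ("contract" if savings > threshold else "in-house"), savings

decision, savings = a76_decision(
    in_house_cost=10.0,   # $M, hypothetical in-house offer
    contract_cost=8.5,    # $M, hypothetical private sector bid
    personnel_cost=6.0,   # $M, hypothetical in-house personnel portion
)
print(f"decision: {decision}, projected savings ${savings:.1f}M")
```

The threshold is the structural reason a competition can be "won" by either side: projected savings must clear a margin, not merely be positive, before the work moves to a contractor.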
DSB Projected $30 Billion Savings Are Questionable The final report of the DSB in November 1996, Achieving an Innovative Support Structure for 21st Century Military Superiority, recommended a dramatic restructuring of DOD’s support structure by maximizing the use of the private sector for almost all support functions. The DSB provided a new vision where DOD would only provide warfighting, direct battlefield support, policy and decision-making, and oversight. All other activities would be done by the private sector—using best practices for achieving better, faster, lower-cost results. The following are examples of changes that might result from DSB’s new vision:
- Use the private sector for logistics and maintenance in the continental United States. DOD would get out of the repair and inventory management business.
- Expand contractor logistics support—the life-cycle support of weapon systems, often by the original equipment manufacturers. The report noted that relief from legislative constraints would be required, but that most contractor logistics support could be done without legislative changes. DOD would make an investment of $300 million to $500 million for reliability improvements.
- Privatize-in-place testing and evaluation facilities.
- Outsource automatic data processing business to four to six contractor-owned facilities.
- Outsource most DOD finance and accounting service functions.
- Use the private sector to manage DOD housing. Raise allowances and use contractors to build and manage housing where no markets exist.
- Privatize DOD commissaries.
- Privatize all remaining special skills training.
- Privatize all remaining base operating support functions.
DSB’s report also advocated revoking OMB Circular A-76.
In the meantime, the report suggested avoiding the circular by recategorizing functions and getting out of the business of performing them, an approach the report noted the Defense Logistics Agency had successfully employed when it transferred the pharmaceutical warehousing and distribution functions to vendors. According to the report, if DOD implemented the specific recommendations of DSB, over $30 billion could be cut annually from defense infrastructure accounts by the year 2002. The report used a baseline of $140 billion for current annual DOD infrastructure costs. These savings were dependent on a reduction of about 5 percent per year in the civilian workforce and about 2 percent per year in military personnel over the next 5 years. The report also recommended a series of base realignment and closure reviews. DSB Savings Assumptions Are Questionable We have not yet completed our analysis of the DSB report. In general, we agree that there are great opportunities for savings in many of the areas it addresses. Also, we share its concern that too much excess capacity remains in many of DOD’s infrastructure activities. We also believe that outsourcing and the use of leading edge business practices, such as direct vendor delivery for inventory, represent opportunities to reduce costs. However, the outsourcing savings projected by this study were based on essentially the same assumptions as those used by the CORM, although the DSB study expanded the functions and activities that it recommended for outsourcing and claimed savings of up to 40 percent from privatization. Our analysis indicates that savings projections of $30 billion are not likely to be achieved. These savings assumptions were not supported and were based on favorable conditions that may not currently exist for a number of activities recommended for outsourcing.
We have found that outsourcing savings are dependent on or highly influenced by (1) the continual existence of a competitive commercial market; (2) the ability to clearly define the tasks to be done and measure performance; (3) the assumption that the private sector can do the required work more cost-effectively than a reengineered DOD activity; (4) the extent that commercial contracting and contract management practices can be applied to the outsourced activity; (5) the relative cost-effectiveness of the public activity being outsourced; and (6) the ability to reduce the existing public infrastructure and personnel costs associated with the outsourced activity. Further, the DSB savings projections may also include functions and activities that are determined to be core. Another area of concern is that the DSB referred often to outsourcing competitively, yet it recommended total contractor logistics support for weapon systems—often a sole-source arrangement with the original equipment manufacturer—which could include maintenance, supply, systems management, and other functions for the life of the systems. We question whether this is the appropriate model for most weapon systems and are concerned about its potential for cost growth and long-term impact on core capabilities for several reasons: First, while DOD managers have found contractor logistics support to be cost-effective for commercially derived systems with established competitive repair sources, these conditions are not present for military-unique systems and cutting edge technologies. Second, our past work demonstrates that most depot work is sole-sourced to the original equipment manufacturer, raising cost and future competition concerns. DOD managers told us that steadily escalating prices are typical of sole-source contractor logistics support contracts.
Third, privatizing total support on new and future weapon systems can make it difficult for the organic depots to acquire and sustain technical competence on new systems, leading edge technologies, and critical repair processes necessary to maintain future core capabilities, provide a credible competitive repair source, and be a smart buyer for those logistics activities that will be contracted out. When competition—whether from the private sector or an organic depot—is introduced, prices decline. For example, the Air Force identified $350 million in savings when it was able to recompete contractor logistics support contracts and move the maintenance work from the manufacturer to other commercial firms, including $50 million in savings on the KC-10 by awarding a competitive contract to the firm previously subcontracted to the prime, thus cutting out the “middle-man.” The Air Force is also achieving significant savings as a result of interservicing the F404 engine to a Navy depot rather than continuing to contract on a sole-source basis with the original equipment manufacturer, as the Air Force did for many years. Finally, while the DSB report seems to assume that outsourcing is the most cost-effective option for all DOD support operations, this may not be the case. In about half of the over 2,100 competitions conducted using A-76 procedures, the reengineered public entity was determined to be the most cost-effective option. Further, our review of public-private depot maintenance competitions indicated that the private sector frequently offered the best value. Effects of Legislation on Outsourcing A number of legislative provisions may limit outsourcing. For example, section 2464 of title 10 provides that DOD activities should maintain a logistics capability (personnel, equipment, and facilities) sufficient to ensure technical competence and resources necessary for an effective and timely response to a mobilization or other national defense emergency.
It also requires the Secretary of Defense to define the logistics activities that are necessary to maintain that logistics capability. Those activities may not be contracted out without a waiver by the Secretary. DOD has proposed a risk assessment process to be used in identifying core depot maintenance requirements, and each of the services is determining its core requirements using procedures it has implemented for that purpose. DOD has not defined a core process or identified core requirements for other logistics functions. It is not clear how that process will be conducted as DOD increases outsourcing using OMB Circular A-76 and other procedures. Other statutes affecting the privatization of depot maintenance are discussed in our March 1996 report addressing opportunities for privatizing repair of military engines. Also, 10 U.S.C. 2461 requires A-76 cost comparisons, congressional notification of studies involving more than 45 civilians, and annual reports to the Congress on outsourcing. Section 2465 of title 10 prohibits DOD from outsourcing civilian firefighters or security guards at military installations; outsourcing is permitted only if the positions were outsourced before September 24, 1983. DOD’s fiscal year 1996 inventory of civilian and military personnel performing commercial activities shows that about 9,600 firefighters and 16,000 security guards are exempt from outsourcing because of this law and other considerations, such as mobility requirements. We plan to assess this issue more thoroughly by comparing the cost of in-house positions in selected instances where such services have been outsourced.
Conclusions In conclusion, we agree with DOD that its infrastructure costs can and should be substantially reduced, and we believe that DOD should identify key functions and activities where it should focus to identify requirements—including core—and begin to reengineer those activities and functional areas that appear to offer the best opportunities for savings. Outsourcing, when used correctly, has been an effective tool in achieving cost reductions. Our work advocates expanded reliance on the private sector where that is the most cost-effective solution. However, evaluations must be made on an individual basis, taking into consideration the costs and benefits of each potential outsourcing opportunity. DOD already has programs to identify potential infrastructure reductions in many areas. However, breaking down cultural resistance to change, overcoming service parochialism, making decisions to eliminate cross-functional stovepipes, and setting forth a clear framework for a reduced defense infrastructure are key to avoiding waste and inefficiency and generating the maximum savings from DOD’s infrastructure accounts. To do this, the Secretary of Defense and the service secretaries need to give greater structure to their efforts by developing an overall strategic plan that establishes time frames and identifies organizations and personnel responsible for accomplishing fiscal and operational goals. DOD needs to present this plan to the Congress in much the same way that it presented its plans for force structure reductions in the Base Force Plan and bottom-up review. The Congress can then oversee the plan and allow the affected parties to see what is going to happen, and when. In developing the plan, DOD should consider using a variety of means to achieve reductions, including consolidations, reengineering, interservicing agreements, and outsourcing—with appropriate personnel reductions implemented to take advantage of the efficiencies generated by these initiatives.
It should also consider the need and timing for future BRAC rounds, as suggested by the 1995 BRAC Commission and other groups. Mr. Chairman, this concludes my prepared remarks. I would be pleased to answer questions at this time.
GAO discussed the Department of Defense's (DOD) goal to save billions of dollars by outsourcing work to the private sector and through other initiatives for activities which DOD generally refers to as its support infrastructure, focusing on: (1) DOD's past experience in achieving infrastructure savings; (2) key infrastructure areas that offer the greatest potential for savings; and (3) challenges DOD faces in reaching goals to reduce infrastructure in the future. GAO noted that: (1) GAO agrees with DOD and others that significant opportunities exist to reduce DOD's infrastructure and support costs; (2) however, GAO questions whether the magnitude of savings anticipated by DOD and others is attainable within the current strategy and force structure; (3) GAO's past and ongoing work shows that while DOD's past savings initiatives yielded significant savings, they often fell short of the initial goal; (4) while DOD has substantially reduced its infrastructure through the base realignment and closure process and significant savings will ultimately be achieved, savings will not be as great as initially estimated or achieved as quickly as initially hoped; (5) today's future years defense plan shows that, despite these initiatives, future infrastructure costs will only slightly decline as a relative percentage of DOD's budget; (6) because of GAO's concern about the waste and inefficiencies in DOD's support structure and operations, GAO has designated DOD's infrastructure as one of 24 high-risk areas that are vulnerable to waste and mismanagement within the federal government; (7) GAO believes that DOD could reap significant savings by: (a) reducing excess capacity in its testing and evaluation areas and its laboratories and centers; (b) reducing excess capacity within DOD's depot maintenance system; (c) reducing the costs of managing its $67-billion inventory, of which almost half is beyond war reserve and operating requirements; (d) reducing installation support costs; 
and (e) reducing training costs; (8) new ideas about reducing infrastructure costs have recently been proposed to DOD that focus largely on outsourcing and privatization to achieve savings; (9) GAO's analysis of such proposals shows that there is reason for caution about whether the magnitude of hoped for savings can be achieved; (10) there are also various legislative requirements that will restrict and otherwise affect DOD's ability to implement some proposed initiatives; (11) GAO thinks that DOD's effort to reduce costs and achieve savings is extremely important and encourages DOD to move forward as quickly as possible; (12) breaking down cultural resistance to change, overcoming service parochialism, and setting forth a clear framework for a reduced defense infrastructure are key to effectively implementing savings; and (13) DOD and the services need to give greater structure to their efforts by developing an overall strategic plan, which would provide a basis for the Congress to oversee DOD's plan and allow affected parties to see what is going to happen and when.
Background Regardless of a veteran’s employment status or level of earnings, VA’s disability compensation program pays monthly cash benefits to eligible veterans who have service-connected disabilities resulting from injuries or diseases incurred or aggravated while on active military duty. A veteran starts the disability claims process by submitting a claim to one of the 57 regional offices administered by the Veterans Benefits Administration (VBA). In the average compensation claim, the veteran claims about five disabilities for which the regional office must develop the evidence required by law and federal regulations, such as military records and medical evidence. To obtain the required medical evidence, VBA’s regional offices often arrange medical examinations for claimants. For example, in fiscal year 2004, VBA’s 57 regional offices asked the 157 medical centers administered by the Veterans Health Administration (VHA) to examine about 500,000 claimants and provide examination reports containing the medical information needed to decide the claim. On the basis of the evidence developed by the regional office, an adjudicator determines whether each disability claimed by the veteran is connected to the veteran’s military service. Then, by applying medical criteria contained in VA’s Rating Schedule, the adjudicator evaluates the degree of disability caused by each service-connected disability in order to determine the veteran’s overall degree of service-connected disability. The degree of disability is expressed as a percentage, in increments of 10 percentage points—for example, 10 percent, 20 percent, 30 percent, and so on, up to 100 percent disability. The higher the percentage of disability, the higher the benefit payment received by the veteran. If a veteran disagrees with the regional office adjudicator’s decision on whether a disability is service-connected or on the appropriate percentage of disability, the veteran may file a Notice of Disagreement. 
The regional office then provides a further written explanation of the decision, and if the veteran still disagrees, the veteran may appeal to VA’s Board of Veterans’ Appeals. Before appealing to the board, a veteran may ask for a review by a regional office Decision Review Officer, who is authorized to grant the contested benefits based on the same case record that the original adjudicator relied on to make the initial decision. After appealing to the board, if a veteran disagrees with the board’s decision, the veteran may appeal to the U.S. Court of Appeals for Veterans Claims, which has the authority to render decisions establishing criteria that are binding on future decisions made by VA’s regional offices as well as the board. For example, in DeLuca v. Brown, 8 Vet. App. 202 (1995), the court held that when federal regulations define joint and spine impairment severity in terms of limits on range of motion, VA claims adjudicators must consider whether range of motion is further limited by factors such as pain and fatigue during “flare-ups” or following repetitive use of the impaired joint or spine. Prior to this decision, VA had not explicitly considered whether such additional limitations existed because VA contended that its Rating Schedule incorporated such considerations. VA Needs a System for Routinely Monitoring Variations Inherent in Deciding Disability Claims Because adjudicators often must use judgment when deciding disability compensation claims, variations in decision making are an inherent possibility. While some claims are relatively straightforward, many require judgment, particularly when the adjudicator must evaluate (1) the credibility of different sources of evidence; (2) how much weight to assign different sources of evidence; or (3) disabilities, such as mental disorders, for which the disability standards are not entirely objective and require the use of professional judgment.
Without measuring the effect of judgment on decisions, VA cannot provide reasonable assurance that consistency is acceptable. At the same time, it would be unreasonable to expect that no decision-making variations would occur. Consider, for example, a disability claim that has two conflicting medical opinions, one provided by a medical specialist who reviewed the claim file but did not examine the veteran, and a second opinion provided by a medical generalist who reviewed the file and examined the veteran. One adjudicator could assign more weight to the specialist’s opinion, while another could assign more weight to the opinion of the generalist who examined the veteran. Depending on which medical opinion is given more weight, one adjudicator could grant the claim and the other could deny it. Yet a third adjudicator might conclude that the competing evidence provided an approximate balance between the evidence for and the evidence against the veteran’s claim, which would require that the adjudicator apply VA’s “benefit-of-the-doubt” rule and decide in favor of the veteran. An example involving mental disorders also demonstrates how adjudicators sometimes must make judgments about the degree of severity of a disability. The disability criteria in VA’s Rating Schedule provide a formula for rating the severity of a veteran’s occupational and social impairment due to a variety of mental disorders. This formula is a nonquantitative, behaviorally oriented framework for guiding adjudicators in choosing which of the degrees of severity shown in table 1 best describes the claimant’s occupational and social impairment. Similarly, VA does not have objective criteria for rating the degree to which certain spinal impairments limit a claimant’s motion. 
Instead, the adjudicator must assess the evidence and decide whether the limitation of motion is “slight, moderate, or severe.” To assess the severity of incomplete paralysis, the adjudicator must decide whether the veteran’s paralysis is “mild, moderate, or severe.” The decision on which severity classification to assign to a claimant’s condition could vary in the minds of different adjudicators, depending on how they weigh the evidence and how they interpret the meaning of the different severity classifications. Despite the inherent variation, however, it is reasonable to expect the extent of variation to be confined within a range that knowledgeable professionals could agree is reasonable, recognizing that disability criteria are more objective for some disabilities than for others. For example, if two adjudicators were to review the same claim file for a veteran who has suffered the anatomical loss of both hands, VA’s disability criteria state unequivocally that the veteran is to be given a 100 percent disability rating. Therefore, no variation would be expected. However, if two adjudicators were to review the same claim file for a veteran with a mental disability, knowledgeable professionals might agree that it would not be out of the bounds of reasonableness for these adjudicators to diverge by 30 percentage points but that wider divergences would be outside the bounds of reasonableness. The fact that two adjudicators might make differing, but reasonable, judgments on the meaning of the same evidence is recognized in the design of the system that VBA uses to assess the accuracy of disability decisions made by regional office adjudicators. VBA instructs the staff who review the accuracy of decisions to refrain from charging the original adjudicator with an error merely because they would have made a different decision than the one made by the original adjudicator. 
VBA instructs the reviewers not to substitute their own judgment in place of the original adjudicator’s judgment as long as the original adjudicator’s decision is adequately supported and reasonable. Because of the inherent possibility that different adjudicators could make differing decisions based on the same information pertaining to a specific impairment, we recommended in November 2004 that the Secretary of Veterans Affairs develop a plan containing a detailed description of how VA would (1) use data from a newly implemented administrative information system—known as Rating Board Automation 2000—to identify indications of decision-making inconsistencies among the regional offices for specific impairments and (2) conduct systematic studies of the impairments for which the data reveal possible inconsistencies among regional offices. VA concurred with our recommendation but has not yet developed such a plan. At this point, VA has now collected 1 full year of data using the new administrative data system, which should be sufficient to begin identifying variations and then assessing whether such variations are within the bounds of reasonableness. Inconsistent Quality Of Disability Examination Reports Underscores Need to Monitor Consistency of Decisions Because the existing medical records of disability claimants often do not provide VBA regional offices with sufficient evidence to decide claims properly, the regional offices often ask VHA medical centers to examine the claimants and provide exam reports containing the medical information needed to make a decision. Exams for joint and spine impairments are among the exams that regional offices most frequently request. 
To comply with the DeLuca decision’s requirements for joint and spine disability exam reports, VHA instructs its medical center clinicians not only to make an initial measurement of the range of motion in the impaired joint or spine but also to measure range of motion after having the claimant flex the impaired joint or spine several times. This is done to determine the extent to which repeated motion may result in pain or fatigue that further degrades the functioning of the impaired joint or spine. In addition, the clinician is instructed to determine if the claimant experiences flare-ups from time to time, and if so, how often such flare-ups occur and the extent to which they limit the functioning of the impaired joint or spine. However, in a baseline study conducted in 2002, VA found that 61 percent of the exam reports on joint and spine impairments did not provide sufficient information on the effects of repetitive movement or flare-ups to comply with the DeLuca criteria. We reported earlier this month on the progress VA had made since 2002 in ensuring that its medical centers consistently prepare joint and spine exam reports containing the information required by DeLuca. We found that, as of May 2005, the percentage of joint and spine exam reports not meeting the DeLuca criteria had declined substantially from 61 percent to 22 percent. Much of this progress appeared attributable to a performance measure for exam report quality established by VHA in fiscal year 2004 after both VHA and VBA had taken a number of steps to build a foundation for improvement. This included creating the Compensation and Pension Examination Project Office, a national office established in 2001 to improve the disability exam process, and providing extensive training to VHA and VBA personnel.
While VA made substantial progress in ensuring that its medical centers’ exam reports adequately address the DeLuca criteria, a 22 percent deficiency rate indicated that many joint and spine exam reports still did not comply with DeLuca. Moreover, in relation to the issue of consistency, the percentage of exam reports satisfying the DeLuca criteria varied widely across the 21 health care networks that manage VHA’s 157 medical centers—from a low of 57 percent compliance to a high of 92 percent. It should be noted that the degree of variation is likely even greater than indicated by these percentages because, within any given health care network, an individual medical center’s performance in meeting the DeLuca criteria may be lower or higher than the combined average performance for all the medical centers in that specific network. Therefore, in the network that had 57 percent of its joint and spine exams meeting DeLuca criteria, an individual medical center within that network may have had less than 57 percent meeting the DeLuca criteria. Conversely, in the network that had 92 percent of the exams meeting the DeLuca criteria, an individual medical center within that network may have had more than 92 percent satisfying DeLuca. Unless medical centers across the nation consistently provide the information required by DeLuca, veterans claiming joint and spine impairments may not receive consistent disability decisions. Further, VA has found deficiencies in a substantial portion of the requests that VBA’s regional offices send to VHA’s medical centers, asking them to perform disability exams. For example, VA found in early 2005 that nearly one-third of the regional office requests for spine exams contained errors such as not identifying the pertinent medical condition or not requesting the appropriate exam. However, VBA had not yet established a performance measure for the quality of the exam requests that regional offices submit to medical centers. 
To help ensure continued progress in satisfying the DeLuca criteria, we recommended that the Secretary of Veterans Affairs direct the Under Secretary for Health to develop a strategy for improving consistency among VHA’s health care networks in meeting the DeLuca criteria. For example, if performance in satisfying the DeLuca criteria continues to vary widely among the networks during fiscal year 2006, VHA may want to consider establishing a new performance measure specifically for joint and spine exams or requiring that medical centers use automated templates developed for joint and spine exams, provided an in-progress study of the costs and benefits of the automated exam templates supports their use. We also recommended that the Secretary direct the Under Secretary for Benefits to develop a performance measure for the quality of exam requests that regional offices send to medical centers. Conclusions As a national program, VA’s disability compensation program must ensure that veterans receive fair and equitable decisions on their disability claims no matter where they live across the nation. Given the inherent risk of variation in disability decisions, it is incumbent on VA to ensure program integrity by having a credible system for identifying indications of inconsistency among its regional offices and then remedying any inconsistencies found to be unreasonable. Until assessments of consistency become a routine part of VA’s oversight of decisions made by its regional offices, veterans may not consistently get the benefits they deserve for disabilities connected to their military service, and taxpayers may not trust the effectiveness and fairness of the disability compensation program. Mr. Chairman, this concludes my remarks. I would be happy to answer any questions you or the members of the subcommittee may have. Contact and Acknowledgments For further information, please contact Cynthia A. Bascetta at (202) 512-7101.
Also contributing to this statement were Irene Chu and Ira Spears. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The House Subcommittee on Disability Assistance and Memorial Affairs asked GAO to discuss its work on the consistency of disability compensation claims decisions of the Department of Veterans Affairs (VA). GAO has reported wide state-to-state variations in average compensation payments per disabled veteran, raising questions about decisional consistency. In 2003, GAO designated VA's disability programs, along with other federal disability programs, as high risk, in part because of concerns about decisional consistency. Illustrating this issue, GAO reported that inadequate information from VA medical centers on joint and spine impairments contributed to inconsistent regional office disability decisions. GAO's November 2004 report explained that adjudicators in the Department of Veterans Affairs often must use judgment in making disability compensation claims decisions. As a result, it is crucial for VA to have a system for routinely identifying the effect of judgment on decisional variations among its 57 regional offices to determine if the variations are reasonable and, if not, how to correct them. In 2002, GAO reported that state-to-state variations of as much as 63 percent in average compensation payments per disabled veteran indicated potential inconsistency. The nature of the criteria that adjudicators must apply in evaluating the degree of impairment due to mental disorders provides an example of the extent of judgment required. GAO's October 2005 report on decisions for joint and spine disabilities showed one important way to improve consistency. Specifically, regional offices often rely on VA's 157 medical centers to examine claimants and provide medical information needed to decide the claims. However, VA has found inconsistency among its medical centers in the adequacy of their joint and spine disability exam reports that regional offices need to decide these claims. 
As of May 2005, the percentage of exam reports containing the required information varied across the medical centers from a low of 57 percent to a high of 92 percent. This could adversely affect the consistency of disability claims decisions involving joint and spine impairments. Although VA has made substantial progress, more remains to be done to improve the level of consistency in the disability exam reports.
Background

VHA’s health care mission is broad: it provides veterans with a wide range of health care services, including primary care, surgery, and unique specialized care such as treatment for post-traumatic stress disorder, traumatic brain injury, and readjustment counseling. VHA is also a leader in medical research and the largest provider of health care training in the United States. As such, each medical center hires employees in a wide range of clinical and administrative professions, from nurses and physicians to hospital administrators, police, and housekeepers. These employees are covered by three types of personnel systems:

Title 5 of the U.S. Code (Title 5): The majority of federal employees across the government are hired under the authority of Title 5; at VHA, employees under this personnel system hold positions such as police officers, accountants, and HR management staff.

Title 38 of the U.S. Code (Title 38): VA’s separate personnel system for appointing medical staff, including physicians, dentists, and registered nurses. These appointments are made based on an individual’s qualifications and professional attainments in accordance with standards established by VA’s Secretary.

Title 38-Hybrid: Employees under this personnel system hold positions such as respiratory, occupational, or physical therapists; social workers; and pharmacists. This system combines elements of both Title 5 (such as for performance appraisal, leave, and duty hours) and Title 38 (such as for appointment, advancement, and pay).

Each of these personnel systems has different requirements (and flexibilities) related to recruitment and hiring, performance management, and other areas served by VHA’s HR staff. VHA’s HR functions are decentralized. Each of VHA’s Veterans Integrated Service Networks (VISN) has an HR office that oversees the medical center-level HR offices within its network. In general, each VA medical center has its own HR office led by an HR officer.
Individual HR offices are responsible for managing employee recruitment and staffing, employee benefits, compensation, and employee and labor relations, and for overseeing the annual employee performance appraisal process. Medical center HR offices also provide HR services to employees at VHA’s community-based living centers, rehabilitation centers, and outpatient centers. VHA’s HR staff are classified as either HR specialists, who manage, supervise, and deliver HR products and services, or HR assistants, who provide administrative support to HR specialists.

Attrition in Clinical Positions Driven by Voluntary Resignations and Retirements

VHA Losses for the 5 Occupations with the Largest Shortages Increased from Fiscal Year 2011 through 2015

In our 2016 report on VHA clinical employee retention, we noted that in 2015 VHA had about 195,900 clinical employees in 45 types of occupations. To meet the growing demand for care, VHA implemented a number of targeted hiring initiatives, such as a mental health hiring initiative that brought on about 5,300 staff nationwide from 2012 to 2013. Despite these hiring efforts, we and others have expressed concerns about VHA’s ability to ensure that it has the appropriate clinical workforce to meet the current and future needs of veterans, due to factors such as national shortages and increased competition for clinical employees in hard-to-fill occupations. VHA officials have expressed concern with their hiring capabilities since 2014, when a well-publicized series of events called into question the ability of veterans to gain timely access to care from VHA.
Our 2016 report found that for the 5 VHA clinical occupations with the largest staffing shortages (as identified by the VA Office of Inspector General in January 2015), the number of employees that VHA lost increased each year, from about 5,900 employees in fiscal year 2011 to about 7,700 in fiscal year 2015 (the 5 occupations were physicians, registered nurses, physician assistants, psychologists, and physical therapists). This attrition accounted for about 50 percent of VHA’s total losses across all clinical occupations during this period. We found a similar trend for all clinical occupations across VHA—losses increased annually during this period. (See table 1). From fiscal year 2011 through 2015, occupation loss rates for each of the 5 shortage occupations varied annually, though most saw an overall increase in losses during this period (see figure 1). Physician assistants consistently had the highest loss rate among the 5 shortage occupations. The loss rate for physician assistants increased from 9.3 to 10.9 percent during this period. The loss rate for physical therapists decreased from fiscal year 2011 to 2012 (from 8.3 to 6.4 percent), but then increased to 8.0 percent in fiscal year 2015. In addition to our review of VHA’s 5 shortage occupations, we also identified the 10 clinical occupations within VHA with the highest loss rates as of fiscal year 2015 (they were physician assistant, medical support assistant, medical supply aide and technician, optometrist, nursing assistant, medical records technician, health technician (optometry), physician, practical nurse, and medical records administration). The loss rates for these 10 occupations also varied (ranging from 5.3 percent to 10.9 percent each year from fiscal years 2011 through 2015). We found that 2 of the 5 shortage occupations— physician assistants and physicians—were among this group of the 10 highest loss-rate occupations each year from fiscal year 2011 through 2015. 
Additionally, 2 other occupations—medical support assistants and nursing assistants—were also consistently among this group of the 10 highest loss-rate occupations each year during this period. The 6 remaining occupations were technical positions that were generally small in overall number, such as medical supply aides and technicians. According to VHA HR officials, employees in these occupations generally do not require specialized education or licensing; thus, they tend to be more easily replaced than those in the 5 shortage occupations.

Voluntary Resignations and Retirements Were the Primary Drivers of VHA Losses, though Reasons Differed for Some Occupations

According to VHA’s personnel data, voluntary resignations and retirements accounted for about 90 percent of VHA’s losses from the 5 shortage occupations annually from fiscal year 2011 through fiscal year 2015 (see figure 2). Losses due to voluntary resignations from the 5 shortage occupations averaged 54 percent during this period, and retirements averaged 36 percent. However, for some occupations, voluntary resignations and retirements accounted for a smaller proportion of employee losses. For example, for physical therapists and psychologists, the resignation rate averaged about 44 percent and the retirement rate about 19 percent during the 5-year period. In these occupations, other reasons—primarily expiration of appointments—accounted for about 35 and 33 percent of losses, respectively. According to VHA officials, expirations of appointments occur when a nonpermanent, time-limited appointment ends due to the expiration of the work or of the funds available for the position. For physical therapists and psychologists, the use of trainees, such as interns or post-doctoral fellows, accounted for the majority of losses due to expirations of appointments. Removals accounted for a small proportion (5 percent or less, on average) of losses in each of these 5 occupations.
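The attrition breakdown described above can be reproduced from personnel separation records. The following sketch uses hypothetical records, not VHA’s actual data, to show how each separation category’s share of total losses is computed; the category labels follow those in this statement.

```python
from collections import Counter

def loss_shares(separations):
    """Compute each separation category's share of total losses.

    separations: one category string per departing employee
    (hypothetical records for illustration only).
    Returns {category: percent of total losses}.
    """
    counts = Counter(separations)
    total = sum(counts.values())
    return {cat: round(100 * n / total, 1) for cat, n in counts.items()}

# Hypothetical fiscal-year separation records.
records = (["voluntary resignation"] * 54
           + ["retirement"] * 36
           + ["expiration of appointment"] * 7
           + ["removal"] * 3)
shares = loss_shares(records)
# With this illustrative mix, resignations and retirements together
# account for 90 percent of losses, mirroring the pattern reported above.
```

In practice, such shares would be computed per occupation and per fiscal year before averaging, as was done for the figures cited in this statement.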
Voluntary resignations and retirements accounted for 84 percent of VHA’s losses from the 10 occupations with the highest loss rates annually from fiscal year 2011 through fiscal year 2015. The percentage of losses due to voluntary resignations from these 10 occupations averaged about 55 percent during this period, and retirements averaged 30 percent.

The following summarizes the reasons for leaving VHA cited by exit survey respondents in the 5 shortage occupations:

28 percent said opportunities to advance, and 21 percent said dissatisfaction with certain aspects of the work, such as concerns about management and obstacles to getting the work done, was the primary reason they were leaving. Other than retirement, these were the most commonly cited reasons.

71 percent said that a single event generally did not cause them to think about leaving, while 28 percent reported that it did.

65 percent were generally satisfied with their jobs over the past year, while 25 percent reported that they were not.

50 percent indicated that they were generally satisfied with the quality of senior management, while 31 percent were not.

69 percent said that their supervisors did not try to change their minds about leaving, while 30 percent reported that they did.

73 percent felt that their immediate supervisors treated them fairly at work, while 15 percent reported that they did not.

67 percent felt that they were treated with respect at work, while 19 percent reported they were not.

50 percent reported that one or more benefits would have encouraged them to stay, such as alternative or part-time schedules (25 percent) or student loan repayment or tuition assistance (12 percent), among others.

VHA’s exit survey results for respondents from the 10 occupations with the highest loss rates were similar to those for the 5 shortage occupations.
For example, respondents from these 10 occupations also said that advancement issues (34 percent) and dissatisfaction with certain aspects of the work (20 percent) were among their primary reasons for leaving. Additionally, the majority (71 percent) said that a single event generally did not cause them to think about leaving, and about 47 percent reported that one or more benefits would have encouraged them to stay, such as an alternative or part-time schedule (22 percent) or student loan repayment or tuition assistance (12 percent), among others.

Oversight Improvements Needed for Nurse Recruitment and Retention Initiatives

We and others have highlighted the need for an adequate and qualified nurse workforce to provide quality and timely care to veterans. As we have previously reported, it is particularly difficult to recruit and retain nurses with advanced professional skills, knowledge, and experience, which is critical given veterans’ needs for more complex specialized services. In our 2015 report—which included staff interviews at four medical centers—we found that VHA had multiple system-wide initiatives to recruit and retain its nurse workforce, but three of the four VA medical centers in our review faced challenges offering them. VHA identified a number of key initiatives it offered to help medical centers recruit and retain nurses, focused primarily on providing (1) education and training and (2) financial benefits and incentives. VA medical centers generally had discretion in offering these initiatives. The four medical centers in our review varied in the number of initiatives they offered, and three of these medical centers developed local recruitment and retention initiatives in addition to those offered by VHA. While three of the four medical centers reported that VHA’s initiatives improved their ability to recruit and retain nurses, they also reported challenges.
The challenges included insufficient HR support for medical centers, competition with private sector medical facilities, a reduced pool of nurses with advanced training in rural locations, and employee dissatisfaction. In our 2015 report, we also found that VHA provided limited oversight of its key system-wide nurse recruitment and retention initiatives. Specifically, VHA conducted limited monitoring of medical centers’ compliance with its initiatives. For example, in the past, VHA conducted site visits in response to a medical center reporting difficulty with implementation of one of its initiatives and to assess compliance with program policies, but VHA stopped conducting these visits. Consistent with federal internal control standards, monitoring should be ongoing and should identify performance gaps in a policy or procedure. With limited monitoring, VHA lacks assurance that its medical centers are complying with its nurse recruitment and retention initiatives and that any problems are identified and resolved in a timely and appropriate manner. In addition, VHA has not evaluated the training resources provided to nurse recruiters at VA medical centers, the overall effectiveness of the initiatives in meeting its nurse recruitment and retention goals, or whether any changes are needed. Consistent with federal internal control standards, measuring performance tracks progress toward program goals and objectives and provides important information for making management decisions and resolving problems or program weaknesses. For example, we found that VHA did not know whether medical centers had sufficient training to support nurse recruitment and retention initiatives. In particular, VHA did not provide face-to-face training specifically for nurse recruiters, though regular training was available to those assigned to an HR office as part of training available to all HR staff.
Representatives from a national nursing organization reported that clinical nurse recruiters at VA medical centers often feel less prepared for the position than those assigned to HR offices, but VHA has not evaluated this disparity or its effects. Without evaluations of its collective system-wide initiatives, VHA is unable to determine how effectively the initiatives are meeting VHA policies and the provisions of the Veterans Access, Choice, and Accountability Act. Nor can VHA ultimately determine whether it has an adequate and qualified nurse workforce at its medical centers sufficient to meet veterans’ health care needs.

VA Has Exempted 108 VHA Occupations from the Hiring Freeze

On January 23, 2017, the administration issued an across-the-board 90-day hiring freeze applicable to federal civilian employees in the executive branch. Under the freeze, no positions vacant as of January 22, 2017, could be filled and no new positions could be created. The memorandum stated that the head of any executive department or agency may exempt from the hiring freeze positions that it deems necessary to meet national security or public safety responsibilities. In accordance with the memorandum, as of mid-March 2017, VA had exempted 108 VHA occupations from the freeze because they were necessary to meet VA’s public safety responsibilities. These included the 5 shortage occupations noted earlier (physician, registered nurse, physician assistant, psychologist, and physical therapist), as well as, for example, pharmacist, medical records technician, chaplain, and security guard.

VHA Needs to Strengthen Its HR Capacity to Better Serve Veterans

The recruitment and retention challenges VHA is experiencing with its clinical workforce are due, in part, to VHA’s limited HR capacity, including (1) attrition among its HR employees and unmet staffing targets, and (2) weak HR-related internal control functions.
Until VHA strengthens its HR capacity, it will not be positioned to effectively support its mission.

Attrition of VHA’s HR Staff and Unmet Staffing Targets Undermine VHA’s HR Capacity

In our December 2016 report on VHA’s HR capacity, we found that attrition of HR staff grew from 7.8 percent (312 employees) at the end of fiscal year 2013 to 12.1 percent (536 employees) at the end of fiscal year 2015. In comparison, attrition for all VHA employees was relatively stable during the same period, rising from 8.4 percent in fiscal year 2013 to 9 percent at the end of fiscal year 2015 (see figure 3). Most of the turnover was due to transfers to other federal agencies, followed by resignations and voluntary retirements. In fiscal year 2015, HR specialists transferred to other federal agencies at a rate six times higher than that of all VHA employees. We found that between fiscal years 2011 and 2015, the majority of medical centers fell short of VHA’s HR staffing goals, even with new hires to partially offset annual attrition (see figure 4). VHA established a target staffing ratio of 1 HR staff member to 60 VHA employees to support consistent, accurate, and timely delivery of HR services. However, in fiscal year 2015 about 83 percent (116 of 139) of medical centers did not meet this target. Of these 116 medical centers, about half had a staffing ratio of 1 HR staff member to 80 VHA employees or worse. In other words, each HR employee at those medical centers was serving 20 to 80 more employees than recommended by VHA’s target staffing ratio. According to the HR staff we interviewed, this has reduced HR employees’ ability to keep pace with work demands and has led to such issues as delays in the hiring process, problems with addressing important clinical hiring initiatives, and an increased risk of personnel processing and coding errors.
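The staffing-ratio arithmetic above can be sketched as follows. The headcounts in the example are hypothetical, chosen only to illustrate how a medical center’s ratio compares with VHA’s 1:60 target.

```python
TARGET_RATIO = 60  # VHA target: 1 HR staff member per 60 VHA employees

def hr_shortfall(total_employees, hr_staff):
    """Return (employees served per HR staff member, extra employees each
    HR staff member serves beyond the 1:60 target).

    Illustrative calculation only; inputs are hypothetical headcounts.
    """
    served_per_hr = total_employees / hr_staff
    return served_per_hr, served_per_hr - TARGET_RATIO

# A hypothetical medical center with 2,400 employees and 30 HR staff
# has a ratio of 1:80, i.e., each HR staff member serves 20 more
# employees than the 1:60 target described above.
ratio, extra = hr_shortfall(2400, 30)
```

A center at 1:140 would put 80 extra employees on each HR staff member, the upper end of the shortfall range cited above.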
In addition, VHA’s All Employee Survey results from 2015 indicate that HR staff reported feeling more burned out and less satisfied with their amount of work compared with the VHA-wide average in these areas. Specifically, about 48.1 percent of those who identified as HR specialists reported being satisfied with their amount of work, compared with about 62.5 percent of employees VHA-wide. As noted above, as of mid-March 2017, VA had exempted 108 occupations from the current hiring freeze because VHA maintained they were necessary to meet VA’s public safety responsibilities. However, the broad list of exemptions, ranging from physicians to housekeeping staff, did not include HR specialists, even though VHA ranked HR management third on a list of mission critical occupations in its 2016 Workforce and Succession Strategic Plan. Given the attrition rate that we identified among HR specialists and the HR staffing shortfalls at many VA medical centers, a prolonged hiring freeze could further erode VHA’s capacity to provide needed HR functions. In our 1982 report on hiring freezes under prior administrations, we concluded that government-wide freezes are not an effective means of controlling federal employment because they ignore individual agencies’ missions, workloads, and staffing requirements and can thus disrupt agency operations. We noted that improved workforce planning, rather than arbitrary across-the-board hiring freezes, is a more effective way to ensure that the level of personnel resources is consistent with program requirements.

Weak Internal Control Practices Adversely Affect Key HR Functions

In our December 2016 report, we noted that weaknesses in HR-related internal control functions reduce VHA’s ability to deliver HR services.
Federal standards for internal control require agencies to (1) establish an organizational structure that includes appropriate lines of accountability and authority, (2) evaluate the competencies of HR staff and ensure they have been appropriately trained to do their jobs, and (3) design information systems to meet operational needs and use valid and reliable data to support the agency’s mission. We found shortfalls in each of these practices at VHA. Moreover, as shown in figure 5, the twin challenges of weak internal controls and limited HR capacity have had a compounding effect, creating an environment that undermines VHA’s HR operations and impedes its ability to improve delivery of health care services to veterans. We reported that key areas for improvement include the following:

Strengthen oversight of HR offices. VHA is structured so that the central HR offices at VA and VHA lack adequate oversight of medical center HR offices and thus cannot hold them accountable. This lack of oversight contributes to issues with VHA’s capacity to provide HR functions and limits VHA’s ability to monitor HR improvement efforts and ensure that HR offices apply policies consistently. Our Standards for Internal Control requires an agency’s organizational structure to provide a framework for planning, directing, and controlling operations to achieve agency objectives. VA’s and VHA’s central HR offices are primarily responsible for developing HR policy, guidance, and training, while VISN and medical center HR offices are responsible for implementing HR policies and managing daily HR operations. However, as shown in figure 6, there is no direct line of authority between the VISN and medical center HR offices and the central HR offices in VA and VHA.
According to the director of VA’s Office of Oversight and Effectiveness, the department’s organizational structure enables medical center directors to effectively respond to the needs of veterans and other clients using available resources. However, VA and VHA HR officials with whom we spoke said that the organizational structure limits the department’s ability to oversee individual HR offices, improve hiring processes, train HR staff, and implement consistent classification processes.

Identify and address critical competency gaps. Federal standards for internal control require an agency to ensure that its workforce is competent to carry out assigned responsibilities in order to achieve the agency’s mission. Additionally, our prior work has identified principles for human capital planning that recommend an agency identify skills gaps within its workforce, implement strategies to address these gaps, and monitor its progress. However, VA and VHA’s model for assessing the competencies of HR staff is incomplete and fragmented. As one example, VHA’s internal human capital reviews have consistently found that HR staff competencies are not being assessed and that HR staff lack the necessary skills to deliver high-quality services. Further, although both VA and VHA provide a variety of training programs, HR staff with whom we spoke described barriers to completing them, including a lack of time to take training and train new hires, limited course offerings, and lengthy waiting lists for courses.

Address long-standing information technology challenges. To have an effective internal control system, agencies should design their information systems to obtain and process information to meet operational needs. Likewise, our prior work on strategic human capital management notes that high-performing organizations leverage modern technology to automate and streamline personnel processes to meet customer needs.
Data that are valid and reliable are critical to assessing an agency’s workforce requirements. However, VA faces long-standing, significant information technology (IT) challenges, including outdated, inefficient IT systems and fragmented systems that are not interoperable. With respect to HR IT systems, in May 2016 we reported that VA’s department-wide HR system, Personnel and Accounting Integrated Data (PAID), is one of the federal government’s oldest IT systems and that VA is in the process of replacing it. As part of efforts to replace PAID, VA is developing and implementing an enterprise-wide, modern web-based system called HR Smart. VA officials told us that HR Smart will be implemented in phases across the department. According to agency documentation, HR Smart will enable HR staff to better manage information on employee benefits and compensation; electronically initiate, route, and receive approval for personnel actions; monitor workforce planning efforts and vacancies by medical center and across the department; and generate reports and queries.

As VA continues to develop and implement its new HR system, VHA HR staff must rely on several separate enterprise-wide IT systems to handle core HR activities such as managing personnel actions and hiring and recruiting efforts. HR staff with whom we spoke stated that the amount of time they spent entering duplicate data into four or more non-interoperable systems and reconciling data between the systems has made their jobs more difficult and has taken time away from performing other critical HR duties. According to VA officials, once HR Smart is fully implemented, it should reduce HR offices’ reliance on multiple HR systems and local tools and help streamline HR processes. For example, according to program documentation, VA plans to implement functionality in HR Smart that will allow managers to initiate, review, and approve basic personnel actions independently.
In these cases, HR staff would no longer be responsible for data entry.

In conclusion, VHA’s challenges recruiting and retaining clinical and HR employees are making it difficult for VHA to meet the health care needs of our nation’s veterans. The prior reports on which this testimony is based made three recommendations to VA aimed at improving the oversight of nurse recruitment and retention initiatives and seven recommendations directed at strengthening VHA’s HR capacity. Key recommendations included developing a process to help monitor medical centers’ compliance with key nurse recruitment and retention initiatives and establishing clear lines of authority between VA’s and VHA’s central personnel offices and those in individual medical centers to hold them accountable for improving HR functions. VA concurred with our recommendations and said it is taking steps to implement them. We will monitor VA’s progress in addressing our recommendations and report the results of those efforts to Congress. Chairman Wenstrup, Ranking Member Brownley, and Members of the Subcommittee, this completes our prepared statement. We would be pleased to respond to any questions that you may have.

GAO Contacts and Staff Acknowledgments

If you have any questions about matters discussed in this statement, please contact Robert Goldenkoff at (202) 512-2757 or goldenkoffr@gao.gov, or Debra Draper at (202) 512-7114 or draperd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other key contributors to this testimony include Lori Achman, Assistant Director; Janina Austin, Assistant Director; Tom Gilbert, Assistant Director; Heather Collins, Analyst-in-Charge; Dewi Djunaidy; Sarah Harvey; Meredith Moles; Steven Putansu; Susan Sato; and Jennifer Stratton. This is a work of the U.S. government and is not subject to copyright protection in the United States.
VHA's ability to attract, hire, and retain top talent is critical to its mission to provide quality and timely care for our nation's veterans. GAO's prior work has found that VHA faces long-standing, systemic human capital challenges that limit its ability to improve delivery of health care services to veterans. This statement is based on GAO reports issued between September 2015 and December 2016 and discusses (1) the difficulties VHA is facing in recruiting and retaining staff for key clinical positions; and (2) VHA's capacity to perform key HR functions essential to addressing these difficulties. To conduct these studies, GAO reviewed VHA policies, directives, and other documents; analyzed VHA data; applied relevant federal internal control standards; and interviewed VA and VHA officials and staff in headquarters offices, as well as in eight VHA medical centers across the country. Challenges in recruiting and retaining both clinical and human resources (HR) employees along with weak HR-related internal control practices are undermining the Department of Veterans Affairs' (VA) Veterans Health Administration's (VHA) ability to meet the health care needs of veterans. In July 2016, GAO found that VHA losses in its 5 clinical occupations with the largest staffing shortages, including physicians, registered nurses, and psychologists, increased from about 5,900 employees in fiscal year 2011 to about 7,700 in fiscal year 2015. Voluntary resignations and retirements were the primary drivers. VHA's exit survey indicated that advancement issues or dissatisfaction with certain aspects of the work were commonly cited as the primary reasons people left. In September 2015, GAO found that VHA had multiple initiatives to recruit and retain its nurse workforce, but three of the four VA medical centers GAO reviewed faced challenges offering the initiatives due to, for example, a lack of sufficient HR support and competition with private sector medical facilities. 
GAO also found that VHA had not evaluated the training resources provided to nurse recruiters at VA medical centers. As a result, VHA is unable to determine to what extent its nurse recruitment and retention initiatives are effective and whether VHA has an adequate and qualified nurse workforce to meet veterans' health care needs. In December 2016, GAO found that VHA's limited HR capacity combined with weak internal control practices undermined VHA's HR operations and its ability to improve delivery of health care services to veterans. VA has exempted 108 clinical and administrative occupations from the recent hiring freeze; however, HR occupations are not among the exempt positions. A prolonged freeze could further erode VHA's capacity to provide HR services such as recruiting and hiring of staff who provide medical care to veterans.
Background

Established in 1985, LOGCAP is an Army program that preplans for the use of global corporate resources to support worldwide contingency operations. In the event that U.S. forces deploy, contractor support is then available to a commander as an option. Examples of the types of support available include supply operations, laundry and bath, food service, sanitation, billeting, personnel support, maintenance, transportation, engineering and construction, and power generation and distribution. LOGCAP has been used to support U.S. forces in operations in Somalia, Haiti, and Bosnia and is currently being used to support operations in Afghanistan, Iraq, Kuwait, and Uzbekistan, as well as in other countries. The use of LOGCAP to support U.S. troops in Iraq is the largest effort in the history of the program. The LOGCAP contract comprises a series of task orders that commit the contractor to provide services and the government to pay for those services. Some of the task orders are considered undefinitized contract actions because their terms, specifications, and price are not agreed upon before performance begins. Undefinitized contract actions are used when (1) government interests demand that the contractor be given a binding commitment so that work can begin immediately and (2) negotiating a definitive contract is not possible in sufficient time to meet the requirement. The Defense Federal Acquisition Regulation Supplement (DFARS) requires that undefinitized contract actions include a not-to-exceed cost and a definitization schedule. DFARS also requires that the contract be definitized within 180 days or before 50 percent of the work to be performed is completed, whichever occurs first; the head of an agency may waive this requirement. Both LOGCAP and the Balkans Support Contract are cost-plus-award-fee contracts.
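The DFARS timing rule described above can be sketched as a simple calculation. This is an illustration of the whichever-occurs-first logic only, with a hypothetical award date; it omits the waiver authority and other conditions in the actual regulation.

```python
from datetime import date, timedelta

def definitization_status(award_date, pct_work_complete):
    """Sketch of the DFARS timing rule described above: an undefinitized
    contract action must be definitized within 180 days of award or
    before 50 percent of the work is completed, whichever occurs first.

    Simplified illustration only; the regulation also permits an
    agency-head waiver, which is not modeled here.
    Returns (calendar deadline, whether the 50-percent trigger has hit).
    """
    deadline = award_date + timedelta(days=180)
    work_trigger = pct_work_complete >= 50
    return deadline, work_trigger

# Hypothetical task order awarded January 15, 2004, with 30 percent of
# the work complete: the 180-day calendar deadline governs, and the
# 50-percent work trigger has not yet been reached.
due, work_trigger = definitization_status(date(2004, 1, 15), 30)
```

If the work had reached 50 percent before the 180-day mark, the work trigger, not the calendar deadline, would control.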
Cost-plus-award-fee contracts entitle the contractor to be reimbursed for reasonable, allowable, and allocable costs incurred, to the extent prescribed in the contract. The advantage of cost-plus-award-fee contracts is that they provide financial incentives based on a contractor’s performance against criteria stated in the contract. These contracts enable the government to evaluate a contractor’s performance according to specified criteria and to grant an award amount within designated parameters. Thus, award fees can serve as a valuable tool to help control program risk and encourage excellence in contract performance. But to reap the advantages that cost-plus-award-fee contracts offer, the government must implement an effective award fee process. Responsibility for the LOGCAP contract is divided among multiple DOD and service components. The Army Materiel Command (AMC) is the Army executive agent for LOGCAP, and it has organized the program under its Army Field Support Command (AFSC). According to Army regulation, as the executive agent, AMC is responsible for coordinating LOGCAP requirements (and the requirements of any other AMC umbrella support contracts) with the unified commands, other services, and Army-supported combatant commanders for AMC contractor support. AMC has assigned responsibility for LOGCAP to the commander of AFSC, who has task-organized LOGCAP under three separate offices, all of which report directly to him: (1) the LOGCAP Program Manager, (2) the LOGCAP Contracting Office, and (3) the LOGCAP Support Unit. The key contract management roles and responsibilities of these three offices are detailed in table 1, along with the management roles and responsibilities of LOGCAP customers. The Defense Contract Management Agency (DCMA) also plays a role in overseeing contract activities.
When requested by the procuring contracting officer, DCMA monitors a contractor’s performance and management systems to ensure that the cost, product performance, and delivery schedules comply with the terms and conditions of the contract. As of November 2004, DCMA had 46 employees in Iraq monitoring multiple DOD contracts, including the LOGCAP contract. DCAA performs contract audits of the LOGCAP contract and provides accounting and financial advisory services regarding contracts and subcontracts for AFSC. These services are provided in connection with the negotiation, administration, and settlement of contracts and subcontracts.

The Army Has Taken Steps to Improve LOGCAP Management and Oversight

Overall, the Army has taken numerous actions, or is in the process of taking actions, to improve the management and oversight of LOGCAP as well as related contracts, based on our earlier reporting. Some of the initiatives the Army has completed or has under way that should contribute to stronger management of LOGCAP include (1) rewriting its guidance, including its field manual for the use of contractors on the battlefield and its primary regulation for obtaining contractor support in wartime operations; (2) implementing near- and longer-term training for commanders and logisticians; (3) developing a deployable unit to provide training and assistance for commands using LOGCAP; (4) restructuring the LOGCAP contracting office to provide additional personnel resources in key areas; and (5) eliminating the backlog of contract task orders requiring definitization and conducting award fee boards in order to improve the financial oversight and control of LOGCAP.

Guidance Has Been Rewritten

The absence of guidance on how to effectively use LOGCAP was cited in our 1997 report as an area that needed improvement, and since that time the Army has rewritten two key documents that provide guidance on using LOGCAP.
In January 2003, the Army reissued Field Manual 3-100.21, Contractors on the Battlefield, and it is currently rewriting Army Regulation 715-9, Contractors Accompanying the Force. These documents should significantly improve the supported forces’ understanding of the Army policies, responsibilities, and procedures for using contractors effectively on the battlefield. The Army’s rewritten field manual provides guidance for commanders and their staff at all levels in the planning, management, and use of contractors in each area of operations, as well as guidance describing the relationship between contractors and both the combatant commanders and the Army’s service component commanders. The manual addresses supported forces’ roles and responsibilities in planning contractor support; deploying and redeploying contractor personnel and equipment; and managing, supporting, and protecting contractors. It also addresses the planning process and relates the planning for contractor support to the military decision-making process. The Army’s regulation for contractors accompanying the force is still in draft; however, when completed, we believe it will establish Army policy for planning and managing contracted support. According to an information paper on the draft regulation, it proposes significant changes in three areas. The most significant policy change in terms of contract management and oversight is the recommendation that the supported unit (that is, the customer) be responsible for providing day-to-day control of contractors’ activities. Contract managers will continue to be responsible for the business aspects of managing the contractor workforce. The other two changes deal with (1) the accountability and support of contractor employees and (2) the medical screening, training, and equipping of contractor employees prior to deployment. 
An Army official working on the draft regulation said that once the regulation is finalized, the field manual will be revised to incorporate the changes.

Training and Assistance Programs Are Being Developed

Training and assistance programs have been or are being developed to improve the understanding of the contract and how it is managed and controlled. A 1999 initiative was the creation of a deployable unit, known as the LOGCAP Support Unit, to assist commanders in planning for and using the contract effectively. The unit consists of 66 Army Reserve soldiers with specialties in logistics, engineering, quartermaster duties, transportation, and ordnance. Because customers often have little knowledge of contract processes, the unit has developed training materials that address the issues of planning, operational impacts, execution responsibilities, and keys to success. This training addresses preparing statements of work, independent government cost estimates, and the contractor’s cost estimates and technical plans and has been presented at the Quartermaster School, the Battle Command Training Program, and DCMA’s predeployment training. The LOGCAP Support Unit has also taken steps to increase its size and improve its training. As we reported in July 2004, the unit was deployed in the early stages of Operation Iraqi Freedom, and when the original members returned home, replacement teams were created and staffed with individuals who had no prior LOGCAP or contracting experience. Since then, the unit has developed a program of instruction to enhance LOGCAP Support Unit members’ skills in key areas. As of November 2004, two sessions of the training have been conducted for all members of the unit who are not deployed. The LOGCAP Support Unit has also worked with the LOGCAP Program Manager’s office and DCMA to ensure the consistency of information being provided in each office’s training.
The LOGCAP Program Manager’s office, in conjunction with the LOGCAP Support Unit, has also made efforts to educate the users of LOGCAP services about their responsibilities. When the office has become aware of units preparing for deployment, it has dispatched teams tasked with briefing commanders on the contract and their responsibilities. LOGCAP training has also been presented at senior-level symposiums and made a part of several warfighter exercises. We did not follow up on DOD’s efforts to integrate LOGCAP into professional military education because DOD is in the process of developing a training module that could be utilized by each of the mid- and senior-level service schools.

AFSC Has Restructured the LOGCAP Contracting Office to Provide Additional Personnel Resources in Key Areas

Recently, AFSC restructured the LOGCAP Contracting Office to provide additional resources in key areas. This includes dividing procuring contracting officer functions and contracting branch chief functions as well as establishing definitization and award fee board coordinators. The command also established a Deputy Division Chief position. To assist in the timely resolution of issues in the theater, the command deployed contracting officers to Kuwait and Iraq to establish closer working relationships with commanders and DCMA personnel located there. AFSC is also in the process of reorganizing its contracting office. In response to an August 2004 memorandum from the Deputy Assistant Secretary of the Army for Policy and Procurement to AMC’s Director of Contracting stating that it seemed appropriate to have a member of the Senior Executive Service manage LOGCAP, given its high dollar value, AFSC is in the process of establishing a senior executive position to oversee the AFSC Acquisition Center. A key function of this executive is to provide the AFSC commander with additional leadership and expertise in the LOGCAP arena.
The command also established a sustainment branch to develop and implement an acquisition strategy for the follow-on to the LOGCAP contract. This branch will also lead the command’s efforts to transition existing LOGCAP work to sustainment contracts.

Improvements Have Been Made in Definitizing Contracts and Conducting Award Fee Boards

In our February 1997 report and again in our July 2004 report, we noted that the Army had not definitized LOGCAP task orders within the time frames prescribed in DFARS. Definitization is the process by which the government and the contractor come to agreement, or a determination is made, on the terms, specifications, and price of the task orders. DFARS requires that undefinitized contract actions be definitized within 180 days or before 50 percent of the work to be performed is completed, whichever occurs first. Definitization is important because until the estimate is formalized, the contractor has no real incentive to control costs: increased project costs mean a higher project estimate, which can in turn mean a higher award fee. Definitization is also a necessary first step before the Army can conduct award fee boards that evaluate the contractor’s performance. In our 2004 report on contracting procedures in Iraq, we recommended that the Army definitize outstanding contracts and task orders as soon as possible. Progress is being made in definitizing task orders. When we issued our report in July 2004 on the Army’s use of LOGCAP to support ongoing military operations, the Army had definitized only 13 of 54 task orders that required definitization. As of March 2005, the Army had initiated 11 additional task orders (bringing the total requiring definitization to 65) and had completed the definitization of 31 additional task orders (bringing the total definitized to 44). The Army also reports that it will complete definitization of the remaining 21 by March 31, 2005.
To help with definitizing the two largest task orders—task order 59, which provides base camp services, accommodations, and life support services at various locations in Iraq, and task order 43, the theater transportation mission—the Army established two special cost analysis teams. These teams are led by senior officials with extensive contracting and negotiating backgrounds, augmented by a contractor. In addition, three more teams have been assembled to help definitize the remaining backlogged task orders as well as all newly issued, undefinitized contract actions. Progress has also been made in conducting award fee boards. Our July 2004 report noted that the Army had not yet conducted an award fee board for any of the LOGCAP task orders, even though the contract requires an award fee board to be held every 6 months. Award fee boards are a mechanism for the government to evaluate the contractor’s overall performance and can serve as a valuable tool to control program risk and encourage the contractor’s performance. According to AFSC, 41 task orders require award fee boards, and as of mid-March 2005, the Army had conducted award fee boards for 22 of the 41 task orders. It should be noted, however, that the Army converted 12 task orders that required definitization, and plans to convert an additional 3, to fixed-fee contracts, thereby negating the need to hold award fee boards for these task orders. According to an AFSC contracting official, the decision to convert these task orders was based on a number of factors, including the small size of the task order, the cost to the government to conduct the boards, the Army’s ability to acquire meaningful customer participation, and whether performance is complete on the contract. We stated in our July 2004 report that the government may find it difficult to conduct a board that comprehensively evaluates contractor performance because customers have not been documenting their LOGCAP experience.
Enhanced Management and Oversight of LOGCAP Contract Activities Are Needed in Two Areas

While improvements have been made in a number of areas, there are two areas where management and oversight are lacking. First, there is no formal process for seeking economy and efficiency in the use of LOGCAP. In our July 2004 report, we recommended that teams of subject matter experts be created to travel to locations where contractor services are being provided to evaluate the support. DOD concurred with our recommendation. However, as of February 2005, teams had not been created or deployed to review contract activities. Second, there is a lack of coordination of contract activities between all of the LOGCAP parties. AMC is the executive agent for LOGCAP, but several other DOD components also have important LOGCAP responsibilities, and these components must work in coordination with AMC to ensure the contract’s effective and efficient use. However, AMC does not have command authority over the other components and, while it has sought to influence how the other components carry out their roles, its influence is limited outside the command. We believe that this dispersed responsibility has led to numerous instances of inadequate coordination, which we have cited in our earlier reports.

Steps Needed to Ensure That Contractors Provide Services in an Economical and Efficient Manner Have Not Been Taken at All Task Order Locations

Our previous work has shown that when government officials (including customers) review a contractor’s work for economy and efficiency, savings are generated. For example, U.S. Army Europe’s reviews of contract activities under the Balkans Support Contract resulted in approximately $200 million in savings, or 10 percent of estimated project costs, by reducing services and labor costs and by closing or downsizing camps that were no longer needed. U.S.
Army Europe officials told us that our 2000 report on the management of the Balkans Support Contract was a “wake-up call” to them to be more engaged in managing the contract. Also, when Marine Corps forces replaced Army forces in Djibouti in December 2002, they took over the responsibility for funding LOGCAP services there. Marine commanders immediately undertook a complete review of the statement of work and were able to reduce the $48 million task order by an estimated $8.6 million, or 18 percent. In Iraq, the coalition forces military command reviewed task order 59, change 7 (the task order for life support services in Iraq), and was able to reduce the estimated cost of the task order by over $108 million by eliminating services and an extra dining and laundry facility. Regularly scheduled reviews of all task orders, however, were not taking place in Kuwait or Iraq, and we recommended that teams of subject matter experts be created to travel to locations where contractor services are being provided to evaluate the support and make recommendations on (1) the appropriateness of the services being provided, (2) the level of services being provided, and (3) the economy and efficiency with which the services are being provided. In response to our recommendation, DOD stated that it would issue a policy memorandum that would identify the need to have teams of subject matter experts make periodic visits to evaluate and make recommendations on the logistics support contracts. However, as of February 2005, no policy memorandum had been issued and no teams of subject matter experts had been established or deployed to review contract activities. While DOD continues to agree with our recommendation, its point of contact on our LOGCAP work, in the Office of the Under Secretary of Defense for Logistics and Materiel Readiness, told us that the need to address statutory requirements has taken precedence.
However, some individual efforts to reduce costs have been undertaken, though not as part of a formal review process. For example, requests for services costing more than $50,000 now require a review by a general officer. Also, in December 2004 the commanding general of military forces in Iraq requested that the Army Audit Agency evaluate LOGCAP throughout Iraq to identify fiscal and managerial efficiencies; the effectiveness of contract administration and its impact on cost controls; areas vulnerable to fraud, waste, and abuse; systemic processes and procedures that inherently result in increased costs; and methods for improving the timeliness and accuracy of information presented to assist senior leaders in making timely decisions. He also asked that the Army Audit Agency assess the adequacy of internal controls.

The Coordination of Contract Activities Needs Additional Management Attention

The effective use of the LOGCAP contract largely depends on the combined efforts of a number of separate DOD components, including AMC, the combatant commander, deployed units, DCMA, and DCAA. For example, an AMC pamphlet that provides users with a basic understanding of LOGCAP identifies the responsibility to monitor contractor performance as one that is shared by AMC, DCMA, and the customer. Altogether, the pamphlet identifies 22 LOGCAP responsibilities, of which 16 are shared by two or more components; only 6 rest solely with one component. As the executive agent for LOGCAP, AMC is responsible for directing the worldwide, regional, and country-specific planning, development, and execution of a LOGCAP contract. However, while AMC has sought to influence the manner in which the other components carry out their roles, AMC does not have command authority over the components, and thus its influence is limited.
We believe that this limitation contributes to an overall lack of coordination across the various DOD components that are involved with LOGCAP, and consequently less effective utilization of the LOGCAP contract. For example, we identified the following coordination problems in our previous reports and current work:

- The Army Central Command—the Army command responsible for LOGCAP planning in Iraq and Kuwait—did not follow the planning process described in Army regulations and guidance as it prepared for operations in southwest Asia. While AMC was aware that the Army Central Command’s plan for the use of the contract was not comprehensive, it lacked the authority to direct better planning.

- An acquisition review board in Kuwait was presented with several large preexisting task orders that were to expire within a few weeks, giving the board little time to consider alternatives to LOGCAP or review the requirements to ensure that they did not provide an excessive level of service. Again, AMC was aware that the planning was inadequate but lacked the authority to direct better planning.

- Effective oversight processes were not established by customers at several locations. A senior Army division-level logistician who returned from Iraq in late 2004 told us that there was nothing in the division’s operations orders that identified its responsibilities in managing or overseeing LOGCAP contract activities. Furthermore, the logistician had not seen the contract statement of work that described the division’s requirements, nor had he seen the contractor’s technical execution plan that described how the contractor planned to meet the division’s requirements. He also said that the division had not prepared any formal assessment of the contractor’s performance that could be used at award fee boards. AMC has no authority to direct contract oversight by LOGCAP customers.

- In our July 2004 report, we discussed a disagreement between the LOGCAP contractor and DCAA involving at least $88 million in food service charges to feed soldiers in Iraq. This occurred because the Army had defined a population for each base camp in the statement of work and had directed the contractor to feed that number. The actual number of soldiers served, however, was lower than the number specified in the contract for most locations. The contractor requested payment based on the base camp numbers in the contract, but DCAA believes that the contractor should have been paid on the basis of the actual number of meals served. These differing views created a billing disagreement. According to the 101st Airborne Division official responsible for coordinating LOGCAP activities in the division’s sector in Iraq, the division was not aware of the cost implications of the disparity. He also said that the next higher headquarters for the 101st was not interested in the number of people who were using the dining facility unless the number exceeded the number contracted for in the statement of work.

- Information for award fee boards was not systematically collected from some customers, making it difficult to hold a board that could comprehensively evaluate the contractor’s performance. Award fee boards can serve as a valuable tool to control program risk and encourage contractors’ performance. AFSC recently told us that it had to convert some LOGCAP task orders to cost-plus-fixed-fee task orders partly because it lacked the information to hold an award fee board.

AMC is aware of these problems and has attempted to influence how the other DOD components carry out their roles by deploying personnel to assist the customer in using the LOGCAP contract effectively. However, while AMC can ask the DOD components to carry out their responsibilities, it cannot direct their activities. This affects the extent to which it can control how effectively the contract is utilized.
For example, in response to a series of questions we posed to AFSC regarding managing LOGCAP, an AFSC official provided the following examples of areas where the command has no ability, or limited ability, to influence contract activities:

- Decisions on the level and frequency of services provided under the contract are the combatant commanders’, based on operational requirements. Commanders on the ground ultimately make decisions regarding the composition of task orders and required services based on their operational needs. While AFSC provides input to the planning process, once the commander on the ground makes a decision, AFSC’s mission is to execute that action within established legal, regulatory, and contractual parameters. As an example, an AFSC official said that the command aggressively pursued with the customer a reduction of the major task order for services in Iraq (task order 59). However, the customer’s decision was to maintain the task order in its current form with a planned increase in scope for the follow-on effort. Consequently, AFSC will execute the customer’s requirement.

- AFSC’s procuring contracting officer has the primary responsibility for monitoring the contractor’s performance, and DCMA serves as the contracting officer’s agent in theater to monitor the performance of the contractor. However, DCMA makes an independent assessment regarding the level of staffing and resources allocated to perform its mission.

AMC’s command relationship to the other DOD components is shown in figure 1. As shown, the DOD components with LOGCAP responsibilities have separate chains of command leading to the Secretary of Defense, and only the Office of the Secretary of Defense is in a position to exercise overall coordination of the four components.
To address coordination issues between the components, AFSC has focused on training commanders in using the LOGCAP contract effectively and deploying personnel to work with commanders to improve their understanding of contract oversight practices. However, AFSC officials acknowledge that change will be slow because of the turnover of units and personnel in southwest Asia. Given the $6.8 billion that the Army plans to spend on LOGCAP contract activities in fiscal year 2005, the importance of the contract to the success of current military operations, and the existing command authorities, we believe that more direct oversight and coordination are needed. This oversight would need to be at a sufficiently high level to ensure participation in deliberations and vested in an individual with sufficient stature to effectively advocate for the most efficient use of the contract. We are not suggesting a change in command and control relationships or contractual authority. The view that high-level oversight and coordination are needed is also shared by the former Deputy Commanding General for Logistics in Iraq, who told us that he believed someone was needed to provide overall coordination for the program, and by a senior AFSC official, who told us that there was confusion over program leadership and that there would be value in having someone of general officer stature who could interact with all the DOD components having LOGCAP responsibility to advocate for the most effective use of the contract. In commenting on a draft of this report, the LOGCAP Support Unit commander similarly said that better coordination between the DOD components would improve contract oversight. The commander added that doctrine development and training are a critical part of the solution and that AMC’s current LOGCAP doctrine includes no “user guide” that addresses user responsibilities in using the LOGCAP contract.
Our February 1997 report identified the need for better guidance, and earlier in this report we discussed the Army’s ongoing efforts to improve its guidance.

Conclusions

In response to our prior reports, the Army has taken or is in the process of taking steps designed to improve the management and oversight of LOGCAP as well as related contracts, and it continues to proactively look for additional areas for improvement. This work includes the recent establishment of a Senior Executive Service position to manage LOGCAP within AFSC. However, many other DOD components have responsibilities under LOGCAP, and at the DOD level, no one is in a position to coordinate these components in using the contract. This lack of coordination has resulted in problems in the use of the contract. While we are not suggesting a change in command and control relationships or contractual authority, we believe that establishing a LOGCAP coordinator within DOD, with responsibility for coordinating the use of LOGCAP and with the authority to participate in deliberations and advocate for its most effective use, has the potential to improve the manner in which LOGCAP is used and managed. Our July 2004 report recommended that teams of subject matter experts be created to travel to locations where contractor services are being provided to evaluate the support and make recommendations on the appropriateness of the services being provided, the level of services being provided, and the economy and efficiency with which the services are being provided. We continue to believe that this recommendation has merit and would generate savings.

Recommendation for Executive Action

To make more effective use of LOGCAP, we recommend that the Secretary of Defense take the following actions: Designate a LOGCAP coordinator with the authority to participate in deliberations and advocate for the most effective and efficient use of the LOGCAP contract.
Areas where we believe this coordinator should provide oversight include (1) reviewing planning for the use of LOGCAP to ensure it is in accordance with Army doctrine and guidance; (2) evaluating the types and frequency of services to be provided; and (3) evaluating the extent to which the contract is being used economically and efficiently. Direct the coordinator to advise the Secretary of unresolved differences among the DOD components on how best to use LOGCAP, and to report to the Secretary periodically regarding how effectively LOGCAP is being used.

As you know, 31 U.S.C. 720 requires the head of a federal agency to submit a written statement on the actions taken on our recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform not later than 60 days after the date of this report. A written statement must also be sent to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of this report.

Agency Comments and Our Evaluation

DOD provided written comments on a draft of this report, which were signed by the Acting Deputy Under Secretary of Defense for Logistics and Materiel Readiness. They are included in appendix II. DOD concurred with the report and its recommendations, and described the steps it plans to take to implement them.
Regarding our recommendation that the Secretary of Defense designate a LOGCAP coordinator with the authority to participate in deliberations and advocate for the most effective and efficient use of the contract, DOD stated that it recently issued a new DOD instruction entitled “The Defense Logistics and Global Supply Chain Management System,” which identifies the Under Secretary of Defense for Acquisition, Technology, and Logistics as the Defense Logistics Executive; establishes a Defense Logistics Board; and defines the department's logistics and global supply chain management system as including all DOD activities that provide the combatant commanders with materiel support. According to DOD, oversight of logistics support contracts such as the Army's LOGCAP contract is within the authority and responsibility of the Defense Logistics Executive, and the Defense Logistics Board will include logistics support contracts as part of its mandate to “advise the Defense Logistics Executive on oversight of the Defense logistics and global supply chain management system.” Regarding our recommendation that the coordinator be directed to advise the Secretary of unresolved differences among the DOD components on how best to use LOGCAP, DOD stated that the Defense Logistics Executive, with the advice and assistance of the Defense Logistics Board, would do so. We are sending copies of this report to the Chairman and Ranking Minority Members, House and Senate Committees on Armed Services; the Chairman and Ranking Minority Members, Subcommittees on Defense, House and Senate Committees on Appropriations; Chairman and Ranking Minority Member, House Committee on Government Reform; and other interested congressional committees. We are also sending a copy to the Director, Office of Management and Budget, and we will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. 
If you or your staff have any questions, please contact me at (202) 512-8365 or by e-mail at solisw@gao.gov. Major contributors to this report are included in appendix III.

Scope and Methodology

To determine the actions the Army has taken to improve the management and oversight of the Logistics Civil Augmentation Program (LOGCAP), we met with representatives of the Army Field Support Command’s (AFSC) LOGCAP Program Manager, LOGCAP Contracting Office, and LOGCAP Support Unit to gain a comprehensive understanding of the status of efforts regarding the LOGCAP contract, the contract management process, and issues related to using the contract effectively. We drew upon our prior work, including visits to U.S. military sites using the LOGCAP contract in Kuwait and units that had returned from Iraq. Among the units that had returned from Iraq, we met with representatives of the 101st Airborne Division and the 1st Armored Division. We also met with customers who used the LOGCAP contract, including logistics planners from the Army Central Command, who were responsible for planning for the use of LOGCAP in Operations Enduring Freedom and Iraqi Freedom, to discuss their experiences, and with contracting officials within the same command who played a role in contract management and oversight. To identify further opportunities to use the contract effectively, we undertook a number of actions. We interviewed the former Deputy Commanding General for Logistics in Iraq to discuss his experiences in using LOGCAP. We also met with senior logistics officials from U.S. Army Europe who were responsible for the Balkans Support Contract. As we stated earlier in this report, the Balkans Support Contract is similar to the LOGCAP contract and was established in 1997 when there was a change in LOGCAP contractors.
The purpose of our visit was to discuss their lessons learned in controlling the Balkans Support Contract and the actions they had taken to improve the overall management of that contract. We visited or spoke with individuals at the following locations during our review:

- Department of the Army: Office of the Deputy Chief of Staff-Logistics, Pentagon
- U.S. Army Europe, Heidelberg, Germany
- U.S. Army Central Command (Rear), Fort McPherson, Ga.
- U.S. Army Corps of Engineers—Trans Atlantic Program Center, Winchester, Va.
- 1st Armored Division, Wiesbaden Army Airfield, Wiesbaden, Germany
- U.S. Army Materiel Command, Fort Belvoir, Va.
- U.S. Army Field Support Command, LOGCAP Contracting Office, Rock Island, Ill.
- U.S. Army Field Support Command, LOGCAP Program Office, Fort Belvoir, Va.
- U.S. Army Field Support Command, LOGCAP Support Unit, Fort Belvoir, Va.

We conducted our review from October 2004 through January 2005 in accordance with generally accepted government auditing standards.

Comments from the Department of Defense

GAO Contact and Staff Acknowledgments

In addition to the person named above, Glenn Furbish, Kenneth Patton, Jennifer Thomas, and Earl Williams made key contributions to this report.

GAO’s Mission

The Government Accountability Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability.
The Logistics Civil Augmentation Program (LOGCAP) is an Army program that plans for the use of a private-sector contractor to support worldwide contingency operations. Examples of the types of support available include laundry and bath, food service, sanitation, billeting, maintenance, and power generation. LOGCAP has been used extensively to support U.S. forces in recent operations in southwest Asia, with more than $15 billion in estimated work as of January 2005. Although we have issued two reports on LOGCAP since 1997 that made recommendations to improve the Army's management of the contract, broader issues on coordination of LOGCAP's contract functions were beyond the scope of our earlier work. This report assesses the extent to which the Army is taking action to improve the management and oversight of LOGCAP and whether further opportunities for using this contract effectively exist. The Army has taken or is in the process of taking actions to improve the management and oversight of LOGCAP on the basis of our earlier reporting. The actions that the Army has completed or has underway include (1) rewriting its guidance, including its field manual for using contractors on the battlefield and its primary regulation for obtaining contractor support in wartime operations; (2) implementing near- and longer-term training for commanders and logisticians in using the contract; (3) developing a deployable unit to assist commands using LOGCAP; (4) restructuring the LOGCAP contracting office to provide additional personnel resources in key areas; and (5) taking steps to eliminate the backlog of contract task orders awaiting definitization--that is, coming to agreement on the terms, specifications, and price of the task orders--and conducting award fee boards. While improvements have been made, GAO believes that the Department of Defense (DOD) and the Army need to take additional action in two areas.
First, although DOD continues to agree with our July 2004 recommendation to create teams of subject matter experts to review contract activities for economy and efficiency, it has not done so yet because the need to respond to statutory requirements took precedence. Prior GAO reviews have shown that when commanders look for savings in contract activities, they generally find them, as illustrated in the table. The second area needing attention is the coordination of contract activities between DOD components involved with using LOGCAP. While the Army Materiel Command (AMC) is the executive agent for LOGCAP, other DOD components also play important LOGCAP roles, including the combatant commander, individual deployed units, and the Defense Contract Management Agency. The effective and efficient use of the contract depends on the coordinated activities of each of these agencies. However, at the DOD level, no one is responsible for overall leadership in using the contract and, while AMC has sought to influence the way in which the other components carry out their roles, it does not have command authority over the other components and thus its influence is limited. For example, AMC knew that planning for the use of LOGCAP for Operation Iraqi Freedom was not comprehensive but lacked the command authority to direct better planning. AMC officials believe that training will resolve these problems over time. However, given the importance of LOGCAP to supporting military operations and the billions of dollars being spent on LOGCAP activities, we believe that more immediate and direct oversight is needed.
Background TEA-21 requires that before FTA may approve a grant or loan for a proposed new starts project, it must determine that the proposed project is based upon the results of an alternatives analysis and preliminary engineering and justified by a comprehensive review of the proposed project’s mobility improvements, environmental benefits, cost-effectiveness, and operating efficiencies. Under TEA-21, in applying these justification criteria, FTA must consider a number of factors, including land-use policies and congestion relief. In addition, a project must be supported by an acceptable degree of local financial commitment, including evidence of stable and dependable financing sources to construct, maintain, and operate the system or extension. In evaluating this commitment, FTA is required to determine whether (1) the proposed project plan provides for contingencies in order to cover unanticipated cost increases; (2) each proposed local source of capital and operating funds is stable, reliable, and available within the timetable for the proposed project; and (3) local resources are available to operate the overall proposed mass transportation system without requiring a reduction in existing mass transportation services. FTA is also required to consider a number of additional factors when evaluating a project’s local financial commitment. While these evaluation requirements existed prior to the enactment of TEA-21, TEA-21 requires FTA, for the first time, to (1) develop a rating for each criterion as well as an overall rating of highly recommended, recommended, or not recommended for each project and to include this information in its annual new starts report due to the Congress each February, and (2) issue regulations on the manner in which it will evaluate and rate potential new starts projects. TEA-21’s deadline for the regulations was October 7, 1998. 
TEA-21 also directs FTA to use these evaluations and ratings in approving projects' advancement to the preliminary engineering and final design phases and in deciding which projects will be recommended to the Congress for funding or receive full funding grant agreements. In addition, TEA-21 requires FTA to issue a supplemental report to its annual report to the Congress each August that updates information on projects that have advanced to the preliminary engineering or final design phases since the annual report.

FTA Has Made Substantial Progress in Developing and Implementing a New Starts Evaluation Process That Reflects TEA-21 Requirements

FTA has made substantial progress in developing and implementing an evaluation process that includes the individual criterion ratings and overall project ratings required by TEA-21. Before TEA-21 was enacted in June 1998, FTA had already taken significant steps to revise its new starts evaluation process, since most of the evaluation requirements contained in TEA-21 were introduced by ISTEA. In March 1999, FTA issued its fiscal year 2000 new starts report, which included project evaluations and ratings based upon the revised process. Table 1 highlights key dates in the chronology of FTA's development of its new starts evaluation process. In response to ISTEA's expansion of the evaluation criteria, FTA issued a policy paper in September 1994 proposing its overall assessment strategy and criteria measures. FTA circulated this paper among interested parties, including state and local governments, transit agencies, metropolitan planning organizations, and consultants. In December 1996, after reviewing the comments received, FTA issued a notice describing the revised criteria it would use in 1997 to evaluate candidate new starts projects for fiscal year 1999. The notice also introduced measures for mobility improvements, environmental benefits, operating efficiencies, cost-effectiveness, and transit-supportive land use.
FTA’s existing process for assessing and rating a project’s local financial commitment did not change. In September 1997, FTA issued technical guidance on the ISTEA new starts criteria to assist project sponsors in submitting the information and documentation that the agency needed to prepare the fiscal year 1999 evaluations. The guidance identified the new starts criteria, documented the reporting procedures for the criteria, and outlined how FTA would apply each of the criteria in evaluating candidate projects. For example, in evaluating mobility improvements, FTA said it would look at the expected savings in travel time. FTA also conducted workshops at which the new criteria and reporting requirements were discussed in detail with grantees. FTA’s fiscal year 1999 new starts report, submitted to the Congress in May 1998, included evaluations that were based upon these criteria. The supplemental report to the fiscal year 1999 annual report, required by TEA-21, was issued in January 1999. That report did not include the individual ratings on each criterion and overall project ratings required by TEA-21 because FTA had not yet incorporated these requirements at the time the report was prepared. According to FTA officials, the fiscal year 2000 supplemental report will include these ratings. Noting that TEA-21 made no changes to the existing new starts criteria and added relatively few factors for consideration in applying the criteria, FTA completed the fiscal year 2000 project evaluations using its existing new starts criteria. However, as described in more detail in the next section of this report, FTA revised its evaluation process to provide for the individual criterion ratings and overall ratings required by TEA-21. In September 1998, at the start of the fiscal year 2000 process, FTA used letters and a series of outreach sessions to explain to its grantees how it planned to rate projects for the fiscal year 2000 evaluations. 
It also issued an addendum to its technical guidance to assist grantees in submitting the required information for the fiscal year 2000 process. In its fiscal year 2000 new starts report submitted to the Congress on March 23, 1999, FTA provided evaluations, individual criterion ratings, and an overall project rating for 39 projects in the final design and preliminary engineering phases. FTA rated 8 projects as highly recommended, 11 projects as recommended, and 20 projects as not recommended. The report also included funding recommendations for 11 of the 39 projects that FTA rated.

FTA's Evaluation and Rating Process Assigns Individual Ratings on TEA-21 Criteria and Provides for Overall Project Ratings

FTA's current new starts evaluation process, which it followed to prepare its fiscal year 2000 new starts report, assigns candidate projects individual ratings for each TEA-21 criterion in order to assess each project's justification and local financial commitment. The process also assigns an overall rating for each project. FTA considers these overall ratings in deciding which projects will be recommended for funding or receive full funding grant agreements. As figure 1 illustrates, FTA evaluates and rates projects in three stages. First, FTA evaluates and rates projects on each new starts criterion. Second, FTA uses these individual ratings on each criterion to assign summary project justification and local financial commitment ratings for each project. Finally, FTA combines these two ratings to assign an overall project rating.
As figure 1 shows, FTA's current process provides for individual ratings for the four project justification criteria identified by TEA-21 (mobility improvements, environmental benefits, operating efficiencies, and cost-effectiveness) as well as for transit-supportive land-use policies. Similarly, to evaluate a project's financial commitment, the project is rated on its capital and operating finance plans and the local share of project costs. According to FTA, the process also takes into account the factors for consideration identified in TEA-21, such as congestion relief. For example, FTA considers this factor in evaluating and rating a project's cost-effectiveness. To develop and assign individual ratings on each criterion, FTA holds a series of meetings to review and analyze information submitted by project sponsors. For the fiscal year 2000 process, participants in these meetings included officials and staff from FTA's Offices of Planning, Budget and Policy, and Program Management, and contractors who made the initial financial and land-use assessments. On the basis of an analysis of the documentation submitted by project sponsors, FTA assigns each project a descriptive rating of high, medium-high, medium, low-medium, or low for each project justification and local financial commitment criterion. Appendix I summarizes the measures that FTA uses in applying the criteria to develop these ratings. Once the individual criterion ratings are completed, FTA assigns summary ratings of project justification and local financial commitment by combining the individual criterion ratings. In developing the summary project justification ratings, FTA gives the most weight to the criteria for transit-supportive land use, cost-effectiveness, and mobility improvements. For the summary local financial commitment rating, the project's capital plan is given the most consideration.
In assigning a summary financial commitment rating, FTA will not give a project a rating higher than low-medium if its capital finance plan received a low-medium or low rating. FTA combines these summary ratings to assign an overall project rating of highly recommended, recommended, or not recommended. To receive the highly recommended rating, a project must have summary ratings of at least medium-high for project justification and local financial commitment. To receive a rating of recommended, the project must have summary ratings of at least medium. A project is rated as not recommended when either summary rating is less than medium. In its fiscal year 2000 new starts report, as noted previously, FTA rated 8 projects as highly recommended, 11 projects as recommended, and 20 projects as not recommended. Of the 20 projects rated as not recommended, 18 received financial commitment ratings of less than medium. In assigning overall project ratings, however, FTA emphasized the continuous nature of project evaluation. Throughout the report, FTA underscored the fact that as candidate projects proceed through the project development process, information concerning costs, benefits, and impacts will be refined. Consequently, FTA will update its ratings and recommendations at least annually to reflect new information, changing conditions, and refined financing plans. Thus, a project that received a not recommended rating in the fiscal year 2000 report could receive a higher rating in the fiscal year 2001 report to reflect project changes. According to FTA, the overall project rating in the new starts report is intended to reflect a project’s merits at a particular point in time and does not translate directly into a funding recommendation or commitment in a given year. In deciding which projects will be recommended for funding or receive a full funding grant agreement, FTA considers projects with an overall rating of recommended or better. 
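The rating rules described above reduce to a few simple thresholds: a five-level descriptive scale, a low-medium cap on the summary financial rating when the capital finance plan rates low or low-medium, and cutoffs at medium-high and medium for the overall rating. The sketch below is our own illustrative rendering of those rules in Python; the function and variable names are hypothetical and do not come from FTA's process documentation.

```python
# Hypothetical sketch of the rating logic described in the report.
# The scale and thresholds come from the text; the names are illustrative only.

SCALE = {"low": 1, "low-medium": 2, "medium": 3, "medium-high": 4, "high": 5}

def summary_financial_rating(proposed: str, capital_plan: str) -> str:
    """Cap the summary financial rating at low-medium when the capital
    finance plan is rated low or low-medium."""
    if SCALE[capital_plan] <= SCALE["low-medium"]:
        return min(proposed, "low-medium", key=SCALE.get)
    return proposed

def overall_rating(justification: str, financial: str) -> str:
    """Combine the two summary ratings into an overall project rating."""
    lowest = min(SCALE[justification], SCALE[financial])
    if lowest >= SCALE["medium-high"]:
        return "highly recommended"
    if lowest >= SCALE["medium"]:
        return "recommended"
    return "not recommended"

print(overall_rating("medium-high", "medium"))  # recommended
```

Under these rules, for instance, a project with a strong justification but a low capital finance plan would have its financial rating capped at low-medium and so would be rated not recommended overall, which is consistent with the report's observation that 18 of the 20 not-recommended projects had financial commitment ratings below medium.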
However, some projects rated as highly recommended or recommended may not be ready for funding because they are still in the early stages of preliminary engineering. In making funding recommendations, FTA gives first priority to projects with existing grant agreements. After these projects are accounted for, priority is given to projects that are ready to begin final design or construction. In accordance with these funding principles, the President's fiscal year 2000 budget and FTA's fiscal year 2000 new starts report recommended $962.72 million in funding for 25 projects. As table 2 shows, these recommendations included $668.18 million for 14 projects currently under construction, $216.11 million for seven projects expected to enter final design by the beginning of fiscal year 2000, and $32 million for four projects in the later stages of preliminary engineering. TEA-21 limits the amount of new starts funds that can be used for activities other than final design and construction to 8 percent of total new starts funding, or $78.43 million for fiscal year 2000. After accounting for the $32 million for the four recommended projects in preliminary engineering, $46.43 million remains available for other projects currently in or approved to enter preliminary engineering by the end of fiscal year 2000. FTA plans to allocate the remaining $46.43 million on the basis of its review of funding applications and project ratings. The seven remaining projects in preliminary engineering that received overall ratings of recommended or highly recommended but no funding recommendation in the fiscal year 2000 report would be eligible to seek this funding. Appendix II presents the ratings and funding recommendations in FTA's fiscal year 2000 new starts report.

FTA Needs to Issue Regulations to Satisfy TEA-21 Requirements

While FTA has implemented a new starts evaluation process that addresses the TEA-21 requirements, it still needs to issue final regulations to formalize the process.
FTA did not meet the TEA-21 deadline of October 7, 1998, for issuing these regulations. According to FTA, priority was given to satisfying the rating requirements in TEA-21 and issuing the fiscal year 2000 report. FTA issued a notice of proposed rulemaking on April 7, 1999. The process described in the proposed rule mirrors the process FTA used to prepare the fiscal year 2000 report. Comments on the proposed rule are due on July 6, 1999. FTA plans to issue the final regulations in the summer of 1999. FTA has said that any changes resulting from comments on the proposed rule will be incorporated into the evaluation and rating process for the fiscal year 2001 annual report. We will continue to monitor FTA's efforts to implement TEA-21 and report our results in our next annual report.

Agency Comments

We provided the Department of Transportation with a draft of this report for review and comment. We met with Federal Transit Administration officials, including the Director for Policy Development and officials from the Offices of Planning and of Budget and Policy. FTA agreed with the report's contents and provided us with some technical comments, which we have incorporated where appropriate.

Scope and Methodology

To address the issues discussed in this report, we reviewed the legislation governing new starts transit projects, FTA's annual new starts reports for fiscal years 1999 and 2000, its technical guidance on the new starts criteria, and other documentation by the agency of its processes and procedures for evaluating projects. We also interviewed appropriate FTA headquarters and regional officials, the contractors who conducted the financial and land-use assessments, and selected grantees whose projects were assessed in 1997 and 1998. In addition, we attended FTA's outreach sessions, in which officials explained the new TEA-21 requirements and how FTA intended to meet these requirements.
We also observed meetings at the agency in which the project ratings on financial commitment and land use were deliberated. We performed our work in accordance with generally accepted government auditing standards from July 1998 through April 1999. We are sending copies of this report to the Honorable Rodney E. Slater, Secretary of Transportation; the Honorable Gordon J. Linton, Administrator, Federal Transit Administration; the Honorable Jacob Lew, Director, Office of Management and Budget; and other interested parties. We are also making copies available to others on request. Major contributors to this report are listed in appendix III. Please call me at (202) 512-2834 if you have any questions about this report.

New Starts Criteria and Related Performance Measures

Table I.1 presents a summary of each of the new starts criteria and the related performance measures that the Federal Transit Administration uses to appraise candidate new starts projects as part of its evaluation and rating process.

FTA's Fiscal Year 2000 New Starts Ratings and Funding Recommendations

Tacoma-Seattle (Sounder) Commuter Rail
Baltimore - Central Corridor LRT Double Track
Minneapolis - Hiawatha Corridor Transitway
Raleigh-Durham - Research Triangle Regional Rail
Austin – Northwest/North Central Corridor
New York City - LIRR East Side Access
N. New Jersey (Hudson-Bergen MOS-2)
Orange County - Irvine-Fullerton Corridor
Pittsburgh - Martin Luther King, Jr., E. Busway Extension
Pittsburgh - Stage II LRT Reconstruction
San Diego - Oceanside Escondido Corridor
San Francisco – Bayshore - Third Street LRT
Ferry Capital Projects in Alaska or Hawaii (Section 5309(m)(5)(A))

Major Contributors to This Report

Resources, Community, and Economic Development Division, Washington, D.C.
Atlanta Field Office
Kirk Kiester, Evaluator-in-Charge
Pursuant to a legislative requirement, GAO provided information on the Federal Transit Administration's (FTA) efforts to develop and implement evaluation and rating processes and procedures for evaluating new start transit projects for federal funding, focusing on: (1) the status of FTA's efforts; (2) how FTA implemented the Transportation Equity Act for the 21st Century (TEA-21) requirements for evaluating, rating, and recommending projects; and (3) open issues that FTA needs to resolve to fully satisfy TEA-21 requirements. GAO noted that: (1) FTA has made substantial progress in developing and implementing a new starts evaluation and rating process, as required by TEA-21; (2) FTA had already revised its new starts evaluation process, since the criteria and most of the factors that TEA-21 requires FTA to consider while applying the criteria were also contained in the Intermodal Surface Transportation Efficiency Act of 1991; (3) in 1997, FTA first applied these criteria for its fiscal year (FY) 1999 project evaluations; (4) in 1998, FTA expanded its evaluation process to include the TEA-21 requirement to rate projects as either highly recommended, recommended, or not recommended and to provide individual ratings on each criterion; (5) the evaluation process FTA followed to prepare its FY 2000 new starts report uses ratings based on specific financial and project justification criteria to build toward an overall project rating; (6) FTA uses this rating information in deciding which projects will receive full funding grant agreements and to make funding recommendations to the Congress in its annual new starts report; (7) while FTA has implemented a new starts evaluation and rating process for FY 2000 that addressed TEA-21 requirements, it has not issued final regulations on the evaluation and rating process, as required by the legislation; and (8) FTA issued a proposed rule on April 7, 1999, and plans to issue final regulations in summer 1999.
Background

Since the 1960s, the United States has operated two separate operational polar-orbiting meteorological satellite systems: the Polar-orbiting Operational Environmental Satellite (POES) series, which is managed by NOAA, and the Defense Meteorological Satellite Program (DMSP), which is managed by the Air Force. These satellites obtain environmental data that are processed to provide graphical weather images and specialized weather products. These satellite data are also the predominant input to numerical weather prediction models, which are a primary tool for forecasting weather days in advance—including forecasting the path and intensity of hurricanes. The weather products and models are used to predict the potential impact of severe weather so that communities and emergency managers can help prevent and mitigate its effects. Polar satellites also provide data used to monitor environmental phenomena, such as ozone depletion and drought conditions, as well as data sets that are used by researchers for a variety of studies such as climate monitoring. Unlike geostationary satellites, which maintain a fixed position relative to the earth, polar-orbiting satellites constantly circle the earth in an almost north-south orbit, providing global coverage of conditions that affect the weather and climate. Each satellite makes about 14 orbits a day. As the earth rotates beneath it, each satellite views the entire earth's surface twice a day. Currently, there is one operational POES satellite and two operational DMSP satellites that are positioned so that they cross the equator in the early morning, midmorning, and early afternoon. In addition, the government relies on a European satellite, called the Meteorological Operational (MetOp) satellite, for satellite observations in the midmorning orbit. Together, the satellites ensure that, for any region of the earth, the data provided to users are generally no more than 6 hours old.
Besides the operational satellites, six older satellites are in orbit that still collect some data and are available to provide limited backup to the operational satellites should they degrade or fail. The last POES satellite was launched in February 2009. The Air Force plans to launch its two remaining DMSP satellites as needed. Figure 1 illustrates the current operational polar satellite constellation.

Polar Satellite Data and Products

Polar satellites gather a broad range of data that are transformed into a variety of products. Satellite sensors observe different bands of radiation wavelengths, called channels, which are used for remotely determining information about the earth's atmosphere, land surface, oceans, and the space environment. When first received, satellite data are considered raw data. To make them usable, processing centers format the data so that they are time-sequenced and include earth-location and calibration information. After formatting, these data are called raw data records. The centers further process these raw data records into channel-specific data sets, called sensor data records and temperature data records. These data records are then used to derive weather and climate products called environmental data records. These environmental data records include a wide range of atmospheric products detailing cloud coverage, temperature, humidity, and ozone distribution; land surface products showing snow cover, vegetation, and land use; ocean products depicting sea surface temperatures, sea ice, and wave height; and characterizations of the space environment. Combinations of these data records (raw, sensor, temperature, and environmental data records) are also used to derive more sophisticated products, including outputs from numerical weather models and assessments of climate trends. Figure 2 is a simplified depiction of the various stages of satellite data processing, and figure 3 depicts examples of two different weather products.
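The processing chain described above (raw data, raw data records, sensor and temperature data records, environmental data records) can be summarized in a small sketch. The stage names and descriptions below come from the report's text; the Python structure itself is purely illustrative and does not represent any agency's actual processing software.

```python
# Illustrative outline of the satellite data processing stages described in the
# text; stage names come from the report, the structure is our own.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Stage:
    name: str
    derived_from: Optional[str]  # None for the first stage
    description: str

PIPELINE = [
    Stage("raw data", None,
          "satellite data as first received"),
    Stage("raw data records", "raw data",
          "time-sequenced, with earth-location and calibration information"),
    Stage("sensor and temperature data records", "raw data records",
          "channel-specific data sets"),
    Stage("environmental data records", "sensor and temperature data records",
          "derived weather and climate products (clouds, snow cover, sea ice, etc.)"),
]

for stage in PIPELINE:
    source = stage.derived_from or "satellite downlink"
    print(f"{source} -> {stage.name}: {stage.description}")
```

As the report notes, combinations of all four record types also feed more sophisticated products, such as numerical weather model outputs and climate trend assessments.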
With the expectation that combining the POES and DMSP programs would reduce duplication and result in sizable cost savings, a May 1994 Presidential Decision Directive required NOAA and DOD to converge the two satellite programs into a single satellite program—NPOESS—capable of satisfying both civilian and military requirements. The converged program, NPOESS, was considered critical to the nation's ability to maintain the continuity of data required for weather forecasting and global climate monitoring. NPOESS satellites were expected to replace the POES and DMSP satellites in the morning, midmorning, and afternoon orbits when they neared the end of their expected life spans. To manage this program, DOD, NOAA, and NASA formed a tri-agency Integrated Program Office, with NOAA responsible for overall program management for the converged system and for satellite operations, the Air Force responsible for acquisition, and NASA responsible for facilitating the development and incorporation of new technologies into the converged system. When the primary NPOESS contract was awarded in August 2002, the program was estimated to cost about $7 billion through 2018. The program was to include the procurement and launch of 6 satellites over the life of the program, with each satellite hosting a subset of 13 instruments. The planned instruments included 11 environmental sensors and two systems supporting specific user services (see table 1). To reduce the risk involved in developing new technologies and to maintain climate data continuity, the program planned to launch the NPP demonstration satellite in May 2006. NPP was to demonstrate selected instruments that would later be included on the NPOESS satellites. The first NPOESS satellite was to be available for launch in March 2008. In the years after the program was initiated, NPOESS encountered significant technical challenges in sensor development, program cost growth, and schedule delays.
By November 2005, we estimated that the program's cost had grown to $10 billion, and the schedule for the first launch was delayed by almost 2 years. These issues led to a 2006 decision to restructure the program, which reduced the program's functionality by decreasing the number of planned satellites from 6 to 4, and the number of instruments from 13 to 9. As part of the decision, officials decided to reduce the number of orbits from three (early morning, midmorning, and afternoon) to two (early morning and afternoon) and to rely solely on the European satellites for midmorning orbit data. Even after the restructuring, however, the program continued to encounter technical issues in developing two sensors, significant tri-agency management challenges, schedule delays, and further cost increases. Because the schedule delays could lead to satellite data gaps, in March 2009, agency executives decided to use NPP as an operational satellite. With program cost estimates expected to reach about $15 billion and launch schedules that were delayed by over 5 years, the Executive Office of the President formed a task force, led by the Office of Science and Technology Policy, to investigate the management and acquisition options that would improve the NPOESS program. As a result of this review, in February 2010, the Director of the Office of Science and Technology Policy announced that NOAA and DOD would no longer jointly procure the NPOESS satellite system; instead, each agency would plan and acquire its own satellite system. Specifically, NOAA would be responsible for the afternoon orbit and the observations planned for the first and third satellites. DOD would be responsible for the early morning orbit and the observations planned for the second and fourth satellites. The partnership with the European satellite agencies for the midmorning orbit was to continue as planned.
When this decision was announced, NOAA immediately began planning for a new satellite program in the afternoon orbit—called JPSS—and DOD began planning for a new satellite program in the morning orbit—called DWSS. Using NPP as an operational satellite means that the satellite's data will be used to provide climate and weather products.

Overview of Initial NOAA and DOD Plans for Replacement Satellite Programs

After the decision was made to disband the NPOESS program in 2010, NOAA and DOD began planning for their respective satellite programs. For NOAA, these plans included:

relying on NASA for system acquisition, engineering, and integration;
completing, launching, and supporting NPP;
acquiring and launching two JPSS satellites for the afternoon orbit;
developing and integrating five sensors on the two satellites;
finding alternate host satellites for selected instruments that would not be accommodated on the JPSS satellites; and
providing ground system support for NPP, JPSS, and DWSS; data communications for MetOp and DMSP; and data processing for NOAA's use of microwave data from an international satellite.

In 2010, NOAA estimated that the life cycle costs of the JPSS program would be approximately $11.9 billion for a program lasting through fiscal year 2024, which included $2.9 billion in NOAA funds spent on NPOESS through fiscal year 2010. For its part, DOD planned that its DWSS program would comprise two satellites, the first to be launched no earlier than 2018. Each satellite was to have three sensors: a Visible/Infrared Imager/Radiometer Suite, a Space Environment Monitor, and a microwave imager/sounder. As of September 2011, DOD planned to conduct a thorough system requirements review before finalizing DWSS functionality, cost, and schedule. Table 2 compares the planned cost, schedule, and scope of the three satellite programs at different points in time.
We have issued a series of reports on the NPOESS program highlighting technical issues, cost growth, and key management challenges affecting the tri-agency program structure. For example, in June 2009, we added to our previous concerns about the tri-agency oversight of the NPOESS program. We reported that the Executive Committee responsible for providing direction to the program was ineffective because the DOD acquisition executive did not attend committee meetings; the committee did not track action items to closure; and many of the committee’s decisions did not achieve the desired outcomes. We also reported that the program’s cost estimates were expected to rise and that the launch schedules were expected to be delayed. To help address these issues, we made recommendations to, among other things, improve executive-level oversight and develop realistic time frames for revising cost and schedule baselines. Agency officials agreed with our recommendations and took steps to improve executive oversight. In May 2010, we again reported on risks that jeopardized the continuity of weather and climate data (GAO, Polar-Orbiting Environmental Satellites: Agencies Must Act Quickly to Address Risks That Jeopardize the Continuity of Weather and Climate Data, GAO-10-558 (Washington, D.C.: May 27, 2010)), and the agencies took steps to address the risks we identified. For example, NOAA transferred key staff from the NPOESS program to the JPSS program and coordinated with the Air Force to negotiate contract changes. Agencies Transferred Responsibilities to Their Respective Programs, but NOAA’s Is Being Downsized, and DOD’s Has Been Terminated Following the decision to disband NPOESS, both NOAA and DOD were responsible for transferring key management responsibilities to their respective programs. This entailed (1) establishing separate program offices for their respective follow-on programs, (2) establishing requirements for their respective programs, and (3) transferring contracts from NPOESS to the new programs. Both agencies made progress on these activities, but recent events have resulted in major program changes. 
Specifically, NOAA established its JPSS program office, established program requirements, and transferred most sensor contracts. However, the agency now plans to remove key requirements, including selected sensors and ground systems, to keep the program within budget. DOD established its DWSS program office and modified its contracts accordingly before deciding in early 2012 to terminate the program and reassess its requirements (as directed by Congress). NOAA Established the JPSS Program and Contracts for Most Components, but Plans to Modify Requirements to Limit Costs After the February 2010 decision to disband NPOESS, NOAA transferred management responsibilities to its new satellite program, defined its requirements, and transferred contracts to the new program. Specifically, NOAA established a program office to guide the development of the NPP and JPSS satellites. NOAA also worked with NASA to establish its program office to oversee the acquisition, system engineering, and integration of the satellite program. By 2011, the two agencies had established separate—but colocated—JPSS program offices, each with different roles and responsibilities. NOAA’s program office is responsible for programmatic activities related to the satellites’ development, including managing requirements, budgets, and interactions with satellite data users. NASA’s program office, in turn, is responsible for the development and integration of the sensors, satellites, and ground systems. In January 2012, both agencies approved a management control plan that delineates the two agencies’ roles, responsibilities, and executive oversight structure. In September 2011, NOAA established its official requirements document for the JPSS program. This document defines the components of the program as well as the expected performance of the satellites and ground systems. 
Key components include NPP, the two JPSS satellites, the five sensors, a distributed ground-based network of satellite data receptor sites, and four ground-based data processing systems. This system is to deliver 31 satellite data products within 80 minutes of observation on the first satellite and within 30 minutes on the second satellite. Over the 2 years since the decision to disband NPOESS, NOAA has also been working to transfer and refine the contracts for four of the sensors that are to be launched on the first JPSS satellite from the Air Force to NASA. The program completed the transfer of all of the contracts by September 2011 and then began the process of updating the contracts to match JPSS’ requirements. This process has been completed for three sensors (CrIS, OMPS, and ATMS). Program officials expect to finalize changes to the contract for the last sensor (VIIRS) in June 2012. While NOAA and NASA have made progress in transferring management and contract responsibilities from NPOESS to the JPSS program, NOAA recently decided to modify its requirements in order to limit program costs. From January to December 2011, the agency went through a cost estimating exercise for the JPSS program. This exercise included identifying key program elements, documenting assumptions, performing historical and parametric analysis to determine reasonable estimates for the elements, seeking an independent cost estimate, and reconciling the two estimates. At the end of this exercise, NOAA validated that the cost of the full set of JPSS functions from fiscal year 2012 through fiscal year 2028 would be $11.3 billion. After adding the agency’s sunk costs of $3.3 billion, the program’s life cycle cost estimate totaled $14.6 billion. This amount is $2.7 billion higher than the $11.9 billion estimate for JPSS when NPOESS was disbanded in 2010. 
According to NOAA officials, this increase is primarily due to a 4-year extension of the program from 2024 to 2028, the addition of previously unbudgeted items such as the free flyers, cost growth associated with transitioning contracts from DOD to NOAA, and the program’s decision to slow down work on lower-priority elements because of budget constraints in 2011. NOAA’s $3.3 billion sunk costs included $2.9 billion through fiscal year 2010 and about $400 million in fiscal year 2011. In working with the Office of Management and Budget to establish the president’s fiscal year 2013 budget request, NOAA officials stated that they agreed to fund JPSS at roughly $900 million per year through 2017, to merge funding for two climate sensors into the JPSS budget, and to cap the JPSS life cycle cost at $12.9 billion through 2028. Because this cap is $1.7 billion below the expected $14.6 billion life cycle cost of the full program, NOAA decided to remove selected elements from the satellite program. While final decisions on what will be removed are expected by the end of June 2012, NOAA may discontinue: support for OMPS operations on JPSS-1; development of two of the three planned Total and Spectral Solar Irradiance Sensors, the spacecraft for all three of these sensors, and the launch vehicle for the three sensors; development of the OMPS and CERES sensors on JPSS-2; plans for a network of ground-based receptor stations; planned improvements in the time it takes to obtain satellite data from JPSS-2 (the requirement was to provide data in 30 minutes; instead, the requirement will remain at the JPSS-1 level of 80 minutes); plans to install an Interface Data Processing Segment (IDPS) at two Navy locations; and plans to support ground operations for DOD’s future polar satellite program. NOAA anticipates modifying its official requirements documents to reflect these changes by the end of 2012. The removal of these elements will affect both civilian and military satellite data users. 
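The budget figures above fit together with simple arithmetic. The sketch below (in Python, purely illustrative; it uses only the dollar amounts cited in this report) cross-checks them:

```python
# JPSS life cycle cost figures cited in this report, in billions of dollars.
validated_cost_fy2012_2028 = 11.3   # validated estimate for FY2012 through FY2028
sunk_costs = 2.9 + 0.4              # spent through FY2010, plus about $400M in FY2011

life_cycle_estimate = round(validated_cost_fy2012_2028 + sunk_costs, 1)
growth_over_2010_estimate = round(life_cycle_estimate - 11.9, 1)  # vs. the 2010 estimate
shortfall_vs_cap = round(life_cycle_estimate - 12.9, 1)           # vs. the $12.9B cap

print(life_cycle_estimate)        # 14.6
print(growth_over_2010_estimate)  # 2.7
print(shortfall_vs_cap)           # 1.7
```

The $1.7 billion shortfall against the cap is what drives the list of program elements NOAA may discontinue.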
The loss of OMPS and CERES satellite data could cause a break in the over 30-year history of satellite data and would hinder the efforts of climatologists and meteorologists focusing on understanding changes in the earth’s ozone coverage and radiation budget. The loss of ground-based receptor stations means that NOAA may not be able to improve the timeliness of JPSS-2 satellite data from 80 minutes to the current 30-minute requirement, and as a result, weather forecasters will not be able to update their weather models using the most recent satellite observations. Further, the loss of the data processing systems at the two Navy locations means that NOAA and the Navy will need to establish an alternative way to provide data to the Navy. DOD Established and Subsequently Terminated Its DWSS Program After the February 2010 decision to disband NPOESS, DOD transferred management responsibilities to its new satellite program, started defining its requirements, and modified contracts to reflect the new program. Specifically, in 2010, DOD established a DWSS program office and started developing plans for what the satellite program would entail. The DWSS program office, located at the Space and Missile Systems Center in Los Angeles, California, was given responsibility for the acquisition, development, integration, and launch of the DWSS satellites. Because this is considered a major acquisition, it is overseen by the Defense Acquisition Board and the Under Secretary of Defense for Acquisition, Technology, and Logistics. In August 2010, the agency determined that the DWSS program would include two satellites and that each satellite would host three sensors. Over the following year, the program office developed a program plan and a technical description, and planned to define requirements in early 2012. Further, the agency started modifying its existing contracts with the NPOESS contractor to reflect the new program. 
By May 2011, the program office had contracted for DWSS activities through the end of 2012. These efforts, however, have been halted. In early 2012, at congressional direction, DOD decided to terminate the DWSS program because it still had two DMSP satellites to launch and did not yet need the DWSS satellites. In January 2012, the Air Force halted work on the program. DOD is currently identifying alternative means to fulfill its future environmental satellite requirements. NPP Is in Orbit and Transmitting Data; Development of the First JPSS Satellite Has Begun, but Critical Steps Remain In September 2010, shortly after NPOESS was disbanded, NOAA and NASA established plans for both NPP and JPSS. These plans included launching NPP by the end of October 2011 and completing an early on-orbit checkout of the NPP spacecraft and sensors (called commissioning) by the end of January 2012; completing all NPP calibration and validation activities by October 2013; and developing, testing, and launching JPSS-1 by the end of 2014 and JPSS-2 by the end of 2017. Program officials currently estimate that JPSS-1 will launch by March 2017 and JPSS-2 will launch by December 2022. NOAA officials explained that part of the reason for the change in launch dates is that the program’s budget under the 2011 continuing resolution was only one-third of what NOAA had anticipated. Thus, program officials decided to defer development of the first JPSS satellite in order to keep NPP on track. NOAA officials noted that the JPSS launch dates could change as the agency finalized its program planning activities. NPP Is in Orbit; Sensor Data Are Being Calibrated for Use NPP was successfully launched on October 28, 2011. After launch, NASA began the process of activating the satellite and commissioning the instruments. This process ended at the beginning of March 2012, a little over a month after the planned completion date at the end of January 2012. 
The delay was caused by an issue on the VIIRS instrument that caused the program to halt commissioning activities in order to diagnose the problem. Specifically, the quality of VIIRS data in certain bands was degrading much more quickly than expected. NASA and the JPSS program office subsequently identified the problem as contamination on VIIRS mirrors. NOAA and NASA program officials, including the JPSS director and project manager, reported that this issue is not expected to cause the instrument to fall below its performance specifications. Figure 4 depicts an image of Earth using VIIRS data from NPP. Program officials are working to complete NPP calibration and validation activities by October 2013, but they acknowledge that they may encounter delays in developing satellite products. NOAA is receiving data from the five sensors on the NPP satellite, and has begun calibration and validation. According to NOAA and NASA officials, during this time, the products go through various levels of validation, including a beta stage (products have been minimally validated, but are available to users so that they can begin working with the data); a provisional stage (products are not optimal, but are ready for operational evaluation by users); and a validated stage (products are ready for operational use). The amount of time it takes for a product to be fully validated depends on the sensor and the type of product. For example, NOAA provided a provisional ozone environmental data record from the OMPS sensor in April 2012 and expects to provide three beta environmental data records from the CrIS sensor by October 2012. NOAA’s users began to use validated ATMS products in May 2012, and NOAA expects that they will increase the amount and types of data they use in the following months. 
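The beta-provisional-validated progression described above is an ordered maturity ladder. A minimal sketch of that ladder follows (illustrative only; the class and function names are assumptions, not NOAA's actual validation tooling, and only the stage definitions come from this report):

```python
from enum import IntEnum

class MaturityStage(IntEnum):
    """NPP data product validation stages, in increasing maturity."""
    BETA = 1         # minimally validated, but available so users can begin working with the data
    PROVISIONAL = 2  # not optimal, but ready for operational evaluation by users
    VALIDATED = 3    # ready for operational use

def ready_for_operational_use(stage: MaturityStage) -> bool:
    """Only fully validated products are cleared for operational use."""
    return stage >= MaturityStage.VALIDATED

print(ready_for_operational_use(MaturityStage.PROVISIONAL))  # False
print(ready_for_operational_use(MaturityStage.VALIDATED))    # True
```

Using an ordered enumeration makes the "operational evaluation versus operational use" distinction explicit: a provisional product, like the April 2012 OMPS ozone record, can be evaluated but not yet relied on.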
Development of JPSS Is Under Way; Critical Decisions and Milestones Are Pending The major components of the JPSS program are at different stages of development, and important decisions and program milestones lie ahead. NASA’s JPSS program office organized its responsibilities into three separate projects: (1) the flight project, which includes sensors, spacecraft, and launch vehicles; (2) the ground project, which includes ground-based data processing and command and control systems; and (3) the free-flyer project, which involves developing and launching the instruments that are not going to be included on the JPSS satellites. Table 3 shows the three JPSS projects and their key components. Within the flight project, development of the sensors for the first JPSS satellite is well under way; however, selected sensors are experiencing technical issues, and the impact of these issues has not yet been determined. The ground project is currently in operation supporting NPP, and NOAA is planning to upgrade selected parts of the ground systems to increase security and reliability. The free-flyer project is still in a planning stage because NOAA has not yet decided which satellites will host the instruments or when these satellites will launch. One of these projects recently completed a major milestone, and another has its next milestone approaching. Specifically, the flight project completed its system requirements review in April 2012, while the ground project’s system requirements review is scheduled for August 2012. Because development of the sensors for JPSS-1 began during the NPOESS era, NASA estimates that, as of March 2012, all of the sensors are at least 60 percent complete. However, selected sensors are encountering technical issues, and the full impact of these issues on cost and schedule has not been determined. Further, the program has not yet made a decision on which launch vehicle will be used. 
NASA and NOAA officials reported that the technical issues thus far are routine in nature, and that they plan to select a launch vehicle by the end of 2012. Table 4 describes the current status of the components of the JPSS-1 flight project. While NOAA ground systems for satellite command, control, and communications and for data processing are currently supporting NPP operations, the agency plans to upgrade the ground systems to improve their availability and reliability. In 2010, we reported that NPP’s ground systems had weaknesses because they were developed using outdated security requirements approved in 1998. These weaknesses were highlighted soon after NPP was launched, when the communications links providing satellite data from the satellite receiver in Svalbard, Norway, to the United States were severed. NOAA immediately established a temporary backup capability, and plans to upgrade its communications systems to establish permanent backup capabilities by the end of 2012. In addition, NOAA plans to enhance the backup capabilities of its data processing system infrastructure by November 2015. The instruments in the free flyer project, including the Total and Spectral Solar Irradiance Sensor and two user services systems (the Search and Rescue Satellite-Aided Tracking system and an Advanced Data Collection system), are currently under development. However, in early 2012, NOAA decided to consider not launching the Total and Spectral Solar Irradiance Sensor as an option for staying within its budget cap. Moreover, the agency is still considering its options for the spacecraft that will carry the other two instruments to space. For example, it is considering contracting for a spacecraft or having the instruments hosted on some other organization’s satellite. Table 5 depicts the status of the components of the free-flyer project. 
JPSS Risk Management Process in Place; Key Risks Remain The JPSS program has a structured risk management process in place and is working to mitigate key program risks; however, NOAA faces key risks involving the potential for satellite gaps and does not yet have mitigation plans. According to best practices advocated by leading system engineering and program management organizations, effective risk management addresses four key areas: preparing for risk management, identifying and analyzing risks, mitigating risks, and providing executive oversight. The JPSS program office has implemented elements of an effective risk management process. Specifically, the program documented its risk management strategy; identified relevant stakeholders and designated responsibilities for risk management activities; established and implemented standards for categorizing and prioritizing risks; instituted a program to identify, track, and mitigate risks; and established a process for regularly communicating risks to senior NASA and NOAA management. The JPSS program is working to mitigate the risks of a lack of a cost and schedule baseline and program office staffing shortfalls, but NOAA has not established mitigation plans to address the risk of a gap in the afternoon orbit or potential satellite data gaps in the DOD and European polar satellite programs, which provide supplementary information to NOAA forecasts. Because it could take time to adapt ground systems to receive alternative satellites’ data, delays in establishing mitigation plans could leave the agency little time to leverage its alternatives. Until NOAA identifies its mitigation options, it may miss opportunities to leverage alternative satellite data sources. Moreover, until NOAA establishes mitigation plans for a satellite data gap, it runs the risk of not being able to fulfill its mission of providing weather forecasts to protect lives, property, and commerce. 
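A program that categorizes, prioritizes, and reports risks to senior management typically maintains some form of risk register. The sketch below is a minimal illustration of that idea only; the class names, the 1-to-5 scoring scale, and the likelihood-times-impact prioritization are all assumptions and do not describe the JPSS program's actual tooling:

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    title: str
    likelihood: int  # 1 (low) to 5 (high) -- assumed scale
    impact: int      # 1 (low) to 5 (high) -- assumed scale
    mitigation: str = ""  # empty string means no mitigation plan exists yet

    @property
    def priority(self) -> int:
        # A common convention: priority as likelihood x impact.
        return self.likelihood * self.impact

@dataclass
class RiskRegister:
    risks: list = field(default_factory=list)

    def add(self, risk: Risk) -> None:
        self.risks.append(risk)

    def top_risks(self, n: int = 3) -> list:
        """Highest-priority risks first, e.g., for reporting to senior management."""
        return sorted(self.risks, key=lambda r: r.priority, reverse=True)[:n]

# Hypothetical entries based on the risks discussed in this report.
register = RiskRegister()
register.add(Risk("No cost/schedule baseline", 4, 4, "Definitize contracts; baseline by Nov 2012"))
register.add(Risk("Program office staffing shortfalls", 3, 3, "Detailees; fill vacancies by Sep 2012"))
register.add(Risk("Afternoon-orbit data gap", 5, 5))  # no mitigation plan yet
print(register.top_risks(1)[0].title)  # Afternoon-orbit data gap
```

The point of the structure is visible in the example: the highest-priority entry is exactly the one with an empty mitigation field, which mirrors the report's finding.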
NOAA Is Working to Mitigate Delays in Establishing Cost and Schedule Baselines NOAA oversaw the establishment of contracts for the JPSS-1 sensors and spacecraft, and NASA is managing the cost, schedule, and deliverables on these contracts using discrete task orders, but the agencies have not established a contractual cost and schedule baseline that would allow them to monitor contractor deliverables within an earned value management system. In addition, program officials have not yet established an overall program baseline that delineates the cost, schedule, and content of the entire program. Under NASA’s acquisition life cycle, a program baseline is due at the key decision milestone scheduled to be completed by July 2013. Managing a program without a baseline makes it more difficult for program officials to make informed decisions and for program overseers to understand if the program is on track to successfully deliver expected functionality on cost and schedule. Program officials acknowledge that the lack of a baseline is a risk, and they are tracking it through their risk management program. Program officials explained that after transferring the contracts from the Air Force to NASA, they needed to definitize the contracts to reflect JPSS program requirements instead of NPOESS program requirements. The JPSS program office has completed this process for three sensors (CrIS, OMPS, and ATMS) and is working to complete the process for one other sensor (VIIRS) by June 2012. After definitizing each contract to JPSS requirements and schedules, NASA and the contractors will perform an integrated baseline review before implementing an earned value management system. NOAA officials reported that they are working to establish contractual baselines as rapidly as practical for each of the contracts. Program officials also plan to establish an overall program baseline. 
Actions planned to mitigate this risk include: establishing a stable and realistic 5-year budget profile, which was completed in December 2011; refining the program requirements to match the expected budget by October 2012; definitizing contracts to address any changes in requirements; and establishing the overall program baseline by the end of November 2012. NOAA Is Working to Mitigate Risks in Program Staffing NOAA and NASA have not yet fully staffed their respective JPSS program offices. While having a knowledgeable and capable program management staff is essential to any acquisition program, it is especially critical given the history of management challenges on the NPOESS program. However, NOAA has not yet filled 18 of the 64 positions it plans for the program office, including those for a program scientist and system engineers for the JPSS satellite, ground systems, and overall mission. In addition, NASA has not yet filled 6 positions it plans for its ground project. Until these positions are filled, other staff members are supporting the workload, and this could delay the schedule for implementing improvements in the ground systems. Both agencies are actively tracking their respective program offices’ staffing and plans for filling vacancies. According to NOAA officials, the agency is mitigating this risk by filling three of the vacant positions with long-term detailees. Further, NOAA plans to fill most of the positions, including that of the technical director, by July 2012. NASA has started the process to fill its vacancies, and plans to fill these by the end of September 2012. NOAA Has Not Established Plans to Mitigate an Expected Gap in Satellite Data Continuity In September 2011, we reported that NOAA was facing a gap in satellite data continuity; the risk of that gap is higher today. 
When NPOESS was first disbanded, program officials anticipated launching the JPSS satellites in 2015 and 2018 (while acknowledging that these dates could change as the program’s plans were firmed up). Over the past year, as program officials made critical decisions to defer work on JPSS in order to keep NPP on track, the launch dates for JPSS-1 and JPSS-2 have changed. Program officials currently estimate that JPSS-1 will be launched by March 2017 and JPSS-2 will be launched by December 2022. NOAA officials acknowledge that there is a substantial risk of a gap in satellite data in the afternoon orbit, between the time when the NPP satellite is expected to reach the end of its life and the time when the JPSS-1 satellite is to be in orbit and operational. This gap could span from 17 months to 3 years or more. In one scenario, NPP would last its full expected 5-year life (to October 2016), and JPSS-1 would launch as soon as possible (in March 2017) and undergo on-orbit checkout for a year (until March 2018). In that case, the data gap would extend 17 months. In another scenario, NPP would last only 3 years, a possibility raised by NASA managers concerned about the workmanship of selected NPP sensors. Assuming that the JPSS-1 launch occurred in March 2017 and the satellite data were certified for official use by March 2018, this gap would extend for 41 months. Of course, any problems with JPSS-1 development could delay the launch date and extend the gap period. Given the history of technical issues and delays in the development of the NPP sensors and the current technical issues on the sensors, it is likely that the launch of JPSS-1 will be delayed. Figure 5 depicts four possible gap scenarios. According to NOAA, a data gap would lead to less accurate and timely weather prediction models used to support weather forecasting, and advanced warning of extreme events—such as hurricanes, storm surges, and floods—would be diminished. 
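The gap arithmetic in the two scenarios above can be reproduced directly; the sketch below is illustrative only, using the dates stated in this report and a simple whole-month approximation:

```python
from datetime import date

def months_between(start: date, end: date) -> int:
    """Whole months from start to end (simple approximation for illustration)."""
    return (end.year - start.year) * 12 + (end.month - start.month)

# Scenario 1: NPP lasts its full 5-year design life (to October 2016);
# JPSS-1 launches in March 2017 and completes a one-year on-orbit checkout in March 2018.
gap_optimistic = months_between(date(2016, 10, 1), date(2018, 3, 1))

# Scenario 2: NPP lasts only 3 years (to October 2014), with the same
# March 2017 JPSS-1 launch and March 2018 data-certification date.
gap_pessimistic = months_between(date(2014, 10, 1), date(2018, 3, 1))

print(gap_optimistic)   # 17 (months)
print(gap_pessimistic)  # 41 (months)
```

Any slip in the JPSS-1 launch date shifts the end point of both calculations outward, which is why the report treats 17 months as a floor rather than an expectation.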
To illustrate this, the National Weather Service performed several case studies to demonstrate how its weather forecasts would have been affected if there were no polar satellite data in the afternoon orbit. For example, when the polar satellite data were not used to predict the “Snowmaggedon” winter storm that hit the Mid-Atlantic coast in February 2010, forecasts made 3, 4, and 5 days before the event predicted a less intense storm, slightly further east, producing half of the precipitation. Specifically, weather prediction models underforecast the amount of snow by at least 10 inches. The agency noted that this level of degradation in weather forecasts could place lives, property, and critical infrastructure in danger. NOAA officials have communicated publicly and often about the risk of a satellite data gap; however, the agency has not established plans to mitigate the gap. NOAA officials stated that the agency will continue to use existing POES satellites, as well as NPP, as long as they provide data and that there are no viable alternatives to the JPSS program. However, it is possible that other governmental, commercial, or international satellites could supplement the data. If there are viable options for obtaining data from external sources, it would take time to adapt NOAA systems to receive, process, and disseminate the data. Until NOAA identifies these options, it may miss opportunities to leverage these satellite data sources. NOAA Has Not Established Plans to Mitigate the Risk That the Polar Satellite Constellation Is Becoming Increasingly Unreliable Since its inception, NPOESS was seen as a constellation of satellites providing observations in the early morning, midmorning, and afternoon orbits. Having satellites in each of these orbits ensures that satellite observations covering the entire globe are no more than 6 hours old, thereby allowing for more accurate weather predictions. 
Even after the program was restructured in 2006 and eventually terminated in 2010, program officials and the administration planned to ensure coverage in the early morning, midmorning, and afternoon orbits by relying on DOD satellites for the early morning orbit, the European satellite program for the midmorning orbit, and NOAA’s JPSS program for the afternoon orbit. However, recent events have made the future of this constellation uncertain: Early morning orbit—As discussed earlier in this report, in early fiscal year 2012, DOD terminated its DWSS program. While the agency has two more satellites to launch and is working to develop alternative plans for a follow-on satellite program, there are considerable challenges in ensuring that a new program is in place and integrated with existing ground systems and data networks in time to avoid a gap in this orbit. DOD officials stated that they plan to launch DMSP-19 in 2014 and DMSP-20 when it is needed. If DMSP-19 lasts 6 years, there is a chance that DMSP-20 will not be launched until 2020. Thus, in a best-case scenario, the follow-on satellites will not need to be launched until roughly 2026. However, civilian and military satellite experts have expressed concern that the DMSP satellites are quite old and may not work as intended. If they do not perform well, DOD could be facing a satellite data gap in the early morning orbit as early as 2014. Midmorning orbit—The European satellite organization plans to continue to launch MetOp satellites that will provide observations in the midmorning orbit through October 2021. The organization is also working to define and gain support for the follow-on program, called the Eumetsat Polar System-2nd Generation program. However, in 2011, NOAA alerted European officials that, because of the constrained budgetary environment, it will no longer be able to provide sensors for the follow-on program. 
Due to the uncertainty surrounding the program, there is a chance that the first European follow-on satellite will not be ready in time to replace MetOp at the end of its expected life. In that case, this orbit, too, would be in jeopardy. Afternoon orbit—As discussed previously, there is likely to be a gap in satellite observations in the afternoon orbit that could last well over one year. While our scenarios demonstrated gaps lasting between 17 and 53 months, NOAA program officials believe that the most likely scenario involves a gap lasting 18 to 24 months. Figure 6 depicts the polar satellite constellation and the uncertain future coverage in selected orbits. The NOAA Administrator and other senior executives acknowledge the risk of a data gap in each of the orbits of the polar satellite constellation and are working with European and DOD counterparts to coordinate their respective requirements and plans; however, they have not established plans for mitigating risks to the polar satellite constellation. As in the case of the anticipated gap in the afternoon orbit, NOAA plans to use older polar satellites to provide some of the necessary data for the other orbits. However, it is also possible that other governmental, commercial, or international satellites could supplement the data. For example, foreign nations continue to launch polar-orbiting weather satellites to acquire data such as sea surface temperatures, sea surface winds, and water vapor. Also, over the next few years, NASA plans to launch satellites that will collect information on precipitation and soil moisture. If there are viable options from external sources, it could take time to adapt NOAA systems to receive, process, and disseminate the data to its satellite data users. Until NOAA identifies these options and establishes mitigation plans, it may miss opportunities to leverage alternative satellite data sources. 
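The best-case DMSP timeline for the early morning orbit described above reduces to simple arithmetic. The sketch below is illustrative only; it uses the 2014 launch year cited in this report and assumes a 6-year satellite lifetime, as in the report's best-case scenario:

```python
dmsp19_launch = 2014    # planned DMSP-19 launch year
design_life_years = 6   # assumed satellite lifetime in the best-case scenario

# DMSP-20 is launched only when DMSP-19 reaches end of life,
# and the follow-on program is needed only when DMSP-20 does.
dmsp20_launch = dmsp19_launch + design_life_years         # 2020
follow_on_needed_by = dmsp20_launch + design_life_years   # roughly 2026

print(dmsp20_launch, follow_on_needed_by)  # 2020 2026
```

The fragility of this chain is the point: if the aging DMSP satellites underperform, every date collapses toward 2014, which is why experts see a near-term gap risk in this orbit.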
Conclusions After spending about $3.3 billion on the now-defunct NPOESS program, NOAA officials have established a $12.9-billion JPSS program and made progress in launching NPP, establishing contracts for the first JPSS satellite, and enhancing the ground systems controlling the satellites and processing the satellite data. JPSS program officials are currently working to calibrate NPP data so that they are useable by civilian and military meteorologists and to manage the development of sensors for the first JPSS satellite. In coming months, program officials face changing requirements, technical issues on individual sensors, key milestones in developing the JPSS satellite, and important decisions on how to accommodate instruments that are not included on the JPSS satellite. While the JPSS program office is working to mitigate risks associated with not having a program baseline or a fully staffed program management office, NOAA has not established plans to mitigate the almost certain satellite data gaps in the afternoon orbit or the potential gaps in the early and mid-morning orbits. These gaps will likely affect the accuracy and timeliness of weather predictions and forecasts and could affect lives, property, military operations, and commerce. Because it could take time to adapt ground systems to receive an alternative satellite’s data, delays in establishing mitigation plans could leave the agency little time to leverage alternatives. Until NOAA identifies its mitigation options, it may miss opportunities to leverage alternative satellite data sources. Recommendations for Executive Action Given the importance of polar-orbiting satellite data to weather forecasts, we recommend that the Secretary of Commerce direct the Administrator of NOAA to establish mitigation plans for risks associated with pending satellite data gaps in the afternoon orbit as well as potential gaps in the early morning and midmorning orbits. 
Agency Comments and Our Evaluation We sought comments on a draft of our report from the Department of Commerce, DOD, and NASA. We received written comments from the Secretary of Commerce, who transmitted NOAA’s comments. In its comments, NOAA agreed with the report’s recommendation and noted that the National Environmental Satellite, Data, and Information Service— a NOAA component agency—has performed analyses on how to mitigate potential gaps in satellite data, but has not yet compiled this information into a report. The agency plans to provide a report to NOAA by August 2012. The department’s comments are provided in appendix II. The department also provided technical comments, which we incorporated as appropriate. While neither DOD nor NASA provided comments on the report’s findings or recommendations, they offered technical comments, which we incorporated as appropriate. Specifically, the Staff Action Officer for the Space and Intelligence Office within the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics provided technical comments both orally and via e-mail, and a commander within the Navy’s Oceanographer staff provided oral technical comments. In addition, the Project Manager of the JPSS flight project—a NASA employee—provided technical comments via e-mail. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We are sending copies of this report to interested congressional committees, the Secretary of Commerce, the Secretary of Defense, the Administrator of NASA, the Director of the Office of Management and Budget, and other interested parties. In addition, this report will be available on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-9286 or at pownerd@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology Our objectives were to (1) evaluate efforts to transfer management and contract responsibilities from the National Polar-orbiting Operational Environmental Satellite System (NPOESS) program to the separate satellite programs being established at the National Oceanic and Atmospheric Administration (NOAA) and Department of Defense (DOD), (2) assess NOAA’s progress in developing the NPOESS Preparatory Project (NPP) satellite and the Joint Polar Satellite System (JPSS), and (3) evaluate NOAA’s efforts to mitigate key project risks. To evaluate efforts to transfer responsibilities from NPOESS to the separate NOAA and DOD programs, we compared the agencies’ plans for establishing program management offices, developing program requirements, and transferring contracts to each agency’s actual accomplishments. We analyzed key program documents, including acquisition decision memorandums, requirements documents, and the management control plan. We observed NOAA’s monthly program management briefings and obtained detailed briefings on efforts to establish a program cost estimate, NOAA’s fiscal year 2013 budget for JPSS, and decisions to remove selected program elements. To assess the reliability of the program’s cost estimate, we compared agency documentation of the program office estimate and the independent cost estimate, and interviewed program officials and cost estimators to understand key aspects of and differences between the estimates. We determined that the estimates were sufficient for our purposes of providing summary data. 
We interviewed program officials from NOAA, DOD, and the National Aeronautics and Space Administration (NASA), to obtain information on transition schedules, progress, program requirements, and challenges. To assess NOAA’s progress in developing the NPP and JPSS satellite systems, we compared NOAA’s plans for key milestones to its actual accomplishments. We reviewed monthly progress reports, draft program schedules, and the NPP operational readiness review package. We observed NOAA’s monthly program management briefings to determine the status of key components. We interviewed both agency and contractor officials, including officials at Ball Aerospace, Inc. and Raytheon Space and Airborne Systems, Inc. We also interviewed key NOAA satellite data users, including officials involved in weather forecasting and numerical weather prediction, to identify their experiences in working with NPP data as well as their plans for working with JPSS data. To evaluate NOAA’s efforts to mitigate key project risks, we compared the agency’s risk management process to best practices in risk management as identified by the Software Engineering Institute. We reviewed NOAA’s program risk lists on a monthly basis to obtain insights into management issues and actions. We interviewed agency and contractor officials to evaluate actions to address each transition risk. In addition, we interviewed NOAA satellite data users to determine the impact of any changes in requirements. We performed our work at NASA, NOAA, and DOD offices in the Washington, D.C., area and at contractor facilities in Los Angeles, California; Aurora, Colorado; and Boulder, Colorado. We conducted this performance audit from May 2011 to June 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments by the Department of Commerce Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Colleen Phillips (Assistant Director), Kathleen Lovett Epperson, Kate Feild, Nancy Glover, Franklin Jackson, and Fatima Jahan made key contributions to this report. Related GAO Products NASA: Assessments of Selected Large-Scale Projects. GAO-12-207SP. (Washington, D.C.: Mar. 1, 2012). Polar Satellites: Agencies Need to Address Potential Gaps in Weather and Climate Data Coverage. GAO-11-945T. (Washington, D.C.: Sept. 23, 2011). Polar-Orbiting Environmental Satellites: Agencies Must Act Quickly to Address Risks That Jeopardize the Continuity of Weather and Climate Data. GAO-10-558. (Washington, D.C.: May 27, 2010). Polar-Orbiting Environmental Satellites: With Costs Increasing and Data Continuity at Risk, Improvements Needed in Tri-agency Decision Making. GAO-09-772T. (Washington, D.C.: June 17, 2009). Polar-orbiting Environmental Satellites: With Costs Increasing and Data Continuity at Risk, Improvements Needed in Tri-agency Decision Making. GAO-09-564. (Washington, D.C.: June 17, 2009). Environmental Satellites: Polar-orbiting Satellite Acquisition Faces Delays; Decisions Needed on Whether and How to Ensure Climate Data Continuity. GAO-08-899T. (Washington, D.C.: June 19, 2008). Environmental Satellites: Polar-orbiting Satellite Acquisition Faces Delays; Decisions Needed on Whether and How to Ensure Climate Data Continuity. GAO-08-518. (Washington, D.C.: May 16, 2008). Environmental Satellite Acquisitions: Progress and Challenges. GAO-07-1099T. (Washington, D.C.: July 11, 2007). Polar-orbiting Operational Environmental Satellites: Restructuring Is Under Way, but Challenges and Risks Remain. GAO-07-910T. (Washington, D.C.: June 7, 2007). 
Polar-orbiting Operational Environmental Satellites: Restructuring Is Under Way, but Technical Challenges and Risks Remain. GAO-07-498. (Washington, D.C.: Apr. 27, 2007). Polar-orbiting Operational Environmental Satellites: Cost Increases Trigger Review and Place Program’s Direction on Hold. GAO-06-573T. (Washington, D.C.: Mar. 30, 2006). Polar-orbiting Operational Environmental Satellites: Technical Problems, Cost Increases, and Schedule Delays Trigger Need for Difficult Trade-off Decisions. GAO-06-249T. (Washington, D.C.: Nov. 16, 2005). Polar-orbiting Environmental Satellites: Information on Program Cost and Schedule Changes. GAO-04-1054. (Washington, D.C.: Sept. 30, 2004). Polar-orbiting Environmental Satellites: Project Risks Could Affect Weather Data Needed by Civilian and Military Users. GAO-03-987T. (Washington, D.C.: July 15, 2003). Polar-orbiting Environmental Satellites: Status, Plans, and Future Data Management Challenges. GAO-02-684T. (Washington, D.C.: July 24, 2002).
Environmental satellites provide critical data used in forecasting weather and measuring variations in climate over time. NPOESS—a program managed by NOAA, DOD, and the National Aeronautics and Space Administration—was planned to replace two existing polar-orbiting environmental satellite systems. However, 8 years after a development contract for the NPOESS program was awarded in 2002, the cost estimate had more than doubled—to about $15 billion, launch dates had been delayed by over 5 years, significant functionality had been removed from the program, and the program’s tri-agency management structure had proven to be ineffective. In February 2010, a presidential task force decided to disband NPOESS and, instead, to have NOAA and DOD undertake separate acquisitions. GAO was asked to evaluate (1) efforts to transfer responsibilities from the NPOESS program to the separate NOAA and DOD programs, (2) NOAA’s progress in developing its satellite system, and (3) NOAA’s efforts to mitigate key project risks. To do so, GAO analyzed program management, contract, cost, and risk data, attended executive program reviews, and interviewed agency and contractor officials. Following the decision to disband the National Polar-orbiting Operational Environmental Satellite System (NPOESS) program in 2010, both the National Oceanic and Atmospheric Administration (NOAA) and the Department of Defense (DOD) made initial progress in transferring key management responsibilities to their separate program offices. Specifically, NOAA established a Joint Polar Satellite System (JPSS) program office, documented its requirements, and transferred existing contracts for earth-observing sensors to the new program. DOD established its Defense Weather Satellite System program office and modified contracts accordingly. However, recent events have resulted in major program changes at both agencies. 
NOAA plans to revise its program requirements to remove key elements, including sensors and ground-based data processing systems, to keep the program within budget. Further, in early 2012, DOD decided to terminate its program and reassess its requirements. Over the past year, NOAA has made progress in developing its satellite system, but critical decisions and milestones lie ahead. In October 2011, the JPSS program office successfully launched a satellite originally called the NPOESS Preparatory Project (NPP). Data from the satellite are currently being calibrated and validated, and NOAA meteorologists started using selected satellite data products in their weather forecasts in May 2012. Further, the three major components of the JPSS program (the flight, ground, and free-flyer projects) are at different stages of development. Within the flight project, development of the sensors for the first JPSS satellite is well under way; however, selected sensors are experiencing technical issues. The ground project is currently in operation supporting NPP and NOAA is planning to upgrade parts of the ground system infrastructure to increase its security and reliability. The free-flyer project, intended to integrate and launch key instruments that could not be accommodated on the JPSS satellites, is still in a planning stage because NOAA has not yet decided which satellites will host the instruments or when these satellites will launch. The JPSS program office has implemented elements of an effective risk management process; however, the program still faces significant risks. It does not yet have a cost and schedule baseline in place, the program office is not yet fully staffed, and there will likely be a gap in satellite data lasting 17 to 53 months from the time NPP is projected to cease operations and the first JPSS satellite begins to operate. 
There are also potential satellite data gaps in the DOD and European polar satellite programs, which provide supplementary information to NOAA forecasts. The JPSS program office is managing the first two risks, but NOAA has not established plans to mitigate potential satellite gaps. Until these risks are mitigated and resolved, civilian and military satellite data users may not have the information they need for timely weather forecasting, thereby risking lives, property, and commerce.
Background FACNET is a governmentwide systems architecture for acquisition based on electronic data interchange (EDI), which is the computer-to-computer exchange of routine business documents using standardized data formats. A key goal of FACNET and the governmentwide EC program is to present a “single face to industry”—making the government look like a single entity rather than a collection of independent departments and agencies. The EC program aims to simplify and standardize doing business by eliminating the need to deal with numerous different agencies’ procurement processes, forms, and rules. FASA requires that agencies award at least 75 percent of eligible contracts through a system that has implemented all the FACNET functions. Contracting offices in agencies that are not in compliance by December 31, 1999, will lose their authority to use simplified acquisition procedures for contracts exceeding $50,000. Agencies can exclude certain contract actions when calculating the percentage of FACNET use: The Federal Acquisition Regulatory (FAR) Council has determined that certain contract actions, such as delivery orders, task orders, and in-scope modifications against established contracts, should not be considered when determining agency compliance with FASA’s 75-percent criterion. Essentially, this means that only contract awards should be counted. FASA authorizes the head of each executive agency to exempt its procuring activities (or portions thereof) from the requirement to implement full FACNET capability based on a determination that such implementation is not cost-effective or practicable. Contracts awarded by exempted activities are not to be considered in determining agency compliance with the criterion. FASA provides that the FAR Council may determine—after considering our report on this subject—that classes of contracts are not suitable for acquisition through a system with full FACNET capability. 
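The compliance test described above reduces to simple arithmetic: count only contract awards (excluding delivery orders, task orders, and in-scope modifications) made by non-exempted activities, and check whether at least 75 percent of those went through FACNET. The sketch below illustrates that calculation; the field names and sample data are hypothetical, and only the exclusion rules come from the report text.

```python
# Illustrative sketch of the FASA 75-percent FACNET compliance test.
# Action types and record fields are assumptions for demonstration only.

EXCLUDED_ACTION_TYPES = {"delivery_order", "task_order", "in_scope_modification"}

def facnet_compliance_rate(actions):
    """Return the share of countable contract awards made through FACNET,
    or None if there are no countable awards.

    Each action is a dict with keys:
      action_type -- e.g., "award", "delivery_order", "task_order"
      exempted    -- True if the procuring activity was exempted by the agency head
      via_facnet  -- True if the award went through a full-FACNET-capable system
    """
    countable = [
        a for a in actions
        if a["action_type"] not in EXCLUDED_ACTION_TYPES and not a["exempted"]
    ]
    if not countable:
        return None
    via_facnet = sum(1 for a in countable if a["via_facnet"])
    return via_facnet / len(countable)

# Example: 3 countable awards, 2 via FACNET; the delivery order and the
# exempted activity's award are excluded from the calculation.
sample = [
    {"action_type": "award", "exempted": False, "via_facnet": True},
    {"action_type": "award", "exempted": False, "via_facnet": True},
    {"action_type": "award", "exempted": False, "via_facnet": False},
    {"action_type": "delivery_order", "exempted": False, "via_facnet": False},
    {"action_type": "award", "exempted": True, "via_facnet": False},
]
rate = facnet_compliance_rate(sample)
meets_criterion = rate is not None and rate >= 0.75
```

In this hypothetical sample the rate is two-thirds, so the agency would fall short of the 75-percent criterion even though a majority of countable awards went through FACNET.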
In our January 1997 report, we questioned the FASA-mandated approach to EC. Also, we observed that since passage of FASA, alternative electronic purchasing methods had become readily available to the government and its vendors. Given these alternative methods, the prescriptive requirements of FASA, and problems with implementation, we questioned the extent to which FACNET made good business sense for simplified acquisitions. We recommended that the executive branch (1) develop a coherent EC strategy and implementation approach incorporating the single-face-to-industry goal and (2) seek legislative changes if FASA’s requirements for FACNET were an impediment to implementing the governmentwide EC strategy. Since then, the President’s Management Council has tasked a high-level management committee to review EC implementation and develop a more integrated federal EC strategy. The executive branch also recently proposed legislative changes to repeal mandated use of FACNET. On July 8, 1997, the Senate approved an amendment to the Fiscal Year 1998 National Defense Authorization Bill that would, among other things, (1) repeal mandated use of FACNET; (2) define EC to include electronic mail or messaging, World Wide Web technology, electronic bulletin boards, purchase cards, electronic funds transfer, and EDI; and (3) ensure that any notices of agency requirements or solicitation for contract opportunities are provided in a form that allows convenient and universal user access through a single governmentwide point of entry. The amendment calls for uniformity in federal EC implementation to the maximum extent that is practicable. The House National Defense Authorization Bill did not contain any language regarding FACNET. Therefore, the Senate amendment will be addressed in conference. This report provides information that could be useful in congressional deliberations. 
Agencies Found Several Contract Types Not Suitable for FACNET Responses from 24 agencies indicated a general consensus among senior federal procurement officials that, for several types of contracts, the use of FACNET is inappropriate, impractical, or inefficient. As discussed below, the agencies provided clear, reasonable, and consistent reasons for excluding contracts from mandatory FACNET processing. Contracts for Which Widespread Public Solicitation Is Inappropriate A primary function of FACNET, mandated by FASA, is to enable agencies to provide widespread electronic public notice of solicitations for contracting opportunities. Several agencies’ senior procurement officials identified procurements for which nationwide public solicitation of offers was inappropriate or ineffective in filling requirements. These officials considered such procurements unsuitable for acquisition through FACNET. A major group of contracts in this category is procurements with “on-site” or local vendor requirements that generally require soliciting competition from vendors in a local area. Construction and building maintenance services contracts, for example, typically require a site visit to understand the work to be performed, examine conditions affecting the work, and accurately estimate the cost of performance. One agency cited its requirement for weed control services as a typical example of a contract that requires a local presence by vendors to bid intelligently and keep costs down for acceptable performance. Another agency noted that purchases of subsistence items are limited to the local commuting area because they require the buyer to view and select the commodities. 
According to procurement officials, nationwide solicitation through FACNET for procurements with “on-site” or local commuting area requirements was often inappropriate because vendors from outside the area could not reasonably be expected to be able to fulfill the requirement, resulting in additional costs to prepare and evaluate offers with no commensurate benefit to vendors or the agency. Widespread public notice may not be required or appropriate for other types of contracts. For example, agencies’ officials noted that awards to Small Disadvantaged Business Concerns, referred to as 8(a) contractors, do not use public solicitation of offers. Transmission of Essential Contract Information Is Impractical or Infeasible Senior procurement officials found FACNET unsuitable for numerous contracts because essential procurement information could not be sent, received, or communicated effectively through FACNET. In particular, senior procurement officials considered FACNET inappropriate for procurements requiring extensive government specifications, lengthy written or oral proposals, sensitive or classified information, technical evaluations, and urgent delivery or performance. For example, contracts for various services, such as research and development efforts, were cited frequently because of the lengthy statements of work and other attachments that were often required. According to procurement officials, transmitting such information through FACNET is difficult, costly, and often infeasible. The Department of Defense (DOD) is working with the National Technical Information Service to address this issue by developing methods for making drawings, specifications, and standards cited in government solicitations available electronically. According to DOD officials, early results of this program appear promising. 
Several procurement officials indicated that procurements involving sensitive information are not suitable for FACNET processing because, in their view, FACNET does not currently provide adequate security. Additionally, acquisitions that require urgent delivery of a service or material are not considered good candidates for FACNET because procurements that are conducted through FACNET would take too long to complete. Procurement officials also cited several types of acquisitions that could not be conducted through FACNET because vendors were not on FACNET. For example, for some procurements with on-site or local commuting area requirements, agencies’ officials indicated that there were few or no local vendors participating in FACNET. Several agencies’ officials considered FACNET impractical for overseas procurements, especially in developing countries, because vendors did not have the technical sophistication or infrastructure to sell via FACNET. Likewise, some procurement officials said it is unreasonable to expect individuals and noncommercial organizations to be on FACNET since they are not set up to sell products or services on a frequent basis and investing in FACNET for a few orders was not considered cost-effective. Alternative Purchasing Methods Are More Efficient Procurement officials from several agencies considered FACNET unsuitable if they found other purchasing methods or EC technologies to be more efficient, less costly, and easier to use. For example, procurement officials noted a number of alternative purchasing methods they considered more economical and easier to use than FACNET. These methods included purchase card procurements and orders placed against electronic catalogs and existing contracts, such as General Services Administration (GSA) supply schedules, GSA Advantage, and governmentwide indefinite delivery/indefinite quantity contracts. 
Several procurement officials stated that the Internet and Web-based technologies are better EC options than FACNET because they are easier to access, have fewer technical limitations, and are relatively inexpensive for agencies and contractors to implement and use. Procurement officials also said that it was generally less expensive and quicker to purchase commercial products and services valued under $25,000 locally using traditional oral and paper-based solicitation methods rather than FACNET when sufficient competition was available. One official noted that for actions between $2,500 and $10,000, which do not require “posting” notices of solicitations, contracting offices without FACNET capability processed these procurements in a few days using locally available suppliers with good past performance records; FACNET capable sites were not able to meet their customers’ needs in the same time frames. Little Information Reported on Agencies’ EC Transactions Available data shows limited and declining use of FACNET for contract awards. However, the lack of governmentwide data on agencies’ use of other EC purchasing methods, such as purchase cards and orders placed electronically against catalogs and indefinite delivery/indefinite quantity contracts, impedes efforts to assess the government’s progress in moving toward doing business electronically and achieving the “single face to industry” goal. In our earlier report, we estimated that in 1995, less than 2 percent of about 2 million federal procurement actions above the micro-purchase threshold and below the simplified acquisition threshold ($2,501 and $100,000) were made through FACNET. The most recent and readily available data from the EC Program Office (January through June 1997) indicated that the volume of procurement actions processed through FACNET had declined, when compared to the same period a year earlier. 
On December 23, 1996, the Administrator for Federal Procurement Policy notified agency senior procurement executives that the EC concept for procurement had been broadened to include orders placed electronically against electronic catalogs and indefinite delivery/indefinite quantity contracts, purchase card use, FACNET transactions, and Web-based contracting actions and requested agencies to report monthly the number and dollar value of EC transactions in each of these categories. The Administrator’s memorandum stated that it is difficult to measure the impact of EC without adequate data. Also, vendors have pointed out that they need information on the volume and value of federal EC purchases to determine whether there are sufficient business opportunities to justify the investments needed to participate. The limited governmentwide EC statistics are posted for public access on the Internet. As of July 25, 1997, only 6 of 21 agencies had submitted monthly statistics on the number and value of their agencies’ FACNET solicitations and orders for January through June 1997. Five of these six agencies also reported data on other EC procurements. Four of the 21 agencies had not reported EC statistics. Thus, no governmentwide data is currently available on the volume and value of all EC procurements. Both GSA and the Office of Federal Procurement Policy commented that they are modifying the Federal Procurement Data System to collect EC statistical information and it may not be available until the year 2000. Conclusions Senior procurement executives identified several classes of contracts not suitable for FACNET. These were contracts where (1) widespread solicitation is inappropriate, (2) transmission of essential information through FACNET is impracticable or infeasible, or (3) alternative procurement methods, including other EC methods, are more efficient. They provided clear, reasonable, and consistent business and technical reasons to support their positions. 
The most recent available data shows that FACNET use has declined since last year. However, comprehensive information is not available on the volume and value of other federal EC procurements. We believe the information in this report further supports our earlier work that showed the government needs the flexibility to implement EC technologies and purchasing methods that make good business sense and are aligned with commercial applications. It also supports our earlier recommendation to develop a coherent and integrated federal strategy and implementation approach for using various EC technologies and purchasing methods to meet agencies’ acquisition needs. That strategy and implementation approach remain to be completed. Scope and Methodology To address our objectives, we asked the Senior Procurement Executives at 25 federal agencies to give us information and explanations about (1) contracts they identified as not suitable for acquisition through FACNET and (2) the extent to which their agencies’ competitive contract awards between $2,500 and $100,000 were generally suitable for acquisition through a system with full FACNET capability. We received responses from 24 agencies. We also interviewed senior officials at the Office of Federal Procurement Policy, DOD, and GSA responsible for the governmentwide EC program. In addition, to assess federal agencies’ use of FACNET and other EC purchasing methods, we reviewed data on FACNET transactions from DOD’s Life-Cycle Information Integration Office and EC statistics submitted by federal agencies to the EC Program Office in GSA from January 1996 through June 1997. We did not independently verify the data. We performed our work between January and August 1997 in accordance with generally accepted government auditing standards. 
Agency Comments In commenting on a draft of this report, DOD, the National Aeronautics and Space Administration (NASA), GSA, and the Office of Federal Procurement Policy generally agreed with our findings. DOD stated that FACNET use will continue, even if a current congressional amendment repeals its mandated use. DOD noted that it is incorporating EC into its business practices within the simplified acquisition threshold. DOD believes agencies’ procurement officials should be allowed to determine for their agencies those classes of contracts not suitable for FACNET. NASA stated that it advocates a strategy that recognizes the variety of users, situations, and transaction types and moves to match them with the appropriate EC technology. NASA noted that the challenge remains to offer alternatives to agencies and their end users that provide attractive and cost-effective reasons for moving forward from a non-EC environment. NASA stated that it is working with other agencies committed to developing a coherent strategy and implementation approach that takes advantage of the Internet, including moving toward a “single face” environment for advertising business opportunities. Both GSA and the Office of Federal Procurement Policy expressed concern about the lack of comprehensive EC data. GSA stated that for the past 2 years the Electronic Commerce Program Office had been requesting monthly data from agencies on their FACNET and non-FACNET EC activities and, as the draft report noted, the response had not been good. GSA indicated that it had recommended to the Office of Federal Procurement Policy that EC statistical information be collected as part of the overall procurement data collection done through the Federal Procurement Data System. The Office of Federal Procurement Policy stated that to ensure more accurate and timely data on EC activities, it planned to modify the Federal Procurement Data System to collect EC statistical information. 
The Office of Federal Procurement Policy and GSA are working to get the change incorporated into the Federal Procurement Data System by fiscal year 1999. NASA’s comments are reprinted in their entirety in appendix II. DOD, GSA, and the Office of Federal Procurement Policy provided oral comments. Agency suggestions to improve the technical accuracy of the report have been incorporated in the text where appropriate. We are sending copies of this report to the Director, Office of Management and Budget; the Secretary of Defense; the Administrators for GSA and NASA; and other officials at the agencies included in our review. Copies will also be made available to others upon request. Please contact me at (202) 512-4587 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix III. List of Responding Agencies Comments From the National Aeronautics and Space Administration Major Contributors to This Report National Security and International Affairs Division, Washington, D.C. Office of General Counsel, Washington, D.C. John A. Carter
Pursuant to a legislative requirement, GAO provided information on the classes of contracts in amounts greater than the micro-purchase threshold and not greater than the simplified acquisition threshold that are not suitable for acquisition through a system with full Federal Acquisition Computer Network (FACNET) capability, focusing on: (1) characteristics of contract actions that agencies found not suitable for FACNET processing and evaluated the reasonableness and consistency of agencies' explanations why they were unsuitable for FACNET; and (2) governmentwide electronic commerce (EC) statistics to determine agencies' use of FACNET and other EC purchasing methods. GAO noted that: (1) senior procurement officials generally found contracts unsuitable for FACNET when: (a) widespread public solicitation of offers was inappropriate; (b) transmitting essential contracting information through the network was not practical or feasible; or (c) alternative purchasing methods were faster and more efficient; (2) the agencies provided clear, reasonable, and consistent business and technical reasons why numerous types of contracts should be excluded from mandatory FACNET processing; (3) available data showed continuing limited use of FACNET for contract awards; (4) however, there is no governmentwide data available on agencies' use of other EC purchasing methods; (5) consequently, it is difficult to assess the government's overall progress in doing business electronically in a standard way; and (6) governmentwide EC statistical information may not be available until the year 2000.
Introduction

Many Persian Gulf War veterans have complained of illnesses since the war’s end in 1991. Over 100,000 of the approximately 700,000 Gulf War veterans have participated in health examination programs established by the Department of Defense (DOD) and the Department of Veterans Affairs (VA). Many of those examined reported health complaints, including fatigue, muscle and joint pain, gastrointestinal problems, headaches, depression, neurologic and neurocognitive impairments, memory loss, shortness of breath, and sleep disturbances. Many veterans claim that their medical symptoms, some of them debilitating in nature, were not present before their service in the Persian Gulf War. Some veterans suspect that their health problems may be linked to chemical or biological warfare agents that Iraq may have used during the Gulf War. Various organizations have researched the causes of Gulf War illnesses—the source of much controversy over the past 7 years. By the end of 1996, DOD and the VA together had funded 82 research projects related to Gulf War illnesses. Despite these efforts, it remains unclear why some Gulf War veterans became ill following their service in the Persian Gulf War. It also remains unclear whether the rates of reported illnesses for veterans that deployed to the Gulf are higher overall than the rates for those that did not deploy or than the rates for the civilian or military population as a whole. Also unexplained are differences in the frequency of symptoms reported by reserve units and active duty units and any correlations between the location of units and the occurrence of particular illnesses. Research designed to answer these and many other Gulf War illnesses-related questions will not be completed for years. Of the 151 current federally sponsored research projects, less than 25 percent have been completed, and many are not scheduled for completion until after 2000.
Establishment of the Persian Gulf Illnesses Investigation Team

Prompted by the continuing controversy over Gulf War illnesses, President Clinton, in 1995, ordered DOD and other federal agencies to reexamine whether possible exposure to chemical or biological agents occurred during the Gulf War. In March 1995, the Deputy Secretary of Defense established the Persian Gulf Illnesses Investigation Team within the Office of the Assistant Secretary of Defense for Health Affairs to explore this question. The Investigation Team was established as DOD began to lose credibility among veterans and veterans’ groups in its efforts to determine the causes of Gulf War illnesses and to address the problems experienced by veterans. The 12-member team included intelligence officers, an Army Chemical Corps officer, a pilot, a chemist, a physician, and a criminal investigator. Beginning in 1991, senior Defense officials had taken the position, in testimony before the Congress and in press interviews, that Iraq did not use chemical or biological weapons during the Persian Gulf War and that no U.S. forces were exposed to chemical or biological agents. DOD officials maintained this position as late as 1994. This position came under attack because both U.S. and foreign detection teams had reported that chemical warfare agents were present on the battlefield. In 1995 and 1996, Central Intelligence Agency and U.N. reports established that during the Gulf War, Iraq had stored rockets filled with sarin, a deadly chemical warfare agent, at an ammunition storage site located at Khamisiyah, Iraq, about 60 miles from Kuwait’s border. In June 1996, DOD announced that U.S. troops at Khamisiyah in March 1991 were likely to have destroyed a bunker of rockets containing chemical agents. By July 1997, DOD acknowledged that U.S. troops near Khamisiyah may have unknowingly been exposed to low levels of sarin when they used demolitions to destroy these rockets.
In the midst of this controversy, DOD became dissatisfied with the results of the Investigation Team’s efforts. The Investigation Team did not have the resources needed to accomplish its mission. For example, it was unable to follow up on more than 1,200 toll-free calls received on DOD’s hot line with Gulf War veterans. In addition, its operation was criticized in the December 1996 report by the Presidential Advisory Committee on Gulf War Veterans’ Illnesses. The report cited, for example, the Investigation Team’s failure to take advantage of its unique access to classified and routine military records to fully investigate and help answer the public’s questions about veterans’ possible exposure to chemical and biological warfare agents. A DOD team asked by the Deputy Secretary of Defense to evaluate DOD’s responses to Gulf War illnesses concluded that DOD’s work in this area needed a broader focus, a strategy for systematically examining the various theories concerning the nature and causes of Gulf War illnesses, and a method of effectively communicating DOD’s findings to U.S. veterans and the public. On November 12, 1996, the Deputy Secretary of Defense established the Office of the Special Assistant for Gulf War Illnesses (OSAGWI).

OSAGWI’s Mission and Implementation Strategy

The goal of restoring public confidence in DOD shaped the mission and organizational focus of OSAGWI. OSAGWI’s mission was broadly defined as ensuring that (1) veterans of the Gulf War are appropriately cared for, (2) DOD is doing everything possible to understand and explain Gulf War illnesses, and (3) DOD puts into place all required military doctrine and personnel and medical policies and procedures to minimize any future problems from exposure to chemical and biological warfare agents and other environmental hazards.
Although OSAGWI’s mission statement charges it with ensuring that veterans are appropriately cared for, specific responsibility for providing health care to servicemembers still on active duty and for conducting the health research program continues to reside with the Office of the Assistant Secretary of Defense for Health Affairs. Similarly, VA remains the primary health care provider for those who have left military service. OSAGWI officials told us, however, that they assist servicemembers and veterans with health care matters related to Gulf War illnesses by providing them with referrals to sources of health care or helping them with the registration and examination processes associated with DOD’s Comprehensive Clinical Evaluation Program or the VA’s Persian Gulf Registry. OSAGWI also works with the Assistant Secretary of Defense for Reserve Affairs to (1) help ensure that reservists receive all entitled benefits and (2) recommend changes to legislation or rules where needed. At the time of our review, OSAGWI believed that its core activity involved investigating and reporting on incidents of possible exposure to chemical and biological warfare agents and investigating related military operations during the Gulf War. After OSAGWI has completed its investigation of an incident, the investigator writes a summation document called a case narrative. The purpose of OSAGWI’s case narratives is essentially to put before the American people all of the facts that OSAGWI has learned from its investigation of an incident. The case narrative, a document updated as new evidence becomes known, is to contain all important investigative facts and OSAGWI’s assessment—in terms of “definitely,” “likely,” “indeterminate,” “unlikely,” or “definitely not”—of the likelihood that servicemembers were exposed to chemical or biological warfare agents.
The standard OSAGWI used for its assessments was whether all available facts would lead a reasonable person to conclude that a chemical or biological warfare agent was or was not present. As of January 1, 1999, OSAGWI had published a total of 19 reports—13 case narratives, 2 environmental exposure reports, and 4 information papers. At that time OSAGWI also had 27 active investigations under way. Appendix III lists OSAGWI reports and their dates of publication as well as OSAGWI’s active investigations.

Objective, Scope, and Methodology

On July 8, 1997, the Ranking Minority Member of the House Committee on Veterans Affairs asked us to examine OSAGWI operations. Specifically, we were asked to (1) describe DOD’s progress in establishing an organization to address Gulf War illnesses issues and (2) evaluate the thoroughness of OSAGWI’s investigations into and reporting on veterans’ potential exposure to chemical or biological agents during the Gulf War. We did not review OSAGWI activities to coordinate and monitor research on the causes of Gulf War illnesses because this subject is addressed by other reviews. To determine DOD’s progress in establishing an organization to address Gulf War illnesses issues, we obtained briefings from OSAGWI officials covering the range of activities performed to fulfill their mission objectives and reviewed associated documentation. OSAGWI had issued eight case narratives at the time we began our review. It pursued these eight cases first because they involved incidents that were the most prominent and controversial at the time. To evaluate the thoroughness of OSAGWI’s investigations and reporting on veterans’ possible exposures to chemical or biological warfare agents, we reviewed six of these eight case narratives. The case narratives we selected for review were (1) “Reported Mustard Agent Exposure”; (2) “U.S.
Marine Corps Minefield Breaching”; (3) “Fox Detections in an Ammunition Supply Point (ASP) Orchard”; (4) “Al Jubayl, Saudi Arabia”; (5) “Al Jaber Air Base”; and (6) “Reported Detection of Chemical Agent, Camp Monterey, Kuwait.” We did not review the case narrative about the alleged exposure to chemical warfare agents at Khamisiyah, Iraq, because it was already being heavily reviewed by other organizations, such as the Presidential Advisory Committee on Gulf War Veterans’ Illnesses and the Senate Committee on Veterans Affairs’ Special Investigation Unit. We also did not review the “Possible Chemical Agent on SCUD Missile Sample” case narrative because it appeared to be less controversial than the other case narratives. In reviewing each case narrative, we generally used as criteria OSAGWI’s methodology, which had itself been derived from the United Nations and other international community protocols for investigating chemical warfare incidents. This methodology included (1) substantiating the incident by searching for documentation from operational, intelligence, and environmental logs; (2) documenting the medical reports related to the incident; (3) interviewing appropriate people; (4) obtaining information available to external organizations; and (5) assessing the results. We also used the criterion that the case narrative should accurately and fully disclose all materially significant information relevant to the investigation of the incident in order to avoid any appearance that OSAGWI was selectively reporting what had actually happened. We initially traced each statement in the published case narrative to its underlying supporting document to identify the accuracy and completeness of the text in the narrative. For those statements missing adequate supporting documentation, we requested that OSAGWI provide us with the appropriate documentation. 
We also reviewed additional documentation collected by the OSAGWI investigators in performing the investigation, even though some of this documentation might not have been cited in the published narrative. We looked for any inconsistencies in information that was not addressed in the published narrative. In addition, for the selected case narratives, we contacted 71 individuals interviewed by OSAGWI that were key sources of information and requested that they verify the accuracy and completeness of both the OSAGWI case narrative and the OSAGWI write-up of the investigator’s discussions. We also contacted some key participants not originally interviewed by OSAGWI to determine whether other relevant information was available that might affect OSAGWI’s assessment of possible exposures to chemical warfare agents. Finally, we contacted several Gulf War veterans organizations, including the following: the American Legion; the Disabled American Veterans; the Veterans of Foreign Wars; the National Gulf War Resource Center; GulfWatch; the Desert Storm Justice Foundation; the Operation Desert Storm/Shield Association; the Gulf War Veterans of Long Island, New York; and the Chronic Illnesses Net for Persian Gulf Veterans. We asked them to provide us with any information they had that refuted or added to the OSAGWI information. We did not systematically approach veterans’ groups to obtain their assessments of overall OSAGWI effectiveness because this was beyond the scope of our review. To further verify the case narratives, we interviewed officials and obtained pertinent documentary evidence from officials at the following locations: OSAGWI, located in Falls Church, Virginia; the U.S. Army Chemical and Biological Defense Command at Aberdeen, Maryland; the U.S. Army Chemical Center and School at Ft. 
McClellan, Alabama; the Office of the Surgeon General of the Navy, Washington, D.C.; the Naval Health Research Center, San Diego, California; the Department of Veterans Affairs, Washington, D.C.; the Deployment Surveillance Team, which operates the Comprehensive Clinical Evaluation Program, Falls Church, Virginia; and the U.S. Army Gulf War Declassification Project, Falls Church, Virginia. We conducted our review from September 1997 to January 1999 in accordance with generally accepted government auditing standards.

OSAGWI Has Made Progress in Addressing Issues Related to Gulf War Illnesses

In the face of severe criticism by veterans, veterans groups, and others of its handling of Gulf War illnesses issues, DOD committed additional resources to its efforts to determine the cause of veterans’ health problems. With greater resources and a much broader mandate than its predecessor, OSAGWI has made significant progress in reestablishing communications between DOD and veterans. In addition, OSAGWI is actively engaged in identifying improvements DOD needs to make to protect servicemembers on contaminated battlefields.

DOD Increases Emphasis on Determining Cause of Gulf War Veterans’ Health Problems

DOD is investing significantly more resources for OSAGWI’s investigations and outreach efforts than it did for the Persian Gulf Illnesses Investigation Team. In 1996, the Investigation Team operated with a staff of 12 persons and a budget of $4.1 million. In contrast, as of October 9, 1998, OSAGWI had a staff of about 200 persons and a fiscal year 1998 budget of $29.4 million. In addition, OSAGWI was given much broader authority than the Investigation Team. Finally, OSAGWI reports directly to the Deputy Secretary of Defense; the Investigation Team reported to the Assistant Secretary of Defense for Health Affairs.
OSAGWI officials said that with an adequate budget and sufficient operating authority within DOD, they were generally unconstrained in their efforts to pursue OSAGWI’s mandate. According to these officials, OSAGWI’s operations have been fully funded, and OSAGWI has had largely unrestricted access to personnel, files, and other data necessary for its work. For example, OSAGWI has had full access to classified information from the military services and intelligence agency sources. To date, OSAGWI has over 12 million pages of classified information in its computerized database and approximately 500,000 additional pages of classified data in hard-copy format. The Special Assistant (the head of OSAGWI) has been free to staff OSAGWI according to his needs. This authority has made it possible for him to obtain the expertise needed for OSAGWI’s investigations. From the start, OSAGWI management decided to make extensive use of contractors to quickly obtain personnel with specific expertise and maintain the flexibility to change the mix of staffing as needed. By October 9, 1998, 173 (87 percent) of OSAGWI’s personnel were contractor employees. As needed, OSAGWI has obtained specialized expertise from individuals in various governmental agencies, such as the Central Intelligence Agency, the Defense Intelligence Agency, and the Army’s Chemical and Biological Defense Command. OSAGWI also has the authority to contract with private organizations to perform specialized functions.

OSAGWI Has Improved Communications With Veterans

A key element of OSAGWI’s attempt to regain credibility with veterans, veterans’ organizations, and the public was to improve communications with them. OSAGWI recognized that major improvements were needed from earlier DOD efforts to listen to veterans’ concerns, incorporate the information they provided into DOD’s investigations, and help provide health referral services to veterans.
Our review confirmed that OSAGWI has made significant progress in establishing communications with veterans and others. OSAGWI established an E-mail address and encouraged veterans and others to use both this and the DOD toll-free hotline to communicate with OSAGWI regarding Gulf War illnesses issues. Within the first year of operation, it received almost 1,200 letters and 2,700 E-mail messages. OSAGWI staff contacted over 3,900 veterans through personal telephone calls, which included the vast majority of the Investigation Team backlog of unanswered calls from 1,200 veterans. According to OSAGWI, as of January 1, 1999, it had received 2,850 letters and 4,906 E-mail messages and answered 2,803 and 4,866, respectively. OSAGWI used a staff specifically trained to deal with Gulf War veterans’ concerns, obtain information from veterans, provide information about OSAGWI activities, and make referrals for those needing medical support from DOD or VA. OSAGWI uses a variety of methods to disseminate information on its operations. For example, it uses a Web site called GulfLINK on which it publishes its case narrative reports, information papers, and much of the supporting documentation used in its investigations. OSAGWI reports that this site typically receives over 60,000 inquiries each week. OSAGWI also publishes a bimonthly newsletter called GulfNEWS. Over 12,000 individuals receive the newsletter. OSAGWI’s leadership and staff have met with veterans at 18 town hall meetings and made appearances at 41 national veterans conventions. In addition, OSAGWI officials frequently meet with veterans and military service organizations to discuss Gulf War illnesses topics of interest to them. Finally, OSAGWI communicates directly with veterans that are affected by its investigations. After OSAGWI completes an investigation and publishes the corresponding case narrative, it sends to each affected veteran a letter that contains a synopsis of the investigation’s results. 
For example, following its investigation of the potential chemical warfare agent exposure in Khamisiyah, Iraq, OSAGWI sent letters to 97,837 veterans concerning the possibility that they might have been exposed to low levels of sarin, a chemical warfare agent.

OSAGWI Has Identified Chemical and Biological Warfare Force Protection Issues Requiring Attention

According to OSAGWI officials, OSAGWI must go beyond investigating and reporting on possible veterans’ exposures to chemical or biological warfare agents and identify ways to better protect servicemembers from nontraditional battlefield threats. From its investigations and reports on possible veterans’ exposures to chemical, biological, or environmental agents, OSAGWI has identified force protection issues that need improvement. These lessons learned generally fall into the following three categories: how to build trust and confidence in DOD, how to better account for what happened on the battlefield, and how to better protect servicemembers on the battlefield.
Specific examples of the lessons learned include the need for

- institutionalizing a veterans’ outreach capability after OSAGWI is disestablished;
- improving systems for tracking troop movements during a conflict so that accurate data is available to show where individuals or units were located on the battlefield at any point in time;
- improving wartime records development and post-war records management systems and addressing issues such as the lack of a uniform records management program for joint commands;
- improving chemical and biological warfare agent detection equipment to make it less prone to false alarms and requiring doctrinal changes to collect and retain detector-produced printouts of detections;
- implementing techniques to better safeguard the health of deployed troops, such as deploying forward field laboratories early and taking samples to determine whether contamination may have occurred subsequent to the use of depleted uranium ammunition; and
- improving and implementing depleted uranium training programs.

OSAGWI is presently working with DOD agencies to implement the lessons learned. Discussions by the Special Assistant with the Director of the Joint Staff and the military service Chiefs of Staff resulted in revised Joint Staff policy concerning record-keeping by joint commands. OSAGWI was also instrumental in developing a DOD-initiated requirement for the military services to review their depleted uranium training programs. We did not review what impact OSAGWI’s lessons learned have had toward making changes within DOD. Until recently, OSAGWI had no office for monitoring and measuring the extent to which OSAGWI lessons learned were being acted upon. In October 1998, the Special Assistant created a new OSAGWI directorate to focus attention on ensuring that lessons learned are effectively communicated to and implemented by the responsible DOD agencies.
Some Case Narratives Have Investigative and Reporting Weaknesses

We reviewed six of the eight case narratives OSAGWI had published at the time we began our review to evaluate the thoroughness and accuracy of OSAGWI’s investigations. OSAGWI generally followed its investigation methodology and used appropriate investigative procedures and techniques. However, we found significant weaknesses in the scope and quality of OSAGWI’s investigations for three of the six cases: the Reported Exposure to Mustard Agent, the Marine Minefield Breaching, and the Al Jubayl, Saudi Arabia, case narratives. Also, OSAGWI did not use DOD or Department of Veterans Affairs medical databases on Gulf War illnesses in conducting any of the six investigations. Despite the weaknesses we noted, in all but one case—the Marine Minefield Breaching case—we found no basis to question OSAGWI’s determinations of the likelihood that chemical warfare agents were present. Except for failing to take advantage of the VA and DOD medical databases, we did not find significant weaknesses in the remaining three cases: the Camp Monterey, the Al Jaber Airfield, and the ASP Orchard case narratives. In investigating these cases, OSAGWI followed its methodology, identified and interviewed important witnesses, appropriately used information from other key sources, included all important information in the case narratives, and accurately presented the information found. These investigations were performed in a generally thorough manner, and the evidence collected by OSAGWI supported its assessments. OSAGWI officials told us that they have revised their internal review processes for conducting and reporting investigations.
They said that (1) improvements to these processes have evolved since the publication of the six case narratives we reviewed, (2) some of the process revisions were influenced by the findings we reported as our review progressed, and (3) enhancements to their processes would considerably reduce the recurrence of similar weaknesses in future case narratives.

OSAGWI’s Investigations and Reporting Procedures Have Various Weaknesses

Our review of the six selected case narratives disclosed some weaknesses in the investigations and in the accuracy and completeness of OSAGWI’s reporting. OSAGWI’s investigations were usually conducted in accordance with the established methodology. OSAGWI also generally identified and interviewed the appropriate witnesses, obtained relevant evidence and information, accurately documented witness testimonies, and otherwise generally used appropriate investigative techniques and procedures. However, we found that three of the six selected case narratives still contained significant investigative and reporting problems. The types of problems varied. In three of the six case narratives, we found investigative problems such as failures to (1) follow up with appropriate individuals to confirm key evidence, (2) identify or ensure the validity of key physical evidence, (3) include important information, and (4) interview key witnesses. Following is a more detailed description of the three case narratives containing most of these weaknesses.

Case Narrative on Reported Exposure to Mustard Agent

This case narrative addresses the reported exposure of an individual soldier to mustard agent while he was exploring an Iraqi bunker. OSAGWI assessed this incident as a “likely” exposure. OSAGWI’s assessment of this case has been highly controversial.
Some veterans organizations and others believe that the evidence presented in OSAGWI’s case narrative and the Army’s presentation of the Purple Heart medal to this soldier for his injuries warranted an assessment of “definite” exposure. However, we found that this case was affected by many investigative and evidentiary problems. Some of these are more closely associated with shortcomings in DOD procedural practices during the Gulf War than with how OSAGWI did its investigation. Despite the problems identified, we believe that OSAGWI’s original assessment of “likely” exposure remains appropriate for this case.

Incident Synopsis

According to OSAGWI’s case narrative, the soldier (an Army armored cavalry scout) was exploring enemy bunkers in southeastern Iraq near Kuwait’s border on March 1, 1991. He entered one bunker through a tight passageway and twice brushed against the bunker’s doorway and wall. About 8 hours later, he began to experience a stinging pain on the skin of his left upper arm. Three hours later, blisters had formed there. About 15 hours after the exposure, the company medic checked the soldier’s blisters and suspected a heater burn. Eight hours later, after more blisters had formed on the soldier’s arm, aid station medical personnel suspected he might be a casualty of blister agent, treated him, and evacuated him to the company support battalion. There, an Army physician photographed the blisters and confirmed the diagnosis of exposure to a blister agent. An Army chemical officer also observed the soldier’s blisters and examined his clothing. He observed a wet spot on the soldier’s coveralls. The officer took the coveralls to a Fox vehicle for testing. From its tests on March 2, 1991, the Fox vehicle reportedly confirmed the presence of a mustard chemical warfare agent. After this positive test, the soldier’s coveralls were buried at the scene in Iraq as contaminated waste.
On March 3, 1991, a senior medical officer (a physician and an expert in chemical warfare agents who was also at the time the Commander of the U.S. Army Medical Research Institute of Chemical Defense) examined the soldier’s blisters and concluded that they had been caused by exposure to a liquid mustard agent. This officer based his diagnosis largely on (1) the latent period of 8 hours between exposure and the first symptoms, which is characteristic of mustard exposure and (2) the absence of any other known chemical compounds present on the battlefield that have this characteristic. On March 4, 1991, following an order from chemical officers at the division level to confirm the positive results from the first day of Fox vehicle testing, tests on the soldier’s flak vest were performed by two Fox vehicles—apparently because the vest had not been buried along with the coveralls. Initially, both Fox vehicles registered the potential presence of chemical warfare agents, but only one was apparently able to confirm the presence of mustard agent. At the bunker complex where the soldier was injured, a Fox vehicle also initially detected a chemical warfare agent but was unable to confirm the presence of mustard or any other chemical warfare agent. The case narrative reported that an in-theater analysis of the soldier’s urine tested positive for thiodiglycol, a breakdown product of mustard agent. It also reported that a second urinalysis was performed by the U.S. Army Medical Research Institute of Chemical Defense at Aberdeen Proving Ground, Maryland. This analysis found no evidence of thiodiglycol. Clothing samples were also sent to the U.S. Army Chemical Research, Development and Engineering Center for analysis. Tests of these items also revealed no evidence of any chemical warfare agent. However, the negative test results from one of the urinalyses were not considered unusual due to the low level of the exposure. 
OSAGWI based its assessment of “likely” exposure primarily on the following factors: (1) the medical assessments of two physicians who examined the soldier—a senior medical officer and a physician who had recently been trained to identify chemical warfare agent injuries; (2) the latent period of 8 hours between the soldier’s exposure and his first symptoms, which is consistent with exposure to mustard agent; and (3) the positive detections of mustard agent made in-theater from analyses of the soldier’s clothing and urine.

Our Review of OSAGWI’s Investigation

We agree with OSAGWI’s assessment that exposure to a chemical agent was “likely.” However, we found several investigative procedural problems with this case, primarily concerning insufficient follow-up with witnesses, failure to interview key officials about tests conducted on the soldier’s clothing, and uncertainties about the identity and validity of key physical evidence sent to the United States for testing. First, information we discovered causes us to question the existence of the soldier’s positive in-theater urinalysis for mustard agent. OSAGWI based the existence of this test on an Army Central Command message reporting a positive in-theater test for thiodiglycol. However, OSAGWI was unable to find any documented test results from this urinalysis, and OSAGWI investigators did not perform sufficient follow-up with the involved individuals to verify that this test had actually taken place. In discussing what OSAGWI knew about the positive in-theater urinalysis, we learned that OSAGWI had not interviewed either the senior medical officer or the officer who wrote the message describing the positive in-theater analysis during its investigation. Instead OSAGWI relied upon the senior medical officer’s testimony to the Presidential Advisory Committee, his medical journal article, and his review of OSAGWI’s draft case narrative. However, this procedure failed to identify important information.
In early 1998, our subsequent interviews with the senior medical officer and OSAGWI’s interviews with him revealed that he was unaware of the existence of any in-theater urinalysis involving the soldier. He also stated that, because of his position in the theater as the head of a team of scientists responsible for assessing any chemical casualties, he would have known about the existence of any positive urinalysis performed there. We then contacted the officer who had written the Army Central Command message and asked him about his basis for reporting the positive urinalysis. He told us that his message was based on 3rd Armored Division reports that the senior medical officer had found thiodiglycol in the soldier’s urine specimen. The available evidence is thus contradictory and insufficient to establish that this test actually occurred. Second, the results of the tests conducted on March 2, 1991 (the first day of testing), for mustard agent on the soldier’s clothing cannot be confirmed with the available documentation, and OSAGWI did not interview some key officials involved in the case about the tests. According to the Commander of the Fox vehicle involved, the Fox tests on the soldier’s clothing conducted on March 2, 1991, indicated the presence of blister agent on the soldier’s coveralls. However, the Fox printout of the test results was apparently lost. We located and interviewed the Fox test operator involved, who told us that several tests were conducted on the soldier’s clothing that day and that there was one positive confirmation for mustard agent. During our review, OSAGWI found a printout from one of these tests in its files, but it was negative for chemical agent. We noted that this printout had not been logged into OSAGWI’s document receipt system. We also noted that OSAGWI had never interviewed the Fox vehicle Commander in person or the operator who conducted the tests. 
OSAGWI relied upon information provided by E-mail from the Commander of the Fox vehicles involved because he was then stationed in Germany and could not easily be interviewed in person. OSAGWI said it did not interview the test operator because it could not locate him. On the second day of testing, the Fox Commander returned with both the original and a second Fox vehicle to confirm the first day’s positive results on the soldier’s clothing. One of the Fox vehicles was unable to confirm the presence of mustard agent on the soldier’s flak vest because of a high concentration of oil products on the vest. The other Fox vehicle, whose detailed confirmatory procedure was videotaped by a crewmember but for which the printout is unavailable, did show the presence of mustard agent. DOD sent the printout from the original Fox vehicle and the videotape from the second one to the U.S. Army Chemical and Biological Defense Command at Aberdeen Proving Ground, Maryland, for analysis. A Command expert found that the surviving printout did not confirm the presence of chemical warfare agent when the detailed confirmatory procedure was performed. However, after examining the printout and viewing the videotape, this official concluded that the incident had involved an actual mustard agent detection. We found other procedural discrepancies that raise questions regarding this case. First, DOD did not adequately identify or ensure the validity of important physical evidence. We noticed a difference between the inventory of items that the Commander of the Fox vehicles had reportedly packaged for shipment back to the United States for analysis and the items that were received at the U.S. Army Chemical Research, Development and Engineering Center. The Commander reported on his inventory list that he did not include samples from the soldier’s coveralls since they were unavailable; however, the Center’s inventory showed receipt of such samples.
When we interviewed the Commander, he told us that he believed the sample material was in fact from his own protective suit, which he wore during the Fox vehicle testing. These discrepancies raise the possibility either that someone recovered the soldier’s coveralls and then repackaged the contents for shipment to the United States or that at least some of the clothing sent back to the United States for testing was not the soldier’s. The circumstances surrounding the testing of the soldier’s clothing in-theater thus remain unclear. It is impossible to determine whether the samples are actually from this soldier. In discussing the investigative weaknesses we found, the OSAGWI lead investigator told us that this investigation had begun under the Investigation Team before OSAGWI was established and that the case was carried over to OSAGWI. She said that the case’s outcome appeared to be obvious on the surface—particularly since the soldier had received a medical diagnosis indicating exposure to mustard agent. She said that the investigation process at OSAGWI has matured since this case narrative was published. She also said that OSAGWI would do more cross-checking of the facts if this investigation were being done today. Despite the investigation’s shortcomings, we believe that OSAGWI’s assessment of “likely” exposure to a chemical warfare agent in this case is reasonable. The senior medical officer’s clinical diagnosis that the soldier’s injuries were caused by exposure to mustard agent is significant in that this expert in chemical warfare agents made his assessment at the time of the injury and continues to believe that the latent period of 8 hours from exposure to the first symptoms supports his diagnosis. In addition, an expert at the U.S.
Army Chemical and Biological Defense Command, after reviewing the Fox vehicle printout and viewing a videotape of another Fox vehicle conducting tests, concluded that this incident involved a valid detection of mustard agent. However, we believe the lack of confirmation of exposure through urinalysis or retained confirmatory printouts from the Fox vehicles involved prevents OSAGWI’s exposure assessment in this case from being classified as “definitely.”
Marine Minefield Breaching Case Narrative
This case narrative addresses reports that U.S. Marines might have been exposed to chemical warfare agents while breaching minefield barriers on the first day of Operation Desert Storm’s ground war. OSAGWI concluded that the presence of chemical warfare agents was “unlikely” during this incident, in part because it found that no mechanism was present for delivering such agents. However, we found that OSAGWI overlooked information indicating that a means for delivering chemical warfare agents might have been present, and that the case narrative does not include other relevant information indicating that chemical warfare agents might have been present. We believe that these shortcomings are sufficient to cause a reasonable person to question OSAGWI’s assessment.
Incident Synopsis
On February 24, 1991, the first day of Operation Desert Storm’s ground war, Marine Corps forces breached two rows of minefields that stretched for miles near the border between Saudi Arabia and Kuwait. As they passed through the first row of minefields, two Fox vehicles (one assigned to units of the 1st Marine Division and another assigned to the 2nd Marine Division) indicated potential detections of chemical agents. The detection by the 1st Division’s Fox vehicle was described as a trace detection of such a small magnitude that no official report of the detection was made and no Fox printout was kept to document the detection.
OSAGWI concluded that the presence of chemical warfare agents in the 1st Division area was “unlikely.” The detection by the 2nd Division’s Fox vehicle, however, indicated the potential presence of mustard, sarin, and lewisite—all chemical warfare agents. In this instance, the Fox vehicle printouts were kept, but because of the hostile environment, the Fox vehicle was not stopped to perform a more detailed confirmation procedure to conclusively determine whether chemical warfare agents were present. One possible chemical warfare agent injury was reported during the breaching: a 2nd Division Marine riding in an amphibious assault vehicle at the time of the detection claimed his hands were burned, presumably by a chemical warfare agent, as he closed the vehicle hatch after hearing the Fox vehicle alert by radio. However, the validity of this reported injury was controversial. Some witnesses supported the Marine’s claim that his hands were blistered, but the examining physician stated that the Marine had no injury of any kind. In investigating the breaching incident, OSAGWI interviewed key participants in the breaching operations, including members of the Fox vehicle crews, chemical warfare specialists, some unit commanders, the Marine who claimed to have been injured, other Marines from the injured man’s unit, and the medical personnel who examined him. The investigators also reviewed unit logs and other pertinent documentation, including classified data, and consulted with Fox vehicle and chemical weapons technical experts. On the basis of reviews of the 2nd Division Fox vehicles’ printouts by three different laboratories, OSAGWI concluded that the Fox vehicle detections were false alarms, probably caused by the high concentrations of smoke from oil well fires and petroleum particles in the atmosphere. OSAGWI further indicated that except for the possible injury to one Marine, no other troops claimed chemical warfare agent injuries.
In its overall assessment of the incident, OSAGWI stated that the presence of chemical warfare agent was “unlikely.” In supporting its assessment, OSAGWI stated that since no chemical land mines were ever found in Kuwait and since no artillery fire was encountered by the Marines who breached the first row of mines, there was no delivery mechanism for chemical warfare agents.
Our Review of OSAGWI’s Investigation
OSAGWI overlooked a key piece of evidence and did not report other significant information in its case narrative. OSAGWI concluded that the Marines had encountered no Iraqi artillery fire as they moved through the first row of Iraqi minefields. This conclusion was based on comments made by the commanding officer and others of the Marine company that carried out the minefield breach where the 2nd Division Fox vehicle reported the presence of a chemical warfare agent. However, our review of OSAGWI files disclosed a Marine Corps unit log entry indicating that Iraqi artillery and mortar fire was present during the first minefield breach. The OSAGWI investigator told us that he had inadvertently overlooked this information during his investigation. We also interviewed Marines who told us that Iraqi artillery and mortar fire was present as they passed through the first minefield. Consequently, we believe a delivery mechanism for chemical warfare agent may have been present. Also, the timing of events was significant. For example, the log entry indicating that enemy artillery was encountered was made around 6:15 a.m. on February 24, 1991. The Fox vehicle detection was made at 6:22 a.m. that same day. The Marine who claimed to be injured was riding in an amphibious assault vehicle that was following the Fox vehicle. He said his injury occurred just after he heard the Fox vehicle’s report of the chemical warfare agent detection over the radio.
We also learned that the Commander of the 2nd Division’s Fox vehicle told OSAGWI investigators that chemical detection paper taped to the outside of the Fox vehicle was noted to have changed colors after passing through the first minefield (indicating possible contact with a chemical agent). However, this information was not reported in OSAGWI’s narrative. The OSAGWI investigator said that this information was omitted because technical experts had told him that the detection paper could change colors because of the heavy concentrations of petroleum products in the air coming from the oil well fires the Iraqis had set. Furthermore, as mentioned in the case narrative, three different laboratories had reviewed the Fox vehicle printout and concluded that the detections were probably false alarms. The narrative did not point out, however, that one of the three laboratories had also said that it could not rule out the possibility of the presence of a chemical warfare agent. Finally, a classified document in OSAGWI’s files contained intelligence evidence not included in the narrative that could support the possibility of an Iraqi chemical attack. This information, some of which has since been declassified, refers to a report indicating the end of a chemical attack on February 24, 1991, the same date as this incident. OSAGWI was aware of this information, but because of its vagueness, unknown origin, fragmentary nature, and time of report (about 4 hours after the breaching event), it was not given much weight during OSAGWI’s analysis. We agree that the potential impact of this evidence is unclear. However, when combined with the other information we have cited, it provides additional cause for further investigation by OSAGWI, regardless of its potential for association with this case. We believe that OSAGWI’s assessment of “unlikely” in this case is subject to question. 
While the information we found does not conclusively prove that chemical warfare agents were present, it does increase the potential that some might have been present. In our opinion, the weaknesses we found in this case narrative are sufficient to warrant OSAGWI’s reconsideration of its assessment. We discussed our findings with OSAGWI investigators and officials, and they agreed that this information needs to be evaluated. OSAGWI officials told us they would include this information in their follow-up investigation of the minefield breaching incident and would address the questions we raised.
Al Jubayl, Saudi Arabia, Case Narrative
This case narrative addresses three significant events occurring in the Al Jubayl area during the Persian Gulf War. OSAGWI concluded that the presence of chemical warfare agents was “unlikely” for one of the events and “definitely did not occur” in the remaining two. We believe that the available evidence generally supports OSAGWI’s assessment, but OSAGWI is still performing work regarding alternate explanations for some events affecting this case. However, we also found that the case narrative omits important information about the unusually high rates at which veterans have reported post-war medical symptoms they associate with the incidents involved in this case. Furthermore, OSAGWI did not adequately identify and coordinate some of this information, which could help resolve research questions about whether there is a correlation between high rates of reported Gulf War illnesses symptoms and Gulf War duty at Al Jubayl.
Incident Synopsis
Al Jubayl is the largest of eight planned industrial cities in Saudi Arabia. It consists of an industrial zone and port facilities, as well as residential and other noncommercial areas.
The Al Jubayl area was developed during the early 1980s along what was then essentially undeveloped coastline and was designed to take advantage of Saudi Arabia’s vast oil resources. Al Jubayl played a crucial role during the Gulf War—many U.S. and coalition military units either passed through or were stationed there. OSAGWI’s case narrative addresses three separate events that allegedly involved exposure to chemical agents in the Al Jubayl area: the “loud noise” event and alerts on January 19 through 21, 1991; an Iraqi SCUD missile attack on February 16, 1991; and a noxious fumes event on March 19, 1991, which some U.S. military personnel claim caused them to experience medical problems and turned portions of the T-shirts they were wearing from brown to purple. The need for OSAGWI to investigate these events was underscored by concerns about Gulf War illnesses expressed in a May 1994 report of the U.S. Senate’s Banking, Housing, and Urban Affairs Committee (known as the Riegle Committee) by veterans of Naval Mobile Construction Battalion 24 (NMCB-24). NMCB-24 was a reserve “Seabee” or military construction battalion of 724 enlisted persons and 24 officers. During Operation Desert Shield/Desert Storm, NMCB-24 was stationed alongside NMCB-40, an active duty “Seabee” battalion. Both units occupied Camp 13, a housing and billeting area located in the Al Jubayl industrial zone that was commanded by the senior officer of NMCB-40.
The “Loud Noise” Event
OSAGWI found that the “loud noise” event actually referred to several loud explosive-like noises and related events occurring between January 19 and 21, 1991. As stated in the OSAGWI narrative and confirmed by our review, early on January 19, a very loud noise like an explosion was heard throughout the Al Jubayl area. Units in the area subsequently reported additional explosions, went on alert, and conducted tests for the presence of a chemical warfare agent.
A variety of confusing and contradictory actions subsequently occurred. All NMCB-24 tests for chemical warfare agent were officially reported as negative, but one member of this unit alleged that he had obtained positive test results for a chemical warfare agent in two of three attempts. British units in the vicinity initially reported positive tests for a chemical warfare agent, but detection teams sent to investigate these reports were unable to confirm any such agents. Some eyewitnesses from NMCB-24 reported a large fireball that illuminated the sky and medical symptoms such as runny noses, burning sensations, blisters, and numbness. They stated that those experiencing symptoms reported for medical attention within the next few days. However, other NMCB-24 personnel said that although they were unprotected during these events, they experienced no such symptoms. After reviewing NMCB-24’s medical logs, neither OSAGWI nor we found any records indicating that medical attention for these symptoms was sought on or shortly after January 19, 1991. OSAGWI and our interviews with the NMCB-24 Commander, medical personnel, and senior noncommissioned officers similarly revealed no evidence that any medical attention was sought. OSAGWI found, and we confirmed, that many coalition aircraft were engaged in the air war on the day in question, and Air Force records show that two coalition aircraft flew over the Al Jubayl area at supersonic speed during the early hours of January 19, 1991. OSAGWI concluded that the loud noise and related events were due to sonic booms from these aircraft. 
It also concluded that the presence of chemical or biological warfare agents was “unlikely” because (1) DOD records show that no SCUD missiles were launched toward Saudi Arabia by Iraq on January 19, (2) no verifiable tests in the Al Jubayl area were positive for chemical warfare agents, and (3) no records were found of any individual receiving treatment for symptoms associated with exposure to chemical or biological warfare agents. On January 20-21, 1991, air raid sirens and explosions were heard again in the Al Jubayl area, but available records reviewed by OSAGWI, and checked by us, indicated that chemical detection tests were again negative. OSAGWI again concluded that the presence of chemical or biological warfare agents was “unlikely” because (1) records show a SCUD missile aimed at Dhahran was intercepted and destroyed at high altitude by a Patriot air defense missile at approximately the same time as this incident, (2) there is no record of an impact site in the Al Jubayl area, and (3) no records were found of anyone receiving medical treatment for symptoms associated with exposure to chemical or biological warfare agents.
The SCUD Missile Attack
A second possible exposure of veterans to chemical and biological warfare agents in the Al Jubayl area occurred as the result of an Iraqi SCUD missile attack early in the morning of February 16, 1991. The OSAGWI narrative explains that U.S. national sensors detected this missile early in flight and provided warning of the launch. The missile landed in the waters of Al Jubayl harbor, and the site of impact was quickly found and marked by Coast Guard and Navy boat crews. Later that day, a Navy explosive ordnance disposal team surveyed the marked area with an underwater television system and located missile debris on the harbor’s bottom. Divers confirmed that the missile had broken apart and that the site contained an intact SCUD warhead, guidance section, rocket motor, and miscellaneous components.
Recovery of the smaller SCUD components began on February 19 and concluded with the warhead on March 2. During the recovery operation, tests were conducted, but no evidence was found indicating the presence of chemical or biological agents. The Joint Captured Material Exploitation Center then took custody of the SCUD components, which were subsequently shipped to the Army Missile Command in Huntsville, Alabama. The Command’s evaluation of the recovered SCUD missile components confirmed that the warhead did not contain chemical or biological warfare agent. Some eyewitnesses to this event reported that the SCUD missile was intercepted and shot down by a Patriot missile and during this process could have dispersed chemical or biological warfare agents over Al Jubayl. A Patriot battery was defending Al Jubayl at the time. However, OSAGWI found and we confirmed that this battery was not operational for maintenance reasons at the time of the attack and therefore was not able to engage the SCUD. OSAGWI concluded in its case narrative that while an Iraqi SCUD missile had hit the waters of Al Jubayl harbor, it had not detonated, had caused no damage or injuries, had tested negative for chemical warfare agents, and therefore was definitely not armed with chemical warfare agents.
The Purple T-Shirt Event
The third known possibility of exposure to chemical agents at Al Jubayl occurred on March 19, 1991, when personnel from NMCB-24 were exposed to unidentified airborne noxious fumes. These fumes affected nine persons working in three separate groups. They experienced acute symptoms such as burning throats, eyes, and noses and difficulty in breathing. In addition, portions of the brown T-shirts being worn by these individuals turned purple, as did some of the individuals’ combat boots. The seven persons making up two of the groups immediately sought medical attention and returned to work with no further symptoms after showering and changing clothes.
The two persons in the third group did not seek medical assistance and continued to work. The nine persons involved stated that they had experienced a choking sensation when a noxious cloud enveloped them. None saw the origin of the cloud, but all believed it had come from one of the industrial plants located nearby. Evidence collected by OSAGWI regarding the source of the noxious fumes was inconclusive. One eyewitness of the event said that he had seen purple dust falling in the area that was coming from a smokestack at a nearby fertilizer plant. The Navy’s Environmental and Preventive Medicine Unit No. 2 (EPMU-2) conducted an environmental/occupational hazard investigation and site visit to Al Jubayl in 1994. The resulting EPMU-2 study did not determine the source of the irritant. It noted, however, that the camp was located in a heavily industrialized area and that emissions from a petrochemical plant or from a spill within the camp’s motor park could have been the source of the irritant. The T-shirts and the boots that changed color were given to unnamed U.S. military and Saudi officials. However, the chain of custody cannot be identified, and no reports have been found other than an informal telephone call to NMCB-24 shortly after the incident indicating that “there was nothing to worry about.” The U.S. Army Material Test Directorate and the Natick Research Development and Engineering Center later conducted tests on the type of military T-shirts involved. The Natick tests showed that these T-shirts do turn purple when exposed to acids such as sulfuric (battery) acid or oxides from nitric acid. OSAGWI concluded that chemical warfare agents were definitely not involved in the purple T-shirt event. 
OSAGWI reached this conclusion because (1) the event occurred after the cessation of Gulf War hostilities, (2) there was no record of hostile attack during the time period of the event, and (3) the types of medical problems affecting the individuals involved and their rapid recovery are not consistent with exposure to chemical warfare agents.
Our Review of OSAGWI’s Investigation
As a result of our review of evidence, procedures, and other information obtained from OSAGWI and other sources regarding the Al Jubayl case narrative, we generally concur that OSAGWI’s assessments of whether chemical warfare agents were present are reasonable. The evidence generally supports OSAGWI’s assessment that chemical warfare agents were “definitely not” involved in the SCUD missile and purple T-shirt events. The loud noise incident involved some contradictions in evidence or testimony that we could not resolve, but our work confirmed the credibility of the vast majority of the evidence used by OSAGWI. We noted the existence of another potential explanation of some of the events involved in the loud noise incident. Some documents and other evidence we acquired from a veterans’ organization indicate that an Iraqi aircraft or a patrol boat might have been involved in an attempted chemical attack on Al Jubayl at the time of this incident. OSAGWI is currently investigating this version of events. However, pending the outcome of this continuing investigation, we believe that the currently available evidence still provides a reasonable level of support for OSAGWI’s conclusion that exposure to chemical warfare agents was “unlikely” in this incident. Although we concur with OSAGWI’s assessments in the Al Jubayl case, we believe that the case narrative is not complete and could be misleading because it does not mention the fact that many members of NMCB-24 have reported unusually high levels of health problems since their service in the Persian Gulf War.
We also found that OSAGWI had not coordinated some information developed during this investigation with the Naval Health Research Center for inclusion in its Gulf War illnesses research on Seabees. OSAGWI’s Al Jubayl case narrative states that the methodology it used was designed to investigate reports of exposure to chemical warfare agents and to determine whether chemical weapons were used. OSAGWI officials told us that in this case they had expanded their methodology to include a considerable amount of information in the narrative regarding environmental cleanliness factors affecting the Al Jubayl area. They said they had done this in an effort to better explain the circumstances of the case because some veterans had expressed concern over the hazardous materials they could have been exposed to while they were in Al Jubayl. The narrative thus contained much information explaining that (1) Saudi environmental protection standards were equivalent to those of the U.S. Environmental Protection Agency, (2) these standards were monitored and maintained by the Saudis throughout Operation Desert Storm/Desert Shield, and (3) Saudi monitoring records indicate that normal standards were not exceeded on the date of the purple T-shirt incident. The environmental data included in the narrative, much of which was obtained by EPMU-2, thus indicated that Al Jubayl was no better or worse than comparable industrialized sites in the United States. We concur that OSAGWI’s decision to expand its stated methodology in order to include this information was appropriate. As indicated at the beginning of the narrative, OSAGWI’s charge is to investigate all possible causes of Gulf War illnesses.
However, most of the information presented in this case narrative leads the reader to conclude that exposure to either chemical warfare agents or other chemical agents at Al Jubayl was “unlikely” and probably did not involve a health threat in the limited incident involving the purple T-shirts. The narrative mentions that some NMCB-24 veterans testified before the Congress (the Riegle Committee) but does not state why. The narrative text also contains no information regarding significant DOD actions taken to address the high incidence of post-war health problems reported by members of NMCB-24. DOD has long been aware of health problems reported by NMCB-24. In 1992, DOD began to identify clusters of military personnel who were complaining of medical symptoms they attributed to their Gulf War service. As a result, DOD initiated two field investigations. One of these, performed at the request of the Navy Surgeon General, was a study of illnesses reported by members and former members of NMCB-24 conducted during 1993-94 by the same unit (EPMU-2) that conducted the Al Jubayl environmental study. EPMU-2 personnel visited 6 of NMCB-24’s 12 detachments during this period, conducted a questionnaire study, performed medical examinations, reviewed military and other medical records, interviewed veterans and family members, and otherwise attempted to identify prevalent symptoms experienced by the members of NMCB-24 and diagnoses of their illnesses. Much of the information they collected was computerized and used to produce a series of tables and other statistical data relevant to Gulf War illnesses issues and included in EPMU-2’s final report. This report contained the following conclusions: A significant number of NMCB-24 veterans of the Gulf War have experienced an array of nonspecific symptoms since returning from the Persian Gulf. More than 41 percent of the veterans from three of the six detachments experienced 10 or more symptoms.
No common syndrome or diagnosis was identified in these veterans. The diagnoses identified were the same as those that might be expected in a group of the same age that had not served in the Persian Gulf War. More research was needed. Our review of OSAGWI’s files, our visit to EPMU-2, our interviews of current and former EPMU-2 officials, and our review of all remaining EPMU-2 documentation related to this study revealed additional information. For example, 44 of the 67 witnesses OSAGWI interviewed regarding the facts of the loud noise incident are now reporting health problems they attribute to their service during the Persian Gulf War. A former EPMU-2 physician directly involved in the EPMU-2 study told us that while he had no factual baseline for comparison, it appeared to him that the frequency of symptoms found in NMCB-24 veterans was greater than the frequency to be expected in the general population. This observation, along with the high symptom rates, was one of the reasons the EPMU-2 report recommended more research. NMCB-24 veterans have been involved in testimony before the Congress regarding health problems they attribute to their service in the Persian Gulf War, and the Naval Health Research Center in San Diego, California, is currently performing a major, multiyear, Gulf War illnesses-related epidemiological study involving the vast majority of the Navy’s Seabees. NMCB-24 veterans have also been the subject of several additional research studies related to Gulf War illnesses. OSAGWI was aware of the existence of the EPMU-2 medical study and had a copy on file that was originally obtained by its predecessor, the Persian Gulf Illnesses Investigation Team, in 1996. However, no OSAGWI investigators visited EPMU-2 to review files regarding this study. No information regarding this study, the Naval Health Research Center research project, or other epidemiological studies or research on Gulf War illnesses was included in the case narrative. 
A high-ranking OSAGWI official told us that OSAGWI investigators had been instructed to consider such medical information as outside their charter for inclusion in the case narratives. This official said that they had been so instructed because this line of inquiry was more appropriately the responsibility of the Office of the Assistant Secretary of Defense for Health Affairs and because OSAGWI did not have the expertise to conduct or evaluate epidemiological studies such as the one performed by EPMU-2. We believe that much more information regarding the health complaints of NMCB-24 veterans should have been included in the case narrative. OSAGWI was aware of this information and could have included it without conducting or evaluating epidemiological studies. Including information developed by EPMU-2 regarding the environmental cleanliness of Al Jubayl but excluding EPMU-2’s report and other information specifically related to post-war health complaints by NMCB-24 veterans makes OSAGWI vulnerable to an appearance of bias. Such omissions tend to reinforce the beliefs of some that DOD is inappropriately withholding information. We also found that some information developed by OSAGWI might have significantly added to what is known about Gulf War illnesses issues involving NMCB-24 had OSAGWI coordinated the information with the Naval Health Research Center for use in its currently ongoing Seabee epidemiological study. For example, as determined by OSAGWI and reported in the Al Jubayl case narrative, both NMCB-24 and NMCB-40 were located at Camp 13 during Operations Desert Shield and Desert Storm. Complaints by NMCB veterans regarding post-war medical problems they attribute to Persian Gulf service are well known, having been the subject of several congressional hearings, various research efforts, and other activities addressing Gulf War illnesses issues. 
An OSAGWI official told us that interviews with selected NMCB-40 personnel indicated that personnel from this unit were not experiencing health problems of the same nature and extent as those reported by NMCB-24 veterans. Since NMCB-24 and NMCB-40 occupied the same camp at Al Jubayl, we believe that a determination of whether NMCB-40 veterans are encountering medical problems similar to those being reported by NMCB-24 veterans would be of considerable interest to those concerned with resolving Gulf War illnesses issues. The Naval Health Research Center study is obtaining for analysis a wide range of Gulf War illnesses-related information from current and former Seabees and plans to perform a multifaceted analysis of the information collected. In August 1998, Naval Health Research Center officials told us they had coordinated with OSAGWI officials regarding the Seabee study on several occasions but that OSAGWI officials had not informed them of the relationship between NMCB-24 and NMCB-40. The study’s methodology therefore did not include plans to specifically compare Gulf War illnesses information obtained from veterans of these two units. They acknowledged, however, that such comparisons could be conducted and that they might provide useful information. They said they would be willing to discuss adding such comparisons if OSAGWI officials requested that they do so. We believe such comparisons, especially regarding the extent and nature of post-war medical symptoms, might provide information important to OSAGWI’s investigation and reporting of Gulf War illnesses issues involving the Al Jubayl and other case narratives. OSAGWI officials agreed that the Al Jubayl case narrative needed to be modified to acknowledge the high rate of symptoms reported by members of NMCB-24 and that they would modify the case narrative accordingly. 
They also told us they would coordinate with the Naval Health Research Center regarding new information that might be developed through comparisons of NMCB-24 and NMCB-40 data in the Naval Health Research Center Seabee study.

OSAGWI Did Not Use DOD and VA Medical Databases in Conducting Its Investigations for Cases We Reviewed
DOD and the VA maintain databases that contain self-reported health information and clinical information on thousands of Gulf War veterans. Some of these veterans may have symptoms associated with Gulf War illnesses. Although OSAGWI’s methodology calls for the use of the DOD and VA databases in its investigations, we found it did not access them for the six case narratives selected for our review. Therefore, OSAGWI missed an opportunity to determine whether individuals involved in possible exposure incidents were also reporting symptoms in the databases. Information thus obtained could provide leads to help scope and guide the nature of the investigation and potentially could be combined with other evidence and research efforts conducted by DOD and others to help evaluate whether chemical warfare agents might have been present.

Gulf War Illnesses Databases Maintained by DOD and VA
In response to the complaints of many military personnel that returned from the Gulf War with health problems they believed were related to their deployment, DOD and VA created programs to track the health of Gulf War veterans. Information collected in these programs is stored in databases that describe the health status of a large group of Gulf War veterans who have undergone a standardized examination process to document their health.
DOD’s Comprehensive Clinical Evaluation Program
The multiphase Comprehensive Clinical Evaluation Program (CCEP) was implemented by DOD in June 1994 to provide a systematic clinical evaluation for the diagnosis and treatment of active duty military personnel who have medical complaints they believe could be related to their service in the Persian Gulf. Phase I of the CCEP consists of a medical history, physical examinations, and laboratory tests that are comparable to an evaluation conducted during an inpatient internal medicine hospital admission. CCEP participants are evaluated by a primary care physician at their local medical treatment facility and receive specialty consultations if deemed appropriate. The primary care physician may refer patients to phase II for further specialty consultations depending on the clinical findings of phase I. Phase II evaluations consist of targeted, symptom-specific examinations; laboratory tests; and consultations. During this phase, potential causes of unexplained illnesses are assessed, including infectious agents, environmental exposures, psychological factors, and vaccines. DOD maintains a database that summarizes the clinical evaluations of CCEP participants. The database shows self-reported complaints and symptoms for all participants and physician diagnoses for those who have been examined. In addition, the database shows unit assignments, medical complaints, diagnoses, and possible exposures of individuals who were part of units during the Gulf War that may have come in contact with chemical warfare agents or other environmental hazards. As of October 31, 1998, the CCEP database contained health information on 34,963 service members who had received clinical evaluations as a part of the program.

VA’s Persian Gulf Registry
The VA’s Persian Gulf Registry (VA Registry) was established in 1992. Any Gulf War veteran may participate in the registry, even if that person has no current health complaints.
Like the CCEP, the registry consists of a two-phase examination process. During phase I, the veteran completes a standardized questionnaire on exposures during the Gulf War and health complaints and undergoes a physical examination with laboratory testing. Veterans who have health problems that remain undiagnosed after phase I are referred to more extensive phase II medical evaluations. VA maintains a database that summarizes the results of clinical evaluations of registry participants. It contains information on symptoms and complaints self-reported by veterans and diagnosed by physicians. It also contains information on exposures, birth defects, and undiagnosed illnesses. Like the DOD database, the registry database also contains information on which units the participants were assigned to during the Gulf War. As of July 31, 1998, the VA Registry contained information on the health conditions of 70,051 Gulf War veterans who had physical examinations under the VA program.

Identifying Program Participants Could Help OSAGWI Better Focus Its Investigative Efforts
Each of the case narratives selected for our review describes possible chemical exposure incidents that involve individuals acting alone or as a part of larger units. Many of these individuals may have enrolled in either the CCEP or the VA Registry. OSAGWI could use this data to identify whether individuals involved in the incidents described in the case narratives might be experiencing health problems. Several of the case narratives included in our review describe events that could have been the subject of further analysis using the CCEP and VA Registry. For example, OSAGWI’s ASP Orchard case narrative describes chemical warfare agent alarms at an ammunition storage facility near an orchard outside Kuwait City, Kuwait.
OSAGWI collected information from many of the personnel that inspected this facility and from a variety of other sources, such as the Central Intelligence Agency and the Defense Intelligence Agency. OSAGWI concluded that the alarms were false and that chemical warfare agents probably had not been stored at this facility. However, for the six case narratives we reviewed, OSAGWI investigators did not query the CCEP or the VA Registry in an attempt to determine whether any of the several personnel that inspected the site or any of the hundreds of other personnel encamped nearby had enrolled and had reported or been diagnosed with health problems. Although it would not be definitive, unusually high levels of participation accompanied by the reporting of certain health problems and possible exposures might have led OSAGWI to investigate further. Performing this investigative step would enhance the credibility of OSAGWI’s case narratives and would demonstrate its stated intention to leave no stone unturned in investigating these events. We noted that OSAGWI’s investigative methodology includes the use of the CCEP and the VA Registry and that OSAGWI had used such an analysis in investigating the Khamisiyah incident and in developing its Depleted Uranium environmental exposure report issued on August 4, 1998. For example, in performing the investigation on depleted uranium, OSAGWI investigators queried the CCEP to determine whether an unusually high proportion of the participants involved in the case had experienced kidney damage—a possible medical effect of being exposed to depleted uranium. According to OSAGWI, the analysis showed that these CCEP participants did not suffer unusually high rates of kidney damage compared to the general U.S. population.
Three Case Narratives Appear to Have Been Appropriately Investigated
Except for not using the DOD and VA medical databases, the Al Jaber Air Base, ASP Orchard, and Camp Monterey case narratives generally did not have the weaknesses we found in the other three cases. In investigating these cases, OSAGWI followed its methodology, identified and interviewed important witnesses, appropriately used information from other key sources, included all important information, and accurately presented the information found. These investigations were performed in a thorough manner, and the evidence collected by OSAGWI convincingly supported its assessments. The Camp Monterey case is a good example. In this case, soldiers of the 8th U.S. Army Infantry Division were moving wooden Iraqi crates containing metal canisters out of a building in a bivouac area north of Kuwait City, Kuwait, so that the building could be used to house troops. One of the canisters broke open, spilling a white powder-like substance and causing several soldiers to become ill. At the request of the local commander, two Fox vehicles tested the spilled substance. Both Fox vehicles initially reported detections of sarin, a deadly nerve agent, and this apparently led to some initial reports that soldiers had been exposed to a nerve agent. Later, mass spectrometer tests by these Fox vehicles confirmed that the substance was actually a relatively harmless riot control agent rather than sarin. After interviewing the personnel present (including the Fox crews) and reviewing Fox crew and laboratory analyses of the Fox printouts, OSAGWI found, and we confirmed, that the initial alarm for sarin was an error. Similarly, in both the Al Jaber and ASP Orchard cases, initial Fox alarms for persistent chemical warfare agent could not be confirmed in some instances even by repeated attempts by the same Fox vehicles.
OSAGWI concluded, and we agreed, that had the chemical warfare agents been present, they would have been detected in the repeated tests.

OSAGWI Has Made Changes to Improve Its Investigative and Reporting Processes
We believe that inadequate quality control procedures within OSAGWI contributed to the investigative and reporting problems discussed in this report. During our review of OSAGWI operations, we periodically briefed OSAGWI officials on the nature and types of weaknesses we had found and on our preliminary observations. OSAGWI officials agreed that they needed to improve their investigations and their reporting of the investigation results. They said that they have instituted several changes to their internal quality assurance practices that they believe will considerably strengthen their investigative and reporting processes. According to OSAGWI officials, their current investigative and reporting process has evolved over the 2 years since OSAGWI was established. Consequently, certain enhancements are now in place that were not present when the six case narratives we reviewed were published. More specifically, OSAGWI now requires its investigators to prepare a written investigation plan. The investigation plan must specify the information that will be obtained, the direction the investigation will take, and the schedule. The plan is expected to mirror the overall methodology adopted by the division within the Investigation and Analysis Directorate for its investigations. The division chief is to review the investigation plan and provide feedback to the investigator on the scope and direction of the investigation and the proposed schedule. Following approval of the plan by the division chief, the investigator can begin the investigation. Also, the process now includes a requirement for a team directional guidance meeting when the investigation is 50- to 75-percent complete.
At this meeting, the investigator briefs a small group of analysts from within the investigator’s division on the investigation’s scope, direction, and findings to that point. The purpose of the meeting is to identify at an early stage any problems in the direction of the investigation and to identify any major information sources that are not being used. According to OSAGWI, each case investigation is now periodically reviewed by the Director of the Investigations and Analysis Directorate to allow the Director to adjust, as necessary, the scope of the investigation and the case narrative development. Furthermore, the peer review process for case narratives is now more robust because the peer review team, comprising experienced individuals, reviews the completed case narrative along with the source materials. The peer reviewers are responsible for ensuring that the text in the case narrative is supported by the source material and also for identifying portions of the text needing footnotes to source materials. In addition, an OSAGWI official said the internal review of case narratives by key individuals within the OSAGWI organization is more rigorous than it used to be. OSAGWI officials believe that these enhancements to their review processes will preclude the recurrence of the types of investigative and reporting weaknesses we found.

Conclusions
The weaknesses in the scope and quality of OSAGWI’s investigations and in reporting the results of these investigations in the Reported Exposure to Mustard Agent, Marine Minefield Breaching, and Al Jubayl case narratives are significant; however, we agree with OSAGWI’s assessments of the likelihood of the presence of chemical warfare agents in all but the Marine Minefield Breaching case narrative. In our opinion, the lack of effective quality assurance policies and practices within OSAGWI contributed to the weaknesses we noted.
A stronger quality control mechanism for its investigations would provide greater assurance that all relevant facts are included and that the information presented is accurately and properly sourced. More consistent use of some types of medical information would also strengthen the rigor of OSAGWI’s investigations. By querying available medical databases for all cases, OSAGWI investigators might have been able to better determine whether personnel at or near the sites of incidents had reported or been diagnosed with unusual health problems, thus helping indicate whether increased investigative efforts regarding the potential presence of chemical warfare agents or other environmental hazards in these incidents might be appropriate. OSAGWI’s changes to its internal review process appear to be positive steps in ensuring the quality of investigations and the related case narrative reports. Because OSAGWI initiated these changes after the case narratives we reviewed were published, we could not determine their effectiveness in ensuring the quality of OSAGWI investigations and reports. However, the procedures should incorporate two features to enhance the credibility of the review process. First, it is critical that those named to review OSAGWI’s investigations are independent of the team investigating the incidents to avoid the appearance of a conflict of interest. Second, it is important that the procedures in place lead reviewers to thoroughly check to ensure that all relevant information obtained by the investigation teams has been included in the case narrative reports, that all important leads have been pursued, and that the investigation team has reached conclusions that are fully substantiated by the facts. 
Information about the potential for differences in the occurrence of Gulf War illnesses symptoms between NMCB-24 and NMCB-40 developed during the Al Jubayl case investigation was not shared with the Naval Health Research Center for consideration for inclusion in its ongoing Gulf War illnesses research. We believe this information has potential for use in helping DOD evaluate issues related to the high levels of health problems reported by many of the Seabees stationed at Al Jubayl during the Gulf War.

Recommendations
To ensure that OSAGWI’s case narratives contain all relevant facts, we recommend that the Secretary of Defense direct the Special Assistant for Gulf War Illnesses to revise the Marine Minefield Breaching, Exposure to Mustard Agent, and Al Jubayl, Saudi Arabia, case narratives to reflect the new and/or unreported information noted in our report and examine whether it should change its conclusion about the likelihood of the presence of chemical warfare agents in the Marine Minefield Breaching case from “unlikely” to “indeterminate” in light of the additional information now known about this case.
To enhance the thoroughness of OSAGWI’s investigative and reporting practices, we recommend that the Secretary of Defense direct the Special Assistant for Gulf War Illnesses to use the DOD and VA Gulf War clinical databases to assist in designing the nature and scope of all OSAGWI investigations; include relevant medical information in its case narratives where it is needed to fully explain incidents of possible exposure to chemical agents or other potential causes of Gulf War illnesses; and ensure that its internal review procedures provide that (1) those reviewing an investigation and related report are independent of the team investigating the incident and (2) steps are in place that will lead the reviewers to thoroughly check that all relevant information obtained by the investigation teams has been included in the case narrative reports, that all conclusions have been fully substantiated by the facts, and that all logical leads have been pursued. Because of the potential research value of information developed through OSAGWI investigations, we further recommend that OSAGWI contact the Naval Health Research Center regarding the usefulness and desirability of comparing data between the veterans of NMCB-24 and NMCB-40 for purposes such as helping to determine whether veterans of these two units are reporting the same types and numbers of symptoms.

Agency Comments and Our Evaluation
DOD generally concurred with a draft of this report, agreeing to revise the case narratives we reviewed to include new or unreported data, and to reassess case narrative findings based upon any new evidence. In particular, DOD agreed to update the Marine Minefield Breaching case to reflect new information, conduct additional analysis on the issue of artillery fire during the breaching operation, and reassess its conclusions as appropriate. DOD disagreed with our proposed use of the CCEP and the VA Gulf War Health Examination Registry in OSAGWI investigations.
In commenting on this report, DOD stated it was concerned that these databases might be inappropriately used to establish a causal relationship between an event and the medical findings of the registries. DOD therefore maintains it would be inappropriate for case investigations, which were designed to report simply on what happened on the battlefield, to make assumptions about the significance or validity of the data in these databases without the establishment of a causal association by scientific research. DOD also stated concerns about preempting scientific research in this area and drawing premature conclusions that would be fallacious. However, DOD agreed that these databases need to be examined and analyzed for what they can contribute to understanding the illnesses of Gulf War veterans, and noted that the Department has been involved in a number of research and other analyses of these databases. We agree that information from these databases should not be used by investigators to establish a causal association and/or conclusions as described by DOD, and did not intend that it should be used for this purpose. We also agree that the establishment of Gulf War illnesses causal relationships is most appropriately a research activity. However, we also believe that the VA and DOD databases could potentially provide relevant information to the investigator about whether individuals who were at or near a site under investigation are reporting health problems, and that this information could be appropriately used, when combined with other information, to help guide the nature and scope of OSAGWI investigations. For example, case investigators could use VA Registry and CCEP data, particularly where it shows that large numbers of individuals at or near a given site are reporting health problems, as an indicator for providing investigative leads and for use in establishing the nature and scope of an investigation. 
This does not mean, as implied in DOD’s comments, that such use of these databases would entail routine inclusion of the reviewed data in the published case narratives, that the databases would serve as a replacement for research activities, or that their use would result in interpretations of non-scientifically based cause-and-effect relationships. We believe that these databases can be used by investigators to help guide and scope their efforts without entailing the types of misuse described by DOD. We modified the final report text and recommendations to clarify our position regarding this finding. DOD agreed that the Al Jubayl case narrative needed to be modified to place the events of this incident in fuller context, including acknowledging that some servicemembers stationed at Al Jubayl, especially members of NMCB-24, have reported high levels of health problems. DOD also agreed to request that the Naval Health Research Center undertake an analytical comparison regarding NMCB-24 and NMCB-40, and agreed that independent reviewers are critical to a thorough and acceptable report on OSAGWI investigations. VA also disagreed with our proposed use of the CCEP and the VA Gulf War Health Examination Registry in OSAGWI investigations in its written comments on a draft of this report. VA’s comments were similar to DOD’s regarding this matter. VA also expressed doubts regarding the usefulness to research of data comparisons involving NMCB-24 and NMCB-40. Additional discussion of DOD’s and VA’s comments and our evaluation is included in appendixes I and II.
Pursuant to a congressional request, GAO provided information on the: (1) Department of Defense's (DOD) progress in establishing an organization to address Gulf War illnesses issues; and (2) thoroughness of DOD's Office of the Special Assistant for Gulf War Illnesses' (OSAGWI) investigations into and reporting on incidents of veterans' potential exposure to chemical or biological warfare agents during the Gulf War. GAO noted that: (1) DOD has made progress in carrying out its mandate to comprehensively address Gulf-War illnesses-related issues; (2) it has assisted veterans through its outreach program by clearing large backlogs of veterans' inquiries, using a toll-free hot line, setting up a Web site, and publishing a newsletter; (3) in addition, it has assisted veterans in obtaining medical examinations and other services at DOD and Department of Veterans Affairs (VA) facilities; (4) through the course of its investigations and other work, OSAGWI has identified needed improvements in DOD's equipment, policies, and procedures and has worked with various DOD agencies to implement changes designed to provide better protection to U.S. 
servicemembers on a contaminated battlefield; (5) OSAGWI generally applied appropriate investigative procedures and techniques in conducting its work; (6) however, GAO found that three of the six case narratives it reviewed contained weaknesses such as failures to follow up with appropriate individuals to confirm key evidence, to identify or ensure the validity of some evidence, to include some important information, and to interview some key witnesses; (7) in the remaining three cases, OSAGWI conducted its investigations without evidence of these weaknesses; (8) in all six cases, OSAGWI missed an opportunity to perform more complete investigations because it did not take advantage of potentially valuable sources of relevant information in DOD and VA clinical databases; (9) GAO does not know whether the investigatory and reporting weaknesses it found in its review of these six cases might also exist in the cases that OSAGWI later investigated; (10) despite these weaknesses, GAO agreed with OSAGWI's conclusions about the likelihood of the presence of chemical warfare agents in five of the six cases it reviewed; (11) the one exception involved a potential exposure of U.S. Marine Corps personnel to a chemical warfare agent during a minefield breaching operation; (12) OSAGWI concluded that exposure in this case was unlikely; (13) however, GAO found that OSAGWI had overlooked some information it had in its possession and also did not include all relevant information in its case narrative; (14) after reviewing the overlooked information and considering all relevant information OSAGWI had in its files, GAO believes that OSAGWI should reassess the likelihood of exposure in this case; and (15) GAO believes that the lack of effective quality assurance policies and practices in OSAGWI's investigating and reporting processes contributed to the weaknesses noted.
Background
At the end of fiscal year 1996, the Air Force reported that it was managing inventory valued at $29.3 billion. DOD uses a coding system to categorize the condition of its inventory. These codes are intended to indicate whether stored inventory is (1) issuable without qualification, (2) in need of repair, (3) usable for only a limited time, or (4) unrepairable and ready for disposal. DOD’s inventory management goal is to achieve a cost-effective system that provides the inventory needed to maintain readiness. When items in DOD’s inventory cannot be readily placed in one of these categories, DOD uses other condition codes to indicate suspended inventory. Because these codes do not indicate an item’s usability, item managers must direct that the item be inspected or tested to determine its usability. The primary suspended inventory condition codes are as follows:

J — inventory at storage warehouses that is awaiting inspection to determine its condition (hereafter referred to as material in inventory),
K — inventory returned from customers or users to storage warehouses and awaiting condition classification (hereafter referred to as customer returns),
L — inventory held at storage warehouses pending litigation or negotiation with contractors or common carriers (hereafter referred to as inventory in litigation),
Q — quality-deficient inventory returned by customers or users due to technical deficiencies (hereafter referred to as quality-deficient inventory), and
R — inventory returned by salvage activities that do not have the capability to determine the material condition (hereafter referred to as reclaimed inventory).

Appendix II contains a detailed explanation of DOD’s supply condition codes. Inventory categorized as suspended is not available for use until it has been tested to determine whether it is usable.
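The categorization and time-standard logic described above can be sketched as a simple lookup. This is an illustrative sketch only: the code descriptions come from this report, but the standard durations are hypothetical placeholders (table 1's actual values are not reproduced in the text), with the exception of the 10-day customer-returns standard, which DOD policy specifies.

```python
# Illustrative sketch of DOD's suspended-inventory condition codes and a
# time-standard compliance check. Code descriptions come from the report;
# all durations except the 10-day customer-returns standard (per DOD
# policy) are HYPOTHETICAL placeholders, since table 1 is not shown here.

SUSPENSION_CODES = {
    "J": "material in inventory awaiting inspection",
    "K": "customer returns awaiting condition classification",
    "L": "inventory in litigation or negotiation",
    "Q": "quality-deficient inventory returned by customers or users",
    "R": "reclaimed inventory returned by salvage activities",
}

# Maximum days an item should remain suspended (table 1 analogue).
TIME_STANDARDS_DAYS = {
    "K": 10,   # DOD policy: reclassify customer returns within 10 days
    "J": 30,   # hypothetical placeholder
    "Q": 45,   # hypothetical placeholder
    "R": 60,   # hypothetical placeholder
    # "L" omitted: litigation items assumed to have no fixed standard here
}

def meets_time_standard(code, days_suspended):
    """True/False against the standard, or None if no standard applies."""
    standard = TIME_STANDARDS_DAYS.get(code)
    if standard is None:
        return None
    return days_suspended <= standard

print(meets_time_standard("K", 730))  # a 2-year customer return: False
print(meets_time_standard("L", 22))   # no standard applies: None
```

Under this sketch, a customer return suspended for 2 years, like several items in our sample, fails the 10-day standard by a wide margin, while items without a standard can only be flagged for manual review.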
In some instances, inventory in this category that has been found to be usable can meet customer needs, thus contributing to overall military capability. DOD recognizes that inventory in a suspended status for long periods can adversely affect the availability of resources and the effectiveness and economy of supply operations. To minimize the amount of items in suspended inventory, DOD set standards for the amount of time inventory should remain categorized as suspended. These standards consider the reason for suspending the inventory and the difficulty of determining the usability of the items. The time standards by suspension category are shown in table 1. A number of organizations are involved in the management and control of suspended inventory. The Air Force Materiel Command (AFMC) administers the Air Force supply system and provides suspended inventory management policies and procedures. AFMC has five Air Logistics Centers (ALC) that are located in different regions throughout the United States. Within each ALC, item managers are responsible for maintaining the records for suspended inventory, initiating efforts to determine the usability of suspended inventory, deciding whether to procure items in addition to those in suspended status, and deciding whether suspended items should be returned to inventory or disposed. Suspended inventory is stored at warehouses operated and managed by the Defense Logistics Agency (DLA). These storage activities receive, store, and issue inventory and maintain inventory records. Once the usability of suspended inventory has been determined, storage activities reclassify the inventory as ready for issue, in need of repair, or ready for disposal.

Reported Value of Suspended Inventory Is Over $3 Billion
DOD reported that about $3.3 billion of secondary items was in a suspended status between April and June 1997. Figure 1 shows the distribution of the reported value of suspended inventory among DOD components.
The Warner Robins ALC accounted for about $1.3 billion (53 percent) of the Air Force’s suspended inventory. Figure 2 summarizes the value of suspended inventory by ALC, and figure 3 shows the value of suspended inventory by condition code at Warner Robins. Appendix III contains additional details on the quantity and value of suspended inventory items.

Ineffective Management Can Increase Costs and Reduce Readiness
Significant management weaknesses exist for inventory categorized as suspended. The Air Force is not reviewing the status of these items in a timely manner and has miscategorized a significant amount of inventory. As a result, the Air Force is likely incurring unnecessary logistics costs and missing opportunities to support operational units’ needs in a timely manner. At Warner Robins, a substantial number of items failed to meet time standards for inspection. As a result, items that may have been needed for use in the supply system were not being considered for use. We reviewed 1,971 judgmentally selected suspended inventory items, valued at about $67 million, to determine the length of time the inventory remained in a suspended status. Of the 1,820 sample items with standards, valued at $65.8 million, 1,757 items failed to meet the applicable DOD time standards. The remaining 151 sample items without time standards remained in suspension, with times ranging from 22 days to over 8 years. Figure 4 summarizes the number of sample items that met or failed to meet DOD time standards, and figure 5 shows the time items remained in a suspended status by suspension category. Appendix III contains specific details of our analysis.

Timely Reviews of Suspended Inventory May Preclude Unnecessary Repairs
The Air Force may unnecessarily invest millions of dollars to send some inventory for repair when the need may have been met from inventory in suspension.
Since Warner Robins was not making timely reviews of its inventory in suspension, usable items may have existed in that category that could have been used to meet supply system demands. Our review indicated that Warner Robins officials had improperly identified 3,418 customer return items, worth $115 million, as inventory in need of repair. Because these items were improperly identified as needing repair, Warner Robins officials did not inspect them to determine their usability, which in turn meant that the Air Force may have incurred costs to repair other items when usable items were actually in suspension. We were not able to determine the value of these unnecessary repair costs.

Suspended Inventory Is Often Not Considered as a Way to Satisfy Critical Operational Unit Demands
Inventory managers have missed opportunities to fill orders with usable items because of the untimely handling of suspended inventory. As a result, suspended inventory is not available for use when needed by customers. When demands are made on the supply system and assets are not available to fill those demands, backorders result. For the suspended items in our sample, Warner Robins had over 2,000 concurrent backorders, worth about $53 million. About 65 percent of these backorders were essential to a weapon system’s operation and thus adversely affected the system’s ability to carry out all or portions of its assigned operational missions. If the duration of suspensions had been monitored and usability had been determined within a reasonable amount of time, over 500 of our sample items, worth about $7 million, could have been used to fill some of the backorders, as shown in table 2. The following examples show how weaknesses in the management of suspended inventory can affect access to potentially usable inventory: Warner Robins had four data entry keyboards on backorder—two of which were classified as mission critical. The keyboards, valued at $16,000 each, are used on B-52H aircraft.
Warner Robins inventory records showed two keyboards (see fig. 6) had been suspended in reclaimed inventory for over 2 years. In August 1997, two B-52H aircraft were not fully operational (i.e., unable to fly portions of their missions) due to the unavailability of these keyboards. One aircraft had been unable to fly portions of its mission for 175 days and the other for 24 days. At the time of our visit, the item manager had not taken action to resolve the status of the keyboards. Warner Robins had 11 signal converters on backorder—all of which were classified as mission critical. The converters, valued at $36,000 each, are used on the B-52H aircraft. Warner Robins inventory records showed three converters (see fig. 7) had been in reclaimed inventory for 2 years, from June 1995 to June 1997. In June 1997, two B-52H aircraft were not operational (i.e., grounded and unable to fly any portion of their missions) due to the unavailability of these converters. One aircraft had been grounded for 33 days and the other for 6 days. After we brought this matter to the attention of Warner Robins officials, they informed us that testing would be performed on the three converters in reclaimed inventory to determine their potential use in satisfying backorders. Maintaining Unneeded Inventory Increases Storage Costs Inventory that cannot be applied to any foreseeable need is declared excess and subject to disposal action. Warner Robins reported over 5,300 items on hand, worth over $184 million, as excess for the sample items we reviewed. Prompt disposal of such unneeded items can reduce suspended inventory and reduce inventory holding costs. Maintaining inventory that is not needed is expensive and does not contribute to an effective, efficient, and responsive supply system. DLA and private industry organizations have previously estimated that holding costs ranged from less than 1 to 15 percent or higher of an item’s inventory value. 
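Applied to the unneeded stock reported at Warner Robins, that holding-cost range gives a rough sense of annual scale. The following is an illustrative sketch of the arithmetic, not a GAO computation; the $184 million excess figure and the 1-to-15-percent range come from the report, while the calculation itself is ours:

```python
# Rough annual holding-cost estimate for the excess inventory reported
# at Warner Robins, using the less-than-1-to-15-percent-of-item-value
# range previously estimated by DLA and private industry organizations.
# Illustrative only; actual holding costs depend on the items held.
excess_value = 184_000_000  # reported excess inventory value, in dollars

low = excess_value * 0.01   # low end of the cited range (1 percent)
high = excess_value * 0.15  # high end of the cited range (15 percent)

print(f"${low:,.0f} to ${high:,.0f} per year")
```

Even at the low end of the range, holding the reported excess would cost well over a million dollars each year, consistent with the estimate below.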
Although it is difficult to determine the precise costs to manage and maintain excess stocks, our review indicates that these costs would be millions of dollars each year. Weak Management Controls Exist for Inventory in Suspended Categories AFMC and the Warner Robins ALC lack adequate internal management controls over suspended inventory. A number of factors contributed to delays in resolving the status of suspended inventory and prolonged inventory suspensions. First, AFMC guidance hampers the proper identification, timely inspection, and prompt reclassification of suspended inventory. Second, Warner Robins lacks local policies and procedures that prescribe levels of responsibility and accountability for managing suspended material. Third, AFMC and Warner Robins do not provide adequate oversight and monitoring of suspended inventory. AFMC Guidance Results in Improper Classifications and Untimely Resolution AFMC supplemental guidance enabled $846 million of inventory in need of repair stored at Warner Robins to be improperly assigned to the customer returns suspension code, thus overstating the magnitude of the Air Force’s and Warner Robins’ suspended inventory. Although our review was limited to Warner Robins, the remaining four ALCs are also required to comply with the supplemental policy. Consequently, the magnitude of the suspended inventories at the other ALCs may also be overstated. According to DOD policy, material returned in an unknown condition by a customer should be assigned to customer returns and reclassified within 10 days. AFMC supplemental guidance, on the other hand, states that two-level maintenance items returned for repair should be assigned to this same category. When we informed AFMC officials that both customer returns and repair items were commingled in the customer returns suspension code, one official acknowledged that items not in need of repair may not receive management attention.
When we brought this same matter to the attention of Warner Robins officials, they told us that, in complying with the supplemental guidance, they assumed all items (including $115 million worth of items in an unknown condition that were returns from customers) were in need of repair, and thus made no attempts to inspect and reclassify them. At Warner Robins, none of the 31 customer returns we reviewed met the 10-day DOD time standard; in fact, 17 of the customer returns had been suspended for over 1 year. Waiver Guidance Raises Questions DOD policy for managing reclaimed inventory states that these items should be reclassified in 180 days. AFMC supplemental guidance waives the standard because of a shortage of repair funds that hindered item managers’ ability to schedule reclaimed inventory for inspection within the 180-day limit. However, waiving the standard exacerbates existing problems with lengthy suspensions. At Warner Robins, 99 percent of the 990 reclaimed inventory items we sampled remained suspended more than 180 days, and 62 percent of the inventory had been suspended over 2 years. Table 3 shows the number of reclaimed inventory items that had been suspended for more than 2 years. Warner Robins Lacks Suspended Inventory Procedures Warner Robins lacks specific procedures for resolving the status of items, assigning responsibility for carrying out these procedures, and prescribing related accountability. Air Force policy indicates that ALCs are responsible for preparing comprehensive, explicit instructions essential to effectively manage inventory. Warner Robins item managers and DLA warehouse personnel did not agree as to who within their organizations is responsible for resolving suspended inventory. Item managers told us that warehouse personnel are responsible for taking the necessary actions to monitor reclassification of suspended inventory because those personnel have physical possession of the material. 
Warehouse personnel told us that item managers must direct disposition of suspended material. Consequently, neither level assumed responsibility. When we pointed out the need for clearly defined responsibilities to Warner Robins top management officials, they told us that item managers are responsible for resolving suspended inventory issues and indicated that Warner Robins would begin drafting suspended inventory regulations for its item managers. Suspended Inventory Reclassification Efforts Are Not Monitored DOD policy requires periodic reviews of suspended inventory items to ensure that their usability is determined in a timely manner. However, this requirement is not carried out. For the majority of our sample items, the item managers could not tell us why the items had been suspended or who had directed suspension and could not easily determine how long the items had been suspended. Warner Robins officials told us they do not monitor the age of suspended inventory, even though DOD policy requires that monitoring be done to keep within prescribed time limits. Warner Robins officials stated that they did not regularly compile data on the quantity, value, or length of time material is suspended or report such data to AFMC because resolving suspended items’ status was not a high priority. Further, AFMC officials told us that they have not monitored suspended inventory management since the late 1980s. Adequate management oversight could have highlighted prolonged suspensions and indicated the necessity for routine monitoring of the quantity, value, and length of time items are suspended. If Warner Robins had monitored the duration of some suspensions, their usability could have been resolved within a reasonable time. For example: In May 1986, in anticipation of a patent infringement litigation, an item manager was instructed to retain records and files involving a supplier of M-16 rifle conversion kits for 20 years. 
At the time of our visit, one M-16 rifle conversion kit (see fig. 8) had been suspended for almost 9 years. An additional 985 kits were being held in an issuable condition, according to the item manager. Subsequent to our visit, we were informed that the item manager misinterpreted the retention instructions. Rather than just retaining the records and files, the item manager had also been unnecessarily holding all 986 kits. The item manager informed us that all 986 kits are excess and initiated action to dispose of them. According to warehouse records, one electron tube worth $2,400 had been suspended in litigation for 362 days. The item manager did not know why the item was suspended, who suspended the item, or when the item was placed in suspension. However, Warner Robins warehouse records showed that the tube had been returned by a customer because it was not the item requested from supply. When warehouse personnel realized that the serviceable item was being erroneously held in litigation, they reclassified the electron tube to an issuable condition. Four digital computers for the F-4G aircraft had been suspended in reclaimed inventory for over 4 years. According to the item manager, there has been little or no demand for the computers, valued at $73,300 each, because in 1996 the F-4G aircraft was taken out of service. As a result of our findings, the item manager informed us that the digital computers would be recommended for disposal. Suspended Inventory Management Weaknesses Have Not Been Identified in Financial Integrity Act Assessments The Federal Managers’ Financial Integrity Act of 1982 requires agency heads to assess their internal controls annually and report their findings to the President and the Congress. The Air Force provides its assessments to DOD for inclusion in the Secretary of Defense’s report to the Congress. 
We reviewed internal control assessments by Warner Robins, AFMC, and the Air Force to determine if the Air Force had reported suspended inventory management by ALCs as a material weakness and found that it had not. One criterion for determining whether an internal control weakness is material is if it significantly weakens safeguards against waste. The problems we identified demonstrate that suspended inventory management is vulnerable to waste and warrants special emphasis in future Financial Integrity Act assessments. Conclusions The management of DOD’s inventory of spare parts and other secondary items has been considered a high-risk area for several years. Therefore, DOD’s reported $3.3 billion suspended inventory is a problem that warrants management attention. In terms of reported dollar value of suspended inventory, the Air Force represents the biggest problem among the services; within the Air Force, the Warner Robins ALC accounts for the largest share. At Warner Robins, we found significant weaknesses in its management of suspended inventory. Since there are standard policies for managing suspended inventory items across the ALCs and the weaknesses in the process contribute to some of the problems we identified, other ALCs may have similar problems. Air Force and DOD officials have generally stated, and our review confirmed, that ineffective management and delays in determining the usability of suspended inventory can result in increased logistics and support costs and affect readiness. 
At Warner Robins, (1) item managers generally were not complying with DOD standards for determining the usability of suspended inventory items, (2) about 64 percent of the items we sampled had been in the suspended category for more than 1 year and some longer than 6 years, (3) item managers were following AFMC guidance that does not comply with DOD and Air Force policy, (4) written procedures for controlling suspended inventory were lacking, and (5) management oversight of suspended inventory was limited. Further, neither Warner Robins nor the Air Force has identified suspended inventory as a material management weakness under the Federal Managers’ Financial Integrity Act. Recommendations To improve the management of suspended items, we recommend that the Secretary of Defense direct the Secretary of the Air Force to ensure that, at Warner Robins (1) suspended inventory is properly identified, monitored, inspected, and classified within established DOD timeframes and (2) suspended items receive adequate visibility at all management levels, up to and including the service headquarters, through targeting suspended inventory problems as an issue for review in the Federal Managers’ Financial Integrity Act assessments. Also, we recommend that the Secretary of the Air Force direct Warner Robins ALC to establish explicit guidance on responsibility and accountability for resolving suspended inventory status, carry out necessary actions, and follow up to make sure that the actions have been promptly and correctly taken. Finally, we recommend that the Secretary conduct assessments of suspended inventory practices at the four other ALCs to determine the need for similar remedial actions and direct any affected ALC to take such actions. Agency Comments In written comments on a draft of this report, DOD agreed with our recommendations (see app. IV). 
DOD stated that on November 13, 1997, Air Force Headquarters provided guidance to the Air Force Materiel Command requesting a plan to correct deficiencies in managing suspended stock and initiate aggressive corrective actions. The plan is due to the Air Force by mid-December 1997. We are sending copies of this report to other appropriate congressional committees, the Secretaries of Defense and the Air Force, and the Director of the Office of Management and Budget. Please contact me at (202) 512-8412 if you have any questions concerning this report. Major contributors to this report are listed in appendix V. Scope and Methodology To quantify the number and value of the Department of Defense’s (DOD) suspended inventory, we obtained computerized records of inventory held in suspended condition codes between April 1997 and June 1997 at all military services and Defense Logistics Agency (DLA) inventory control points. We removed surcharges covering the costs to operate the supply system, and we revalued the suspended inventory at the latest acquisition cost. These databases generate the records, statistics, and reports that DOD uses to manage its inventories, make decisions, and determine requirements. We did not independently verify the accuracy of the military services’ and DLA’s inventory databases from which we obtained data. Therefore, our report notes that these data are reported values. Using these inventory records, we identified the Air Force and Warner Robins Air Logistics Center (ALC) as the DOD component and its inventory control activity with the highest reported dollar value of suspended items. At Warner Robins, we reviewed a judgmental sample of 1,971 suspended items (valued at $67 million and representing 101 different inventory numbers).
We excluded depot-level repairables suspended in the repair cycle process (M condition) from our review because this status is a normal condition for this type of material and the items are routinely considered as assets in the requirement computations of the inventory control activities. We also excluded suspended ammunition (N condition) because this inventory is held for emergency combat use. We reviewed policies and procedures and obtained other relevant data related to suspended inventory management from officials at the DLA Headquarters, Alexandria, Virginia; Air Force Materiel Command, Wright-Patterson Air Force Base, Ohio; and Warner Robins ALC and Defense Distribution Depot, Georgia. To determine the age of our sample items, we held discussions with item managers and reviewed storage activity data and inventory records. To learn whether issues associated with suspended items were promptly resolved and the reasons for delays in resolving the inventory status of suspended items, we reviewed Air Force and Warner Robins implementing guidance and assessments of internal controls. Such information provided the basis for conclusions regarding the management of suspended inventory. To determine if the Air Force had emphasized suspended inventory management as part of its assessment of internal controls, we reviewed assessments from Warner Robins for fiscal years 1993-97, Air Force Materiel Command for fiscal years 1995-96, and the Air Force Headquarters for fiscal years 1993-96. To assess the accuracy of data maintained for our sample items, we reviewed the results of several recent Warner Robins inventory accuracy assessments. To ensure the accuracy of inventory records for our sample items, we obtained additional evidence from Warner Robins item managers and warehouse personnel. Consequently, we are confident that our findings represent material conditions for the items we reviewed. 
We performed our review between April and October 1997 in accordance with generally accepted government auditing standards.

Supply Condition Codes

Serviceable (issuable without qualification): New, used, repaired, or reconditioned materiel that is serviceable and issuable to all customers without limitation or restriction.

Serviceable (issuable with qualification): New, used, repaired, or reconditioned materiel that is serviceable and issuable for its intended purpose but is restricted from issue to specific units, activities, or geographical areas by reason of its limited usefulness or short service life expectancy.

Serviceable (priority issue): Items that are serviceable and issuable to selected customers but must be issued before supply condition codes A and B materiel to avoid loss as a usable asset.

Serviceable (test/modification): Serviceable materiel that requires test, alteration, modification, technical data marking, conversion, or disassembly, not including items that must be inspected or tested immediately before issue.

Unserviceable (limited restoration): Materiel that involves only limited expense or effort to restore to serviceable condition and is accomplished in the storage activity in which the stock is located. The materiel may be issued to support ammunition requisitions coded to indicate acceptability of usable stock.

Unserviceable (reparable): Economically reparable materiel that requires repair, overhaul, or reconditioning, including reparable items that are radioactively contaminated.

Unserviceable (incomplete): Materiel requiring additional parts or components to complete before issue.

Unserviceable (condemned): Materiel that has been determined to be unserviceable and does not meet repair criteria.

Suspended (in stock): Materiel in stock that has been suspended from issue, pending condition classification or analysis, when the true condition is not known.

Suspended (returns): Materiel returned from customers or users and awaiting condition classification.

Suspended (litigation): Materiel held pending litigation or negotiation with contractors or common carriers.

Suspended (in work): Materiel that has been identified on an inventory control record but turned over to a maintenance facility or contractor for processing.

Suspended (ammunition suitable for emergency combat use only): Ammunition stocks suspended from issue except for emergency combat use.

Unserviceable (reclamation): Materiel that is determined to be unserviceable and uneconomically reparable, as a result of physical inspections, teardown, or engineering decision, but contains serviceable components or assemblies to be reclaimed.

Suspended (quality deficient): Quality-deficient exhibits returned by customers or users as directed by the Integrated Materiel Manager, due to technical deficiencies reported by Quality Deficiency Reports. (This code is for intra-Air Force use only.)

Suspended (reclaimed items awaiting condition determination): Assets turned in by reclamation activities that do not have the capability (e.g., skills, personnel, or test equipment) to determine materiel condition. Actual condition will be determined before induction into maintenance activities for repair or modification.

Unserviceable (scrap): Materiel that has no value except for its basic materiel content.

Additional Information on Suspended Material and DOD Time Standards

Table III.1 shows the reported quantity and value of suspended inventory items by ALC, and table III.2 shows this information specifically for Warner Robins ALC. Table III.3 shows the number of items in our sample that met or failed to meet DOD time standards, and table III.4 shows the number of items that were in a suspended status at the time of our review and the amount of time that the items were suspended. The suspension categories covered in these tables are material in inventory (J), customer returns (K), inventory in litigation (L), quality-deficient inventory (Q), and reclaimed inventory (R).

Comments From the Department of Defense

Major Contributors to This Report

National Security and International Affairs Division, Washington, D.C.
Norfolk Field Office

Kansas City Field Office

Related GAO Products

High-Risk Series: Defense Inventory Management (GAO/HR-97-5, Feb. 1997).

Defense Logistics: Much of the Inventory Exceeds Current Needs (GAO/NSIAD-97-71, Feb. 28, 1997).

Defense Inventory: Spare and Repair Parts Inventory Costs Can Be Reduced (GAO/NSIAD-97-47, Jan. 17, 1997).

Logistics Planning: Opportunities for Enhancing DOD’s Logistics Strategic Plan (GAO/NSIAD-97-28, Dec. 18, 1996).

1997 DOD Budget: Potential Reductions to Operation and Maintenance Program (GAO/NSIAD-96-220, Sept. 18, 1996).

Defense IRM: Critical Risks Facing New Materiel Management Strategy (GAO/AIMD-96-109, Sept. 6, 1996).

Navy Financial Management: Improved Management of Operating Materials and Supplies Could Yield Significant Savings (GAO/AIMD-96-94, Aug. 16, 1996).

Defense Logistics: Requirements Determinations for Aviation Spare Parts Need to Be Improved (GAO/NSIAD-96-70, Mar. 19, 1996).

Defense Inventory: Opportunities to Reduce Warehouse Space (GAO/NSIAD-95-64, May 24, 1995).

Defense Supply: Inventories Contain Nonessential and Excessive Insurance Stocks (GAO/NSIAD-95-1, Jan. 20, 1995).

Army Inventory: Unfilled War Reserve Requirements Could Be Met With Items From Other Inventory (GAO/NSIAD-94-207, Aug. 25, 1994).

Air Force Logistics: Improved Backorder Validation Procedures Will Save Millions (GAO/NSIAD-94-103, Apr. 20, 1994).

Air Force Logistics: Some Progress, but Further Efforts Needed to Terminate Excess Orders (GAO/NSIAD-94-3, Oct. 13, 1993).
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) secondary inventory management, focusing on the: (1) reported quantity and value of suspended inventory; (2) weaknesses in managing suspended inventory and their potential effect on logistics support costs and readiness; and (3) reasons why suspended inventory is not well managed. GAO noted that: (1) significant management weaknesses exist in the Air Force's management of inventory that it categorizes as suspended; (2) as a result, the Air Force is vulnerable to incurring unnecessary repair and storage costs and avoidable unit readiness problems; (3) this situation exists largely because management controls are not being implemented effectively or are nonexistent; (4) among DOD components, the Air Force reported the largest amount of suspended inventory--more than 70 percent of the $3.3 billion of all DOD suspended inventory; (5) in April 1997, the Air Force had 403,505 secondary items, valued at $2.4 billion, in a suspended status; (6) the Warner Robins Air Logistics Center (ALC) had the highest reported value of suspended inventory, accounting for about $1.3 billion (53 percent) of the Air Force's suspended inventory; (7) the vast majority of the suspended items reviewed are not being reviewed in a timely manner; (8) of the 1,820 suspended items reviewed with established standards, 97 percent failed to meet these standards; (9) about 64 percent of the inventory reviewed had been in a suspended category for over 1 year, and some had been suspended for over 6 years; (10) delays in determining the usability of suspended inventory can result in increased logistics support costs and readiness problems; (11) Warner Robins had over 2,000 unfilled customer demands (valued at about $53 million) while similar items were in suspension; (12) over 500 of these unfilled demands (valued at about $7 million) could have potentially been filled with these items; (13) two B-52H aircraft had not been 
fully operational for 175 days and 24 days because two $16,000 data entry keyboards were not available for issue in the Air Force supply system, yet two such keyboards had been maintained in a suspended status for two years; (14) management controls at Warner Robins over items categorized as suspended inventory have broken down and contributed to inventory being in a suspended status beyond established timeframes; (15) Air Force Materiel Command guidance does not comply with DOD policy or safeguard against lengthy suspensions, and Materiel Command and Warner Robins oversight of inventory management has generally been nonexistent; (16) Warner Robins lacks clearly defined procedures for, and sufficient emphasis on, controlling suspended inventory; and (17) further, management of suspended inventory has not been identified in Air Force assessments of internal controls as a significant weakness, as provided in the Federal Managers' Financial Integrity Act of 1982.
Background The Toxic Substances Control Act was enacted in 1976 to provide EPA with the authority, upon making certain determinations, to collect information about the hazards posed by chemical substances and to take action to control unreasonable risks by either preventing dangerous chemicals from making their way into use or placing restrictions on those already in commerce. TSCA authorizes EPA to review chemicals already in commerce (existing chemicals) and chemicals yet to enter commerce (new chemicals). EPA lists chemicals in commerce in the TSCA inventory. Of the over 83,000 chemicals currently in the TSCA inventory, about 62,000 were already in commerce when EPA began reviewing chemicals in 1979. Since then, over 21,000 new chemicals were added to the inventory and are now in use as existing chemicals. To assess risks, EPA examines a chemical’s toxicity or potential adverse effects and the extent of human and environmental exposure. TSCA generally requires the industry to notify EPA at least 90 days before producing or importing a new chemical. These notices contain information, such as the chemical’s molecular structure and intended uses, that EPA uses to evaluate the chemical’s potential risks. TSCA also authorizes EPA to promulgate rules to require manufacturers to perform tests on chemicals in certain circumstances or provide other data, such as production volumes, on existing chemicals. In addition, TSCA requires chemical companies to report to EPA any data that reasonably support a conclusion that a chemical presents a substantial risk. If EPA finds that a chemical’s risks are unreasonable, it can prohibit or limit its production, processing, distribution, use, and disposal or take other action, such as requiring warning labels on the substance.
While TSCA authorizes EPA to release chemical information obtained by the agency under the act, TSCA provides that certain information, such as data disclosing chemical processes, can be claimed as confidential business information by chemical manufacturers and processors. EPA generally must protect such information against public disclosure unless such disclosure is necessary to protect against an unreasonable risk of injury to health or the environment. Like the United States, the European Union has laws and regulations governing the manufacturing and use of chemicals. However, the EU has recently revised its chemical control policy through legislation known as Registration, Evaluation and Authorization of Chemicals (REACH). REACH went into effect in June 2007, but full implementation of all the provisions of REACH will be phased in over an 11-year period. Under REACH, authority exists to establish restrictions for any chemical that poses unacceptable risks and to require authorization for the use of chemicals identified as being of very high concern. These restrictions could include banning uses in certain products, banning uses by consumers, or even completely banning the chemical. Authorization will be granted if a manufacturer can demonstrate that the risks from a use of the chemical can be adequately controlled or that the socioeconomic benefits outweigh the risks and that there are no suitable alternatives. In addition, a key aspect of REACH is that it places the burden on manufacturers, importers, and downstream users to ensure that they manufacture, place on the market, or use such substances that do not adversely affect human health or the environment. Its provisions are underpinned by the precautionary principle. 
In general, the precautionary principle means that where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to reduce risks to human health and the environment. EPA Lacks Adequate Information on Potential Health and Environmental Risks of Toxic Chemicals While TSCA authorizes EPA to review existing chemicals, it generally provides no specific requirement, time frame, or methodology for doing so. Significantly, chemical companies are not required to develop and submit toxicity information to EPA on existing chemicals unless the agency finds that a chemical may present an unreasonable risk of injury to human health or the environment or is or will be produced in substantial quantities and that either (a) there is or may be significant or substantial human exposure to the chemical or (b) the chemical enters the environment in substantial quantities. EPA must also determine there are insufficient data to reasonably determine the effects on health or the environment and that testing is necessary to develop such data before it can require a company to test its chemicals for harmful effects. This structure places the burden on EPA to demonstrate a need for data on a chemical’s toxicity rather than on a company to demonstrate that a chemical is safe. As a result, EPA does not routinely assess the risks of the roughly 80,000 industrial chemicals in use. EPA has begun to rely on voluntary programs for data, such as the High Production Volume Challenge program, where companies voluntarily agree to provide EPA certain data on high-production volume chemicals. However, these programs may not provide EPA with complete data in a timely manner. For example, there are currently over 200 high-production- volume chemicals for which chemical companies have not voluntarily agreed to provide the minimal test data that EPA believes are needed to initially assess their risks. 
EPA officials told us that in cases where chemical companies do not voluntarily provide test data and health and safety studies in a complete and timely manner, requiring the testing of existing chemicals of concern—those chemicals for which some suspicion of harm exists—is the only practical way to ensure that the agency obtains the needed information. Furthermore, many additional chemicals are likely to become high production chemicals because the specific chemicals used in commerce are constantly changing, as are their production volumes. However, EPA officials told us that it is time-consuming, costly, and inefficient for the agency to use TSCA’s two-step process of (1) issuing rules under TSCA (which can take months or years to develop) to obtain exposure data or available test data that the chemical industry does not voluntarily provide to EPA and then (2) issuing additional rules requiring companies to perform specific tests necessary to ensure the safety of the chemicals tested. Officials also said that EPA’s authority under TSCA to issue rules requiring chemical companies to conduct tests on existing chemicals has been difficult to use because the agency must first make certain findings before it can require testing. Specifically, TSCA requires EPA to find that current data are insufficient, that testing is necessary, and that either (1) the chemical may present an unreasonable risk or (2) the chemical is or will be produced in substantial quantities and that there is or may be substantial human or environmental exposure to the chemical. Once EPA has made the required findings, the agency can issue a proposed rule for public comment, consider the comments it receives, and promulgate a final rule ordering chemical testing. EPA officials told us that finalizing rules can take from 2 to 10 years and require the expenditure of substantial resources.
Given the time and resources required, the agency has issued rules requiring testing for only about 200 chemicals. Because EPA has used its authority to issue rules to require testing so sparingly, it has not continued to maintain information on the cost of implementing these rules. However, in our October 1994 report on TSCA, we noted that EPA officials told us that issuing such a rule can cost hundreds of thousands of dollars. Given the difficulties involved in requiring testing, EPA officials do not believe that TSCA provides an effective means for testing a large number of existing chemicals. They believe that EPA could review substantially more chemicals in less time if it had the authority to require chemical companies to conduct testing and provide test data on chemicals once they reach a substantial production volume, assuming EPA had first determined that these data cannot be obtained without testing. We have long held a similar view based on our reviews involving TSCA. For example, in our June 2005 report, we recommended that the Congress consider giving EPA the authority to require chemical manufacturers and processors to develop test data based on substantial production volume and the necessity for testing. We continue to believe that providing EPA with more authority to obtain test data from companies would enhance the effectiveness of TSCA.

In contrast with TSCA’s provisions for obtaining information on chemicals, we found that REACH, the legislation through which the European Union has recently revised its chemical control policy, requires chemical companies to develop more information than TSCA does on the effects of chemicals on human health and the environment. REACH generally requires that chemical companies provide to, and in some cases develop for, government regulators information on chemicals’ effects on human health and the environment, while TSCA generally does not.
For example, under REACH, chemical companies provide information on chemicals’ properties and health and environmental effects for chemicals produced over specified volumes. REACH also provides regulators the general authority to require chemical companies to provide additional test data and other information when necessary to evaluate a chemical’s risk to human health and the environment. In contrast, TSCA places the burden on EPA to demonstrate that data on health and environmental effects are needed.

Regarding new chemicals, TSCA generally requires chemical companies to notify EPA of their intent to manufacture or import new chemicals and to provide any available test data. Yet EPA estimates that most premanufacture notices do not include test data of any type, and only about 15 percent include health or safety test data. Chemical companies do not have an incentive to conduct these tests because they may take over a year to complete, and some tests may cost hundreds of thousands of dollars. Because EPA generally does not have sufficient data on a chemical’s properties and effects when reviewing a new chemical, EPA uses models to compare new chemicals with chemicals with similar molecular structures for which test data on health and environmental effects are available. EPA bases its exposure estimates for new chemicals on information contained in premanufacture notices. However, the anticipated production volume, uses, exposure levels, and release estimates outlined in these notices generally do not have to be amended once manufacturing begins. That is, once EPA completes its review and production begins, chemical companies are not required under TSCA to limit the production of a chemical or its uses to those specified in the premanufacture notice or to submit another premanufacture notice if changes occur.
However, the potential risk of injury to human health or the environment may increase when chemical companies increase production levels or expand the uses of a chemical. TSCA addresses expanded uses of chemicals by authorizing EPA to promulgate a rule specifying that a particular use of a chemical would be a significant new use. However, EPA has infrequently issued such rules, which require manufacturers, importers, and processors of the chemical for the new use to notify EPA at least 90 days before beginning manufacturing or processing the chemical for that use.

An option that could make TSCA more effective would be to revise the act to require companies to test their chemicals and submit the results to EPA with their premanufacture notices. Currently, such a step is required only if EPA makes the necessary findings and promulgates a testing rule. A major drawback to testing is its cost to chemical companies, possibly resulting in a reduced willingness to perform chemical research and innovation. To ameliorate such costs, or to delay them until the new chemicals are produced in large enough quantity to offset the cost of testing, requirements for testing could be based on production volume. For example, in Canada and the European Union, testing requirements for low-volume chemicals are less extensive and complex than those for high-volume chemicals. Congress could give EPA, in addition to its current authorities under section 4 of TSCA, the authority to require chemical substance manufacturers and processors to develop test data based on, for example, substantial production volume and the necessity for testing. Another option would be to provide EPA with greater authority to require testing targeted to those areas in which EPA’s analysis models do not adequately predict toxicity.
For example, EPA could be authorized to require such testing if it finds that it cannot be confident of the results of its analysis (e.g., when it does not have sufficient toxicity data on chemicals with molecular structures similar to those of the new chemicals submitted by chemical companies). Under such an option, EPA could establish a minimal set of tests for new chemicals to be submitted at the time a chemical company submits a premanufacture notice for the chemical for EPA’s review. Additional and more complex and costly testing could be required as the new chemical’s potential risks increase, based on, for example, production or environmental release levels. According to some chemical companies, the cost of initial testing could be reduced by amending TSCA to require EPA to review new chemicals before they are marketed, rather than before they are manufactured. In this regard, according to EPA, about half of the premanufacture notices the agency receives from chemical companies are for new chemicals that, for various reasons, never enter the marketplace. Thus, requiring companies to conduct tests and submit the resulting test data only for chemicals that are actually marketed would be substantially less expensive than requiring them to test all new chemicals submitted for EPA’s review.

Likewise, TSCA’s chemical review provisions could be strengthened by requiring the systematic review of existing chemicals. In requiring that EPA review premanufacture notices within 90 days, TSCA established a firm requirement for reviewing new chemicals, but the act contains no similar requirement for existing chemicals unless EPA determines by rule that they are being put to a significant new use. TSCA could be amended to establish a time frame for the review of existing chemicals, putting existing chemicals on a more equal footing with new chemicals.
However, because of the large number of existing chemicals, EPA would need the flexibility to identify which chemicals should be given priority. TSCA could be amended to require individual chemical companies or the industry as a whole to compile and submit chemical data, such as that included in EPA’s High Production Volume (HPV) Challenge Program, for example, as a condition of manufacture or import above some specified volume.

TSCA’s Regulatory Framework Impedes EPA’s Efforts to Control Toxic Chemicals

While TSCA authorizes EPA to issue regulations that may, among other things, ban existing toxic chemicals or place limits on their production or use, the statutory requirements EPA must meet to do so present a legal threshold that has proven difficult for EPA to meet. Specifically, in order to regulate an existing chemical under section 6 of TSCA, EPA must find that there is a reasonable basis to conclude that the chemical presents or will present an unreasonable risk of injury to health or the environment. EPA officials believe that demonstrating an unreasonable risk is a more stringent requirement than demonstrating, for example, a significant risk, and that a finding of unreasonable risk requires an extensive cost-benefit analysis. In addition, before regulating a chemical under section 6, the EPA Administrator must consider and publish a statement regarding the effects of the chemical on human health and the magnitude of human exposure to the chemical; the effects of the chemical on the environment and the magnitude of the environment’s exposure to the chemical; the benefits of the chemical for various uses and the availability of substitutes for those uses; and the reasonably ascertainable economic consequences of the rule, after consideration of the effect on the national economy, small business, technological innovation, the environment, and public health.
Moreover, while TSCA offers EPA a range of control options when regulating existing chemicals—ban or restrict a chemical’s production, processing, distribution in commerce, or disposal or use, or require warning labels on the chemicals—EPA is required to choose the least burdensome requirement that will be adequately protective. For example, if EPA finds that it can adequately manage the unreasonable risk of a chemical by requiring chemical companies to place warning labels on the chemical, EPA may not ban or otherwise restrict the use of that chemical. EPA must also develop substantial evidence in the rulemaking record in order to withstand judicial review. Under TSCA, a reviewing court “shall hold unlawful and set aside” a rule “if the court finds that the rule is not supported by substantial evidence in the rulemaking record.” As several courts have noted, the substantial evidence standard is more rigorous than the arbitrary and capricious standard normally applied to rulemaking under the Administrative Procedure Act. Further, according to EPA officials, the economic costs of regulating a chemical are usually more easily documented than the risks of the chemical or the benefits associated with controlling those risks, and it is difficult to show substantial evidence that EPA is promulgating the least burdensome requirement.

EPA has had difficulty demonstrating that harmful chemicals pose an unreasonable risk and consequently should be banned or have limits placed on their production or use. In fact, since Congress passed TSCA nearly 33 years ago, EPA has issued regulations under the act to ban, limit, or restrict the production or use of only five existing chemicals or chemical classes. Significantly, in 1991, EPA’s 1989 regulation broadly banning asbestos was largely vacated by a federal appeals court decision that cited EPA’s failure to meet statutory requirements.
In contrast to the United States, the European Union, as well as a number of other countries, has banned all, or almost all, asbestos and asbestos-containing products. Asbestos, which refers to several minerals that typically separate into very tiny fibers, is a known human carcinogen that can cause lung cancer and other diseases if inhaled. Asbestos has been used widely in products such as fireproofing, thermal insulation, and friction products, including brake linings. EPA invested 10 years in exploring the need for the asbestos ban and in developing the regulation. Based on its review of over 100 studies of the health risks of asbestos as well as public comments on the proposed rule, EPA determined that asbestos is a potential carcinogen at all levels of exposure—that is, that it had no known safe exposure level. EPA’s 1989 rule under TSCA section 6 prohibited the future manufacture, importation, processing, and distribution of asbestos in almost all products. In response, some manufacturers of asbestos products filed suit against EPA arguing, in part, that the rule was not promulgated on the basis of substantial evidence regarding unreasonable risk. In October 1991, the U.S. Court of Appeals for the Fifth Circuit agreed with the chemical companies, concluding that EPA had failed to muster substantial evidence to justify its asbestos ban and returning parts of the rule to EPA for reconsideration. Specifically, the court concluded that EPA did not present sufficient evidence to justify the ban on asbestos because it did not consider all necessary evidence and failed to show that the control action it chose was the least burdensome regulation required to adequately protect human health or the environment. EPA had not calculated the risk levels for intermediate levels of regulation because it believed there was no asbestos exposure level for which the risk of injury or death was zero. 
As articulated by the court, the proper course of action for EPA, after an initial showing of product danger, would have been to consider each regulatory option listed in TSCA, beginning with the least burdensome, and the costs and benefits of each option. The court further criticized EPA’s ban of products for which no substitutes were currently available, stating that, in such cases, EPA “bears a tough burden” to demonstrate, as TSCA requires, that a ban is the least burdensome alternative. In addition, the court stated that in evaluating what risks are unreasonable, EPA must consider the costs of any proposed actions; moreover, the court noted that TSCA’s requirement that EPA impose the least burdensome regulation reinforces the view that EPA must balance the costs of its regulations against their benefits. Since completing the 1989 asbestos rule, EPA has issued only one regulation to ban or limit the production or use of an existing chemical (for hexavalent chromium in 1990). Further, EPA has not completed any actions to ban or limit toxic chemicals under section 6 since the court rejected its asbestos rule in 1991.

With EPA taking limited action to control toxic chemicals under TSCA, state and federal measures have established controls for some toxic chemicals. For example, a California statute enacted in 2007 prohibits the manufacture, sale, or distribution of certain toys and child care articles after January 1, 2009, if the products contain concentrations of phthalates exceeding 0.1 percent. In 2008, Congress took similar action. California has also enacted limits on formaldehyde in pressed wood. In response to a petition asking EPA to use section 6 of TSCA to adopt the California formaldehyde regulation, EPA recently issued an advance notice of proposed rulemaking suggesting several regulatory options the agency could pursue under its TSCA section 6 authority to limit exposure to formaldehyde.
However, because of the legal hurdles the agency would face in regulating formaldehyde under TSCA, some stakeholders have recommended that EPA pursue legislation to control formaldehyde. In our previous reports on TSCA, we identified a number of options that could strengthen EPA’s ability to regulate harmful chemicals under TSCA and enhance EPA’s ability to protect public health and the environment. Potential changes to TSCA include reducing the evidentiary burden that EPA must meet to take regulatory action under the act by amending the (1) unreasonable risk standard that EPA must meet to regulate existing chemicals under section 6 of TSCA, (2) standard for judicial review that currently requires a court to hold a TSCA rule unlawful and set it aside unless it is supported by substantial evidence in the rulemaking record, and (3) requirement that EPA choose the least burdensome regulatory requirement. We have previously recommended that the Congress amend TSCA to reduce the evidentiary burden that EPA must meet. Alternatively, the European Union’s recently enacted chemical control legislation, REACH, represents a regulatory model that differs from the TSCA framework in key ways. For example, REACH is based on the principle that chemical companies have the responsibility to demonstrate that the chemicals they place in the market, distribute, or use do not adversely affect human health or the environment, while TSCA generally requires EPA to demonstrate that chemicals pose risks to human health or the environment prior to controlling risks related to their production, distribution, or use. In addition, under REACH, chemical companies must obtain authorization to continue to use a chemical of very high concern, such as a chemical for which there is scientific evidence of probable serious health or environmental effects. 
Generally, to obtain such authorization, the chemical company needs to demonstrate that it can adequately control risks posed by the chemical, such as by requiring that workers wear safety equipment when working with the chemical or otherwise ensuring that the chemical is produced under safe conditions. If the chemical company cannot provide evidence of adequate control, authorization would be granted only if the socioeconomic advantages of a specific use of the chemical are greater than its potential risks, and if there are no suitable alternatives or technologies. This process substantially differs from TSCA’s section 6 requirements as discussed above.

EPA’s Ability to Share Information Under TSCA’s Confidential Business Information Provisions Is Limited

EPA’s ability to make publicly available the information that it collects under TSCA is limited. Chemical companies may claim some of the information they provide to EPA under TSCA as confidential business information. EPA is required under the act to protect trade secrets and privileged or confidential commercial or financial information against unauthorized disclosures, and this information generally cannot be shared with others, including state health and environmental officials and foreign governments. However, some state officials believe this information would be useful for informing and managing their environmental risk programs. Furthermore, while EPA believes that some claims of confidential business information may be unwarranted, challenging the claims is resource-intensive. EPA has not performed any recent studies of the appropriateness of confidentiality claims, but a 1992 EPA study indicated that problems with inappropriate claims were extensive. This study examined the extent to which companies made confidential business information claims, the validity of the claims, and the impact of inappropriate claims on the usefulness of TSCA data to the public.
While EPA may suspect that some chemical companies’ confidentiality claims are unwarranted, the agency does not have data on the number of inappropriate claims. According to EPA, about 95 percent of premanufacture notices contain some information that chemical companies claim as confidential. EPA officials also told us that the agency does not have the resources that would be needed to investigate and, as appropriate, challenge claims to determine the number that are inappropriate. Consequently, EPA focuses on investigating primarily those claims that it believes may be both inappropriate and among the most potentially important—that is, claims relating to health and safety studies performed by the chemical companies involving chemicals currently used in commerce. The EPA official responsible for initiating challenges to confidentiality claims told us that EPA challenges about 14 such claims each year and that the chemical companies withdraw nearly all of the claims challenged. Officials who have various responsibilities for protecting public health and the environment from the dangers posed by chemicals believe that having access to confidential TSCA information would allow them to examine information on chemical properties and processes that they currently do not possess and could enable them to better control the risks of potentially harmful chemicals. Likewise, the general public may also find information provided under TSCA useful. Individual citizens or community groups may have a specific interest in information on the risks of chemicals that are produced or used in nearby facilities. For example, neighborhood organizations can use such information to engage in dialogue with chemical companies about reducing chemical risks, preventing accidents, and limiting chemical exposures. 
While both TSCA and REACH have provisions to protect information claimed by chemical companies as confidential, REACH requires greater public disclosure of certain information, such as basic chemical properties. Furthermore, REACH places greater restrictions on the kinds of information chemical companies may claim as confidential. For example, REACH includes a provision for public access to basic chemical information, including brief profiles of hazardous properties and authorized uses. The European Union’s approach balances the public’s right to know with the need to keep certain information confidential in a variety of ways. For example, nonconfidential information will be published on the chemical agency’s Web site. REACH also includes a provision under which confidential information can generally be shared with government authorities of other countries or international organizations under an agreement between the parties provided that certain conditions are met.

In previous reports, we recommended that the Congress consider providing EPA additional authorities under TSCA to improve its ability to make more chemical information publicly available. For example, in our June 2005 report, we recommended that the Congress consider amending TSCA to authorize EPA to share with the states and foreign governments the confidential business information that chemical companies provide to EPA, subject to regulations to be established by EPA in consultation with the chemical industry and other interested parties that would set forth the procedures to be followed by all recipients of the information in order to protect the information from unauthorized disclosures. In our September 1994 report, we recommended that the Congress consider limiting the length of time for which information may be claimed as confidential without resubstantiation of the need for confidentiality. Mr.
Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Subcommittee may have.

Contacts and Acknowledgments

For further information about this testimony, please contact John Stephenson at (202) 512-3841 or stephensonj@gao.gov. Key contributors to this testimony were David Bennett, Antoinette Capaccio, Nancy Crothers, Christine Fishkin, Richard Johnson, and Ed Kratzer.

Related GAO Products

High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 22, 2009.

Chemical Regulation: Comparison of U.S. and Recently Enacted European Union Approaches to Protect against the Risks of Toxic Chemicals. GAO-07-825. Washington, D.C.: August 17, 2007.

Chemical Regulation: Actions are Needed to Improve the Effectiveness of EPA’s Chemical Review Program. GAO-06-1032T. Washington, D.C.: August 2, 2006.

Chemical Regulation: Options Exist to Improve EPA’s Ability to Assess Health Risks and Manage Its Chemical Review Program. GAO-05-458. Washington, D.C.: June 13, 2005.

Toxic Substances Control Act: Legislative Changes Could Make the Act More Effective. GAO/RCED-94-103. Washington, D.C.: September 26, 1994.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Congress passed the Toxic Substances Control Act (TSCA) in 1976, authorizing the Environmental Protection Agency (EPA) to obtain information on the risks of industrial chemicals and to control those that EPA determines pose an unreasonable risk. However, EPA does not have sufficient chemical assessment information to determine whether it should establish controls to limit public exposure to many chemicals that may pose substantial health risks. In reports on TSCA, GAO has recommended statutory changes to, among other things, provide EPA with additional authorities to obtain health and safety information from the chemical industry and to shift more of the burden to chemical companies for demonstrating the safety of their chemicals. The most important recommendations aimed at providing EPA with the information needed to support its assessments of industrial chemicals have not been implemented--a key factor leading GAO in January 2009 to add transforming EPA's process for assessing and controlling toxic chemicals to its list of high-risk areas warranting attention by Congress and the executive branch. This testimony, which is based on prior GAO work, addresses EPA's implementation of TSCA and options for (1) obtaining information on the risks posed by chemicals to human health and the environment, (2) controlling these risks, and (3) publicly disclosing information provided by chemical companies under TSCA. TSCA generally places the burden of obtaining data on existing chemicals on EPA, rather than on the companies that produce the chemicals. For example, the act requires EPA to demonstrate certain health or environmental risks before it can require companies to further test their chemicals. As a result, EPA does not routinely assess the risks of the roughly 80,000 industrial chemicals in use. 
Moreover, TSCA does not require chemical companies to test the approximately 700 new chemicals introduced into commerce annually for their toxicity, and companies generally do not voluntarily perform such testing. Further, the procedures EPA must follow in obtaining test data from companies can take years to complete. In contrast, the European Union's chemical control legislation generally places the burden on companies to provide health effects data on the chemicals they produce. Giving EPA more authority to obtain data from the companies producing chemicals, as GAO has in the past recommended that Congress consider, remains a viable option for improving the effectiveness of TSCA. While TSCA authorizes EPA to issue regulations that may, among other things, ban existing toxic chemicals or place limits on their production or use, the statutory requirements EPA must meet present a legal threshold that has proven difficult for EPA and discourages the agency from using these authorities. For example, EPA must demonstrate "unreasonable risk," which EPA believes requires it to conduct extensive cost-benefit analyses to ban or limit chemical production. Since 1976, EPA has issued regulations to control only five existing chemicals determined to present an unreasonable risk. Further, its 1989 regulation phasing out most uses of asbestos was vacated by a federal appeals court in 1991 because it was not based on "substantial evidence." In contrast, the European Union and a number of other countries have largely banned asbestos, a known human carcinogen that can cause lung cancer and other diseases. GAO has previously recommended that Congress amend TSCA to reduce the evidentiary burden EPA must meet to control toxic substances and continues to believe such change warrants consideration. EPA has a limited ability to provide the public with information on chemical production and risk because of TSCA's prohibitions on the disclosure of confidential business information. 
About 95 percent of the notices companies have provided to EPA on new chemicals contain some information claimed as confidential. Evaluating the appropriateness of confidentiality claims is time- and resource-intensive, and EPA does not challenge most claims. State environmental agencies and others have said that information claimed as confidential would help them in such activities as developing contingency plans to alert emergency response personnel to the presence of highly toxic substances at manufacturing facilities. The European Union's chemical control legislation generally provides greater public access to the chemical information it receives, and GAO has previously recommended that Congress consider providing EPA additional authorities to make more chemical information publicly available.
Background

FAA is responsible for ensuring safe, orderly, and efficient air travel in the national airspace system. NWS supports FAA by providing aviation-related forecasts and warnings at air traffic facilities across the country. Among other support and services, NWS provides four meteorologists at each of FAA’s 21 en route centers to provide on-site aviation weather services. This arrangement is defined and funded under an interagency agreement.

FAA’s Mission and Organizational Structure

FAA’s primary mission is to ensure safe, orderly, and efficient air travel in the national airspace system. FAA reported that, in 2007, air traffic in the national airspace system exceeded 46 million flights and 776 million passengers. In addition, at any one time, as many as 7,000 aircraft—both civilian and military—could be aloft over the United States. In 2004, FAA’s Air Traffic Organization was formed to, among other responsibilities, improve the provision of air traffic services. More than 33,000 employees within FAA’s Air Traffic Organization support the operations that help move aircraft through the national airspace system. The agency’s ability to fulfill its mission depends on the adequacy and reliability of its air traffic control systems, as well as weather forecasts made available by NWS and automated systems. These resources reside at, or are associated with, several types of facilities: air traffic control towers, terminal radar approach control facilities, air route traffic control centers (en route centers), and the Air Traffic Control System Command Center. The number and functions of these facilities are as follows:

517 air traffic control towers manage and control the airspace within about 5 miles of an airport. They control departures and landings, as well as ground operations on airport taxiways and runways.
170 terminal radar approach control facilities provide air traffic control services for airspace within approximately 40 miles of an airport and generally up to 10,000 feet above the airport, where en route centers’ control begins. Terminal controllers establish and maintain the sequence and separation of aircraft.

21 en route centers control planes over the United States—in transit and during approaches to some airports. Each center handles a different region of airspace. En route centers operate the computer suite that processes radar surveillance and flight planning data, reformats it for presentation purposes, and sends it to display equipment that is used by controllers to track aircraft. The centers control the switching of voice communications between aircraft and the center, as well as between the center and other air traffic control facilities. Three of these en route centers also control air traffic over the oceans.

The Air Traffic Control System Command Center manages the flow of air traffic within the United States. This facility regulates air traffic when weather, equipment, runway closures, or other conditions place stress on the national airspace system. In these instances, traffic management specialists at the command center take action to modify traffic demands in order to keep traffic within system capacity. See figure 1 for a visual summary of the facilities that control and manage air traffic over the United States.

NWS’s Mission and Organizational Structure

The mission of NWS—an agency within the Department of Commerce’s National Oceanic and Atmospheric Administration (NOAA)—is to provide weather, water, and climate forecasts and warnings for the United States, its territories, and its adjacent waters and oceans to protect life and property and to enhance the national economy. In addition, NWS is the official source of aviation- and marine-related weather forecasts and warnings, as well as warnings about life-threatening weather situations.
The coordinated activities of weather facilities throughout the United States allow NWS to deliver a broad spectrum of climate, weather, water, and space weather services in support of its mission. These facilities include

122 weather forecast offices located across the country that provide a wide variety of weather, water, and climate services for their local county warning areas, including advisories, warnings, and forecasts;

9 national prediction centers that provide nationwide computer modeling to all NWS field offices; and

21 center weather service units that are located at FAA en route centers across the nation and provide meteorological support to air traffic controllers.

NWS Provides Aviation Weather Services to FAA

As an official source of aviation weather forecasts and warnings, several NWS facilities provide aviation weather products and services to FAA and the aviation sector. These facilities include the Aviation Weather Center, weather forecast offices located across the country, and 21 center weather service units located at FAA en route centers across the country.

Aviation Weather Center

The Aviation Weather Center, located in Kansas City, Missouri, issues warnings, forecasts, and analyses of hazardous weather for aviation. Staffed by 65 personnel, the center develops warnings of hazardous weather for aircraft in flight and forecasts of weather conditions for the next 2 days that could affect both domestic and international aviation. The center also produces a Collaborative Convective Forecast Product, a graphical representation of expected convective occurrence at 2, 4, and 6 hours. This product is used by FAA to manage aviation traffic flow across the country. The Aviation Weather Center’s key products are described in table 1.
Weather Forecast Offices

NWS’s 122 weather forecast offices issue terminal area forecasts for approximately 625 locations every 6 hours or when conditions change. These forecasts describe the expected weather conditions significant to a given airport or terminal area and are used primarily by commercial and general aviation pilots.

Center Weather Service Units

NWS’s center weather service units are located at each of FAA’s 21 en route centers and operate 16 hours a day, 7 days a week (see fig. 2). Each center weather service unit usually consists of three meteorologists and a meteorologist-in-charge who provide strategic advice and aviation weather forecasts to FAA traffic management personnel. Governed by an interagency agreement, FAA currently reimburses NWS approximately $12 million annually for this support.

Center Weather Service Units: An Overview of Systems and Operations

The meteorologists at the center weather service units use a variety of systems to gather and analyze information compiled from NWS and FAA weather sensors. Key systems used to compile weather information include FAA’s Weather and Radar Processor, FAA’s Integrated Terminal Weather System, FAA’s Corridor Integrated Weather System, and a remote display of NWS’s Advanced Weather Interactive Processing System. Meteorologists at several center weather service units also use NWS’s National Center Advanced Weather Interactive Processing System. Table 2 provides a description of selected systems.

NWS meteorologists at the en route centers provide several products and services to the FAA staff, including meteorological impact statements, center weather advisories, periodic briefings, and on-demand consultations. These products and services are described in table 3.
In addition, center weather service unit meteorologists receive and disseminate pilot reports, provide input every 2 hours to the Aviation Weather Center’s creation of the Collaborative Convective Forecast Product, train FAA personnel on how to interpret weather information, and provide weather briefings to nearby terminal radar approach control facilities and air traffic control towers.

FAA Seeks to Improve Aviation Weather Services Provided at En Route Centers

In recent years, FAA has undertaken multiple initiatives to assess and improve the performance of the center weather service units. Studies conducted in 2003 and 2006 highlighted concerns with the lack of standardization of products and services at NWS’s center weather service units. To address these concerns, FAA sponsored studies that determined that weather data could be provided remotely using current technologies and that private sector vendors could provide these services. In 2005, FAA requested that NWS restructure its aviation weather services by consolidating its center weather service units to a smaller number of sites, reducing personnel costs, and providing products and services 24 hours a day, 7 days a week. NWS subsequently submitted a proposal for restructuring its services, but FAA declined the proposal, citing the need to refine its requirements. In December 2007, FAA issued revised requirements and asked NWS to respond with proposals defining the technical and cost implications of three operational concepts: (1) on-site services provided within the existing configuration of offices located at the 21 en route centers, (2) remote services provided by a reduced number of regional facilities, and (3) remote services provided by a single centralized facility. NWS responded with three proposals, but FAA rejected these proposals in September 2008, noting that while elements of each proposal had merit, the proposed costs were too high.
FAA requested that NWS revise its proposal to bring costs down, while stating a preference to move toward a single center weather service unit with a backup site.

As a separate initiative, NWS initiated an improvement program for the center weather service units in April 2008. The goal of the program was to improve the consistency of the units’ products and services. This program involved standardizing the technology, collaboration, and training for all 21 center weather service units and conducting site visits to evaluate each unit. NWS reported that it has completed its efforts to standardize the service units and plans to complete its site visits by September 2009. Table 4 provides a chronology of the agencies’ assessment and improvement efforts.

Prior GAO Report Identified Concerns with Center Weather Service Units; Recommended Steps to Improve Quality Assurance

In January 2008, we reported on concerns about inconsistencies in products and quality among center weather service units. We noted that while both NWS and FAA have responsibilities for assuring and controlling the quality of aviation weather observations, neither agency monitored the accuracy and quality of the aviation weather products provided at center weather service units. We recommended that NWS and FAA develop performance measures and metrics for the products and services to be provided by center weather service units, perform annual evaluations of aviation weather services provided at en route centers, and provide feedback to the center weather service units. The Department of Commerce agreed with our recommendations, and the Department of Transportation stated that FAA planned to revise its requirements and that these would establish performance measures and evaluation procedures.

Proposal to Consolidate Center Weather Service Units Is Under Consideration

NWS and FAA are considering plans to restructure the way aviation weather services are provided at en route centers.
After a 6-month delay, NWS sent FAA its latest proposal for restructuring the center weather service units in June 2009. NWS’s proposal involves consolidating 20 of the 21 existing center weather service units into 2 locations: one at the Aviation Weather Center in Kansas City, Missouri, and the other at a new National Centers for Environmental Prediction office planned for the Washington, D.C., metropolitan area of Maryland. The Missouri center is expected to handle the southern half of the United States, while the Maryland center is expected to handle the northern half. NWS plans for the two new units to be staffed 24 hours a day, 7 days a week, and to function as backup sites for each other. These new units would continue to use existing forecasting systems and tools to develop products and services. See figure 3 for a visual summary of the proposed consolidated center weather service unit locations.

While these new units would continue to use existing forecasting systems and tools, NWS has also proposed new products, services, and tools. Two new products are the Collaborative Weather Impact Product and the terminal radar approach control forecast. The former is expected to expand the Aviation Weather Center’s existing Collaborative Convective Forecast Product to include convection, turbulence, icing, wind, ceiling/visibility, and precipitation type/intensity. The latter is expected to extract data from the Collaborative Weather Impact Product and include precipitation, winds, and convection for the terminal area; the display will allow the forecaster to layer this information on air traffic management information such as jet routes. In addition, NWS plans to create a web portal to allow FAA and other users to access its advisories, forecasts, and products, as well as national, regional, and local weather briefings.
To support on-demand briefings at the new center weather service units, NWS plans to use instant messaging and online collaboration software. Given the reduced number of locations in the revised organizational structure, NWS also proposed reducing the number of personnel needed to support its operations from 84 to 50 full-time staff—a reduction of 34 positions. Specifically, the agency determined that it will require 20 staff members for each of the new center weather service units; 4 staff members at the Alaska unit; 5 additional forecasters at the Aviation Weather Center to help prepare the Collaborative Weather Impact Product; and a quality assurance manager at NWS headquarters. NWS anticipates the staff reductions will be achieved through scheduled retirements, resignations, and reassignments. However, the agency has identified the transition of its existing workforce to the new centers as a high-impact risk because staff may decline to move to the new locations.

NWS also proposed tentative time frames for transitioning to the new organizational structure over a 3-year period. During the first year after FAA accepts the proposal, NWS plans to develop a transition plan and conduct a 9-month demonstration of the concept in order to ensure that the new structure will not degrade its services. Agency officials estimate that initial operating capability would be achieved by the end of the second year after FAA approval and full operating capability by the end of the third year. NWS estimated the transition costs for this proposal at approximately $12.8 million, which includes approximately $3.3 million for the demonstration. In addition, NWS estimated that the annual recurring costs will be about 21 percent lower than current annual costs. For example, using 2009 prices, NWS estimated that the new structure would cost $9.7 million—about $2.6 million less than the current $12.3 million cost. See table 5 for the estimated costs for transitioning the centers.
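The staffing and cost figures in NWS's proposal can be cross-checked with simple arithmetic. The following sketch is illustrative only; the figures are taken from the proposal as reported above, and the category labels are our own shorthand:

```python
# Cross-check of the staffing and cost figures reported in NWS's proposal.
# Figures come from the text above; this is illustrative arithmetic only.

current_staff = 84  # staffing under the existing 21-unit structure

proposed_staff = {
    "two new consolidated units": 2 * 20,  # 20 staff at each of 2 new units
    "Alaska unit": 4,
    "Aviation Weather Center forecasters": 5,
    "quality assurance manager (NWS HQ)": 1,
}
total_proposed = sum(proposed_staff.values())
assert total_proposed == 50                  # the proposed 50 full-time staff
assert current_staff - total_proposed == 34  # the reported 34-position cut

# Annual recurring costs (2009 prices, in millions of dollars)
current_cost, proposed_cost = 12.3, 9.7
savings = current_cost - proposed_cost           # about $2.6 million
pct_reduction = savings / current_cost * 100     # about 21 percent
print(f"Savings: ${savings:.1f}M ({pct_reduction:.0f}% reduction)")
```

The reported totals are internally consistent: the proposed staffing categories sum to 50, and the $2.6 million saving is roughly 21 percent of the current $12.3 million annual cost.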
NWS and FAA Are Working to Establish a Baseline of Current Performance, but Are Not Assessing Key Measures

According to best practices in leading organizations, performance should be measured in order to evaluate the success or failure of programs. Performance measurement involves identifying performance goals and measures, establishing performance baselines, identifying targets for improving performance, and measuring progress against those targets. Having a clear understanding of an organization’s current performance—a baseline—is essential to determining whether new initiatives (like the proposed restructuring) result in improved or degraded products and services.

In January 2008, we reported that NWS and FAA lacked performance measures and a baseline of current performance for the center weather service units and recommended that they develop performance measures. In response to this recommendation, FAA established five performance standards for the center weather service units. FAA also recommended that NWS identify additional performance measures in its proposal for restructuring the center weather service units. While NWS subsequently identified eight additional performance measures in its proposal, FAA has not yet approved these measures. All 13 performance measures are listed in table 6.

NWS officials reported that they have historical data for one of the 13 performance measures—participation in the Collaborative Convective Forecast Product—and are working to obtain a baseline for three other performance measures. Specifically, in January 2009, NWS and FAA began evaluating how the center weather service units are performing and, as part of this initiative, are collecting data associated with organizational service provision, format consistency, and briefing service provision. As of June 2009, the agencies had completed evaluations of 13 service units and plan to complete evaluations for all 21 service units by September 2009.
However, the agencies have not established a baseline of performance for the 9 other performance measures. NWS officials reported that they are not collecting baseline information for a variety of reasons, including that the measures have not yet been approved by FAA and that selected measures involve products that have not yet been developed. A summary of the status of efforts to establish baselines and reasons for not establishing baselines is provided in table 7.

While 4 of the potential measures are tied to new products or services under the restructuring, the other 5 could be measured using current products and services. For example, accuracy and customer satisfaction are measures that could be tracked for current operations. NWS continually measures the accuracy of a range of weather products—including hurricane and tornado forecasts. Customer satisfaction could be determined by surveying the FAA managers who receive the aviation weather products. It is important to obtain an understanding of the current level of performance in these measures before beginning any efforts to restructure aviation weather services. Without an understanding of the current level of performance, NWS and FAA will not be able to measure the success or failure of any changes they make to center weather service unit operations. As a result, any changes to the current structure could degrade aviation operations and safety—and the agencies may not know it.

NWS and FAA Face Challenges in Efforts to Modify the Current Aviation Weather Structure

NWS and FAA face challenges in their efforts to modify the current aviation weather structure. These include challenges associated with (1) interagency collaboration, (2) defining requirements, and (3) aligning any changes with the Next Generation Air Transportation System (NextGen)—a long-term initiative to increase the efficiency of the national airspace system.
Specifically, the two agencies have had difficulties in interagency collaboration and requirements development, leading to an inability to reach agreement on a way forward. In addition, the restructuring proposals have not been aligned with the national strategic vision for the future air transportation system. Looking forward, if a proposal is accepted, the agencies could face three additional challenges in implementing it: (1) developing a feasible schedule that includes adequate time for stakeholder involvement, (2) undertaking a comprehensive demonstration to ensure no services are degraded, and (3) effectively reconfiguring the infrastructure and technologies to the new structure. Unless and until these challenges are addressed, the proposed restructuring of aviation weather services at en route centers has a reduced chance of success.

Interagency Collaboration

To date, FAA and NWS have encountered challenges in interagency collaboration. We have previously reported on key practices that can help enhance and sustain interagency collaboration. The practices generally consist of two or more agencies defining a common outcome, establishing joint strategies to achieve the outcome, agreeing upon agency roles and responsibilities, establishing compatible policies and procedures to operate across agency boundaries, and developing mechanisms to monitor, evaluate, and report the results of collaborative efforts. While NWS and FAA have established policies and procedures for operating across agencies through an interagency agreement and have initiated efforts to establish a baseline of performance for selected measures through their ongoing site evaluations, the agencies have not defined a common outcome, established joint strategies to achieve the outcome, or agreed upon agency responsibilities. Instead, the agencies have demonstrated an inability to work together to resolve issues and to accomplish meaningful change.
Specifically, since 2005, FAA has requested that NWS restructure its aviation weather services three times and has rejected NWS’s proposals twice. Further, after requesting extensions twice, NWS provided its latest proposal to FAA in June 2009. As a result, almost 4 years after FAA first initiated efforts to improve NWS aviation weather services, the agencies have not yet agreed on what needs to be changed and how. Table 8 lists key events. Until the agencies agree on a common outcome, establish joint strategies to achieve the outcome, and agree on respective agency responsibilities, they are unlikely to move forward in efforts to restructure weather services. Without sound interagency collaboration, both FAA and NWS will continue to spend time and resources proposing and rejecting options rather than implementing solutions.

Defining Requirements

The two agencies’ difficulties in determining how to proceed with their restructuring plans are due in part to a lack of stability in FAA’s requirements for center weather service units. According to best practices of leading organizations, requirements describe the functionality needed to meet user needs and perform as intended in the operational environment. A disciplined process for developing and managing requirements can help reduce the risks associated with developing or acquiring a system or product. FAA released its revised requirements in December 2007, and NWS subsequently provided proposals to meet these requirements. However, FAA rejected all three of NWS’s proposals in September 2008 on the basis that their costs were too high, even though cost was not specified in FAA’s requirements. NWS’s latest proposal is based on FAA’s December 2007 requirements as well as detailed discussions held between the two agencies in October 2008.
However, FAA has not revised its requirements to reflect the guidance it provided to NWS in those discussions, including reported guidance on handling the Alaska center and moving to the two-center approach. Without formal requirements developed prior to the development of the new products and services, FAA runs the risk of procuring products and services that do not fully meet its users’ needs or perform as intended. In addition, NWS risks continued investment in trying to create a product for FAA without clear information on what that agency wants.

Alignment with the Next Generation Air Transportation System

Neither FAA nor NWS has ensured that the restructuring of the center weather service units fits with the national vision for the Next Generation Air Transportation System (NextGen)—a long-term initiative to transition FAA from the current radar-based system to an aircraft-centered, satellite-based system. Our prior work on enterprise architectures shows that connecting strategic planning with program and system solutions can increase the chances that an organization’s operational and information technology (IT) environments will be configured to optimize mission performance. Our experience with federal agencies has shown that investing in IT without defining these investments in the context of a larger, strategic vision often results in systems that are duplicative, not well integrated, and unnecessarily costly to maintain and interface. The Joint Planning and Development Office is responsible for planning and coordinating NextGen. As part of this program, the Joint Planning and Development Office envisions restructuring air traffic facilities, including en route centers, across the country, as well as transitioning to new technologies. However, NWS and FAA efforts to restructure the center weather service units have not been aligned with the Joint Planning and Development Office’s vision for transforming air traffic control under the NextGen program.
Specifically, the chair of NextGen’s weather group stated that Joint Planning and Development Office officials have not evaluated NWS and FAA’s plans for restructuring the center weather service units, nor have they been asked to do so. Other groups within FAA are responsible for aligning the agency’s enterprise architecture with the NextGen vision through annual roadmaps that define near-term initiatives. However, recent roadmaps for aviation weather do not include any discussion of plans to restructure the center weather service units or the potential impact that such a change could have on aviation weather systems. Additionally, in its proposal, NWS stated that it followed FAA’s guidance to avoid tightly linking the transition schedule to NextGen’s expected initial operating capability in 2013, but recommended doing so since the specific role of the center weather service units in NextGen operations is unknown. Until the agencies ensure that changes to the center weather service units fit within the strategic-level and implementation plans for NextGen, any changes to the current structure could result in wasted efforts and resources.

Schedule Development

Looking forward, if a proposal is accepted, both agencies could also face challenges in developing a feasible schedule that includes adequate time for stakeholder involvement. NWS estimated a 3-year transition time frame from current operations to the two-center approach. FAA officials commented that they would like to have the two-center approach in place by 2012. However, NWS may have difficulty in meeting the transition time frames because activities that need to be conducted serially are planned concurrently within the 3-year schedule. For example, NWS may need to negotiate with its union before implementing changes that affect working conditions—such as moving operations from an en route center to a remote location.
NWS officials acknowledge the risk that these negotiations can be prolonged and sometimes take years to complete. If the proposal is accepted, it will be important for NWS to identify activities that must be conducted before others in order to build a feasible schedule.

Demonstrating No Degradation of Service

If a proposal is accepted, both agencies could face challenges in demonstrating that existing services will not be degraded during the restructuring. In its proposal, NWS identified preliminary plans to demonstrate the new operational concept before implementing it in order to ensure that there is no degradation of service. Key steps included establishing a detailed demonstration plan, conducting risk mitigation activities, and implementing a demonstration that is to last at least 9 months. NWS also proposed that the demonstration include an independent evaluation by a joint government and industry team, both before the demonstration, to determine whether it is adequate to validate the new concept of operations, and after, to determine its success. In addition, throughout the 9-month demonstration, NWS plans to have the independent team periodically provide feedback, recommendations, and corrective actions. However, as noted earlier, NWS has not yet defined all of the performance measures it will use to determine whether the prototype is successful. In its proposal, NWS stated that the agencies will begin to document performance metrics and develop and refine evaluation criteria during the demonstration. If NWS waits until the demonstration to define evaluation criteria, it may not have the baseline metrics needed to compare with the demonstration results. Without baseline metrics, NWS may be unable to determine whether the demonstration has degraded service.
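The underlying logic of a no-degradation demonstration can be sketched simply: each demonstration result is compared against a pre-established baseline, and any measure without a baseline simply cannot be evaluated. The metric names, values, and 5 percent tolerance below are hypothetical illustrations, not figures from NWS's proposal:

```python
# Illustrative sketch of why baseline metrics must exist before a demonstration.
# Metric names, values, and the 5% tolerance are hypothetical.

def assess_demonstration(baseline: dict, demo: dict, tolerance: float = 0.05) -> dict:
    """Compare demonstration results against baseline values.

    Flags a metric as degraded if it falls more than `tolerance` below
    its baseline; a metric with no baseline cannot be assessed at all.
    """
    findings = {}
    for metric, base_value in baseline.items():
        if metric not in demo:
            findings[metric] = "no demonstration data"
        elif demo[metric] < base_value * (1 - tolerance):
            findings[metric] = "degraded"
        else:
            findings[metric] = "no degradation"
    return findings

# Hypothetical baseline established before the demonstration
baseline = {"forecast_accuracy": 0.90, "on_demand_timeliness": 0.95}
# Hypothetical demonstration results
demo = {"forecast_accuracy": 0.82, "on_demand_timeliness": 0.96}
print(assess_demonstration(baseline, demo))
```

The point of the sketch is the comparison itself: without the `baseline` dictionary populated before the demonstration begins, there is nothing for the demonstration results to be measured against.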
Technology Transition

Both agencies could face challenges in effectively transitioning the infrastructure and technologies to the new consolidated structure, if a proposal is accepted. In its proposal, NWS planned to move its operations from 20 en route centers to two sites within 3 years. However, to do so, the agencies will need to modify their aviation weather systems and develop a communications infrastructure. Specifically, NWS and FAA will need to modify or acquire systems so that both current and new products can provide an expanded view of the country. Additionally, NWS will need to develop continuous two-way communications in lieu of having staff on-site at each en route center. NWS has recognized the infrastructure as a challenge and plans to mitigate the risk through continuous dialogue with FAA. However, if interagency collaboration does not improve, attempting to coordinate the systems and technology of the two agencies may prove difficult and further delay the schedule.

Implementation of Draft Recommendations Should Improve Interagency Approach to Aviation Weather

In our draft report, we are making recommendations to the Secretaries of Commerce and Transportation to improve the aviation weather products and services provided at FAA’s en route centers. Specifically, we are recommending that the Secretaries direct the NWS and FAA administrators, respectively, to improve their ability to measure improvements in the center weather service units by establishing and approving a set of performance measures for the center weather service units, and by immediately identifying the current level of performance for the five potential measures that could be identified under current operations (forecast accuracy, customer satisfaction, service delivery conformity, timeliness of on-demand services, and training completion) so that there will be a baseline from which to measure the impact of any proposed operational changes.
In addition, we are recommending that the Secretaries direct the NWS and FAA administrators to address specific challenges by improving interagency collaboration (defining a common outcome, establishing joint strategies to achieve the outcome, and agreeing upon each agency’s responsibilities); establishing and finalizing requirements for aviation weather services at en route centers; ensuring that any proposed organizational changes are aligned with NextGen initiatives by seeking a review by the Joint Planning and Development Office, which is responsible for developing the NextGen vision; and, before moving forward with any proposed operational changes, addressing implementation challenges by developing a feasible schedule that includes adequate time for stakeholder involvement, undertaking a comprehensive demonstration to ensure no services are degraded, and effectively transitioning the infrastructure and technologies to the new consolidated structure.

In summary, for several years, FAA and NWS have explored ways to improve the operations of the center weather service units by consolidating operations and providing remote services. Meanwhile, the two agencies have to make a decision on the interagency agreement, which will expire at the end of September 2009. If FAA and NWS are to create a new interagency agreement that incorporates key dates within the proposal, decisions on the proposal will have to be made quickly.

An important component of any effort to improve operations is a solid understanding of current performance. However, FAA and NWS are not working to identify the current level of performance for five measures that are applicable to current operations. Until the agencies have an understanding of the current level of performance, they will not be able to measure the success or failure of any changes to center weather service unit operations. As a result, any changes to the current structure could degrade aviation operations and safety—and the agencies may not know it.
If the agencies move forward with plans to restructure aviation weather services, they face significant challenges, including a poor record of interagency collaboration, undocumented requirements, and a lack of assurance that the plan fits in the broader vision of the Next Generation Air Transportation System. Moreover, efforts to implement the restructuring will require a feasible schedule, a comprehensive demonstration, and a solid plan for technology transition. Until these challenges are addressed, the proposed restructuring of aviation weather services at en route centers has little chance of success.

Mr. Chairman and members of the Subcommittee, this concludes my statement. I would be pleased to respond to any questions that you may have at this time.

GAO Contact and Staff Acknowledgments

If you have any questions on matters discussed in this testimony, please contact David A. Powner at (202) 512-9286 or at pownerd@gao.gov. Other key contributors to this testimony include Colleen Phillips, Assistant Director; Gerard Aflague; Kate Agatone; Neil Doherty; Rebecca Eyler; and Jessica Waselkow.

Attachment 1: Scope and Methodology

For the draft report on which this testimony is based, we determined the status of NWS’s plans for restructuring the center weather service units by reviewing the existing interagency agreement, FAA’s proposed requirements, and NWS’s draft and final proposals for addressing FAA’s requirements. We analyzed NWS’s draft transition schedules, cost proposals, and evaluation plans. We also interviewed NWS and FAA officials to obtain clarifications on these plans. To evaluate the agencies’ efforts to establish a baseline of the current performance provided by center weather service units, we reviewed documentation including FAA’s performance standards, the current interagency agreement, NWS’s restructuring proposals and Quality Assurance Surveillance Plan, and the agencies’ plans for evaluating the centers.
We compared the agencies’ plans for creating a baseline of current performance with best practices for performance management by the Department of the Navy and General Services Administration. We also interviewed NWS and FAA officials involved in establishing a baseline of current performance provided by center weather service units. To evaluate challenges to restructuring the center weather service units, we reviewed agency documentation, including FAA’s requirements document and NWS’s proposals to restructure the center weather service units. We also reviewed planning documents for the Next Generation Air Transportation System. We compared these documents with best practices for system development and requirements management from the Capability Maturity Model® Integration for Development; and with GAO’s best practices in interagency collaboration and architecture planning. In addition, we interviewed NWS, FAA, and Joint Planning and Development Office officials regarding challenges to restructuring the center weather service units. We performed our work at FAA and NWS headquarters offices, and FAA’s Air Traffic Control System Command Center in the Washington, D.C., metropolitan area. We conducted this performance audit from August 2008 to July 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for findings and conclusions based on our audit objectives. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
The National Weather Service's (NWS) weather products are a vital component of the Federal Aviation Administration's (FAA) air traffic control system. In addition to providing aviation weather products developed at its own facilities, NWS also provides staff onsite at each of FAA's en route centers--the facilities that control high-altitude flight outside the airport tower and terminal areas. Over the last few years, FAA and NWS have been exploring options for enhancing the efficiency of the aviation weather services provided at en route centers. GAO was asked to summarize its draft report that (1) determines the status and plans of efforts to restructure the center weather service units, (2) evaluates efforts to establish a baseline of the current performance provided by these units, and (3) evaluates challenges to restructuring them. NWS and FAA are considering plans to restructure the way aviation weather services are provided at en route centers, but it is not yet clear whether and how these changes will be implemented. In 2005, FAA requested that NWS restructure its services by consolidating operations to a smaller number of sites, reducing personnel costs, and providing services 24 hours a day, seven days a week. NWS developed two successive proposals, both of which were rejected by FAA--most recently because the costs were too high. FAA subsequently requested that NWS develop another proposal by late December 2008. In response, NWS developed a third proposal that involves consolidating 20 of 21 existing center weather service units into 2 locations. NWS sent this proposal to FAA in early June 2009. FAA officials stated that they plan to respond to NWS's proposal in early August 2009. In response to GAO's prior concerns that NWS and FAA lacked performance measures and a baseline of current performance, the agencies have agreed on five measures and NWS has proposed eight others. 
In addition, the agencies initiated efforts to establish a performance baseline for 4 of 13 potential performance measures. However, the agencies have not established baseline performance for the other 9 measures. NWS officials stated that they are not collecting baseline information on the 9 measures for a variety of reasons, including that some of the measures have not yet been approved by FAA, and that selected measures involve products that have not yet been developed. While 4 of the 9 measures are tied to new products or services that are to be developed if NWS's latest restructuring proposal is accepted, the other 5 could be measured in the current operational environment. For example, both forecast accuracy and customer satisfaction measures are applicable to current operations. It is important to obtain an understanding of the current level of performance in these measures before beginning any efforts to restructure aviation weather services. Without an understanding of the current level of performance, NWS and FAA may not be able to measure the success of any changes they make to the center weather service unit operations. As a result, any changes to the current structure could degrade aviation operations and safety--and the agencies may not know it. NWS and FAA face challenges in their efforts to improve the current aviation weather structure. These include challenges associated with (1) interagency collaboration, (2) defining FAA's requirements, and (3) aligning any changes with the Next Generation Air Transportation System (NextGen)--a long-term initiative to increase the efficiency of the national airspace system. 
If the restructuring proposal is accepted, the agencies face three additional challenges in implementing it: (1) developing a feasible schedule that includes adequate time for stakeholder involvement, (2) undertaking a comprehensive demonstration to ensure no services are degraded, and (3) effectively reconfiguring the infrastructure and technologies for the new structure. Unless and until these challenges are addressed, the proposed restructuring of aviation weather services at en route centers has a reduced chance of success.
Background Encouraging Competition and Launch Vehicle Development to Assure Access to Space The U.S. government has sought to help develop a competitive launch industry from which it can acquire launch services in order to lower the price of space launch and assure its access to space. The Air Force’s Evolved Expendable Launch Vehicle (EELV) program is responsible for acquiring intermediate to heavy U.S. national security space launches for DOD and the intelligence community and has recently implemented a competitive strategy for acquiring launch services by allowing certified commercial providers to compete for certain national security launch opportunities. DOD has historically relied on the EELV program’s intermediate and heavy launch vehicles to place its national security satellites into desired orbits. These launch missions have typically taken 2-3 years from initial order to launch date. Additionally, DOD has been striving for over 10 years to develop launch systems that can deliver smaller payloads into orbit more quickly. For example, as we reported in 2015, the Air Force’s Operationally Responsive Space office is one of several efforts underway to develop or demonstrate low-cost responsive launch capabilities for small to medium satellites. The Operationally Responsive Space office uses ICBM motor-based launch vehicles, among others, to deliver quick-response space-based capabilities with small satellites. NASA has also sought to acquire launch services from multiple providers to reduce launch costs and meet the needs of a diverse set of civil objectives. For example, in 2006, NASA started the Commercial Orbital Transportation Services program and provided several commercial companies funding under Space Act agreements to help develop and demonstrate cargo transport capabilities using commercial space transportation systems. NASA now uses commercial resupply services contracts to deliver cargo to the International Space Station. 
Additionally, in an effort to facilitate private investment in small launch vehicles, in 2015, NASA awarded multiple Venture Class Launch Service contracts to provide access to low-Earth orbit for small satellites. This effort is to demonstrate dedicated launch capabilities for small payloads that NASA anticipates it will require on a regular basis for future missions. The U.S. government has also striven to encourage the growth of the commercial space launch industry by limiting government activities that might interfere with market forces. The 1994 National Space Transportation Policy and Fact Sheet stated that U.S. government agencies shall purchase commercially available U.S. space transportation products and services to the fullest extent feasible and generally prohibited the use of surplus ICBM motors for launch unless certain conditions were met. According to Department of Commerce officials, this policy continued the generally agreed-upon practice that has been in place since the early 1990s to prohibit the use of surplus ICBM motors for private purposes. The Commercial Space Act of 1998 prohibited the commercial use of surplus ICBM motors and directed government agencies to purchase and use commercial services with only limited exceptions. Further, the 2010 National Space Policy directs the government to develop government launch systems only when doing so is necessary to assure and sustain reliable and efficient access to space and there is no U.S. commercial system available. The 2010 policy also instructs agencies to refrain from conducting U.S. government space activities that preclude, discourage, or compete with U.S. commercial space activities, unless required by national security or public safety. The 2013 National Space Transportation Policy directs agencies to foster and cultivate innovation and entrepreneurship in the U.S. commercial space transportation sector.
Members of the Federal Aviation Administration Commercial Space Transportation Advisory Committee and other industry stakeholders have said that eliminating the prohibition on the commercial use of surplus ICBM motors may harm the commercial launch industry. Department of Transportation officials said that using surplus ICBM motors is not innovative and that these motors are old technology. In response to a House Armed Services Committee report accompanying a bill for the NDAA for Fiscal Year 2017 and an associated House member request for detailed information on the costs and benefits, the Air Force is also studying the potential effects of changing law and policy to allow surplus ICBM motors to be used on commercial launches and expects to complete its study later this year. According to Air Force and DOD officials, if law and policy are changed to allow surplus ICBM motors to be used for commercial space launches, DOD plans to use the results of the Air Force study to make a recommendation as to how sales to commercial providers should be implemented. Surplus Intercontinental Ballistic Missile Motors According to the Air Force, it spends, on average, approximately $17 million each year on its stockpile of about 720 surplus ICBM motors. This amount includes the cost of storing the motors in bunkers, maintaining facilities and equipment, ensuring the motors are aging and being stored safely, and, as the budget allows, destroying and disposing of motors that can no longer be safely used. The majority of the stockpile is made up of surplus Peacekeeper and Minuteman II motors. According to the Air Force, these motors were designed to be used for active ICBMs but were placed into storage starting in the mid-1990s to early 2000s when they were no longer needed for nuclear deterrence.
Newer systems came online, and arms control agreements with Russia (formerly the Soviet Union), such as the Strategic Arms Reduction Treaty of 1991 and the New START Treaty of 2010, initiated the reduction of strategic force assets, including ICBMs, according to Department of State officials. The Air Force’s Rocket Systems Launch Program (RSLP) oversees the storage, maintenance, testing, and preparation of motors at Camp Navajo, Arizona, and Hill Air Force Base, Utah, to support government launches. The program office is to conduct these activities in accordance with existing nuclear non-proliferation rules, including a requirement to maintain control over the motors at all times. Figure 1 is a notional depiction of the process to transfer a set of motors from a storage bunker to the launch site. A set of Peacekeeper or Minuteman II motors provides lower stage propulsion for ICBM motor-based launch vehicles. Peacekeeper motors are typically grouped in sets of three, while Minuteman II motors are grouped in sets of two. To prepare motors for launch, the RSLP office first selects a suitable set of motors based on the type of launch vehicle and the payload capability needs of the mission. Officials then use specialized equipment and facilities at Hill Air Force Base to refurbish the motors for flight before transporting them to the launch site. The launch service provider then integrates the higher stages, control section, fairing, and payload onto the motor stack. According to RSLP officials, they also monitor the stockpile for motors that can no longer be used for launch because of cracks in the propellant, discoloration and smell, or other issues related to age and condition.
The RSLP office destroys non-operational motors, as its budget allows after it has funded all other motor-related activities, either through static fire, by exploding the motors, or by contracting to have the propellant—ammonium perchlorate—removed and the motor casings buried or cut up and sold for scrap. Since 2011, the RSLP office has destroyed 369 Minuteman II motors, but it has not yet destroyed any Peacekeeper motors because they are relatively new motors and have not yet displayed signs of aging. Current ICBM Motor-based Launch Vehicles The main government customers of ICBM motor-based launch vehicles are within DOD—including the Air Force, Defense Advanced Research Projects Agency, Missile Defense Agency, and National Reconnaissance Office. Currently, Orbital ATK is the sole U.S. provider of ICBM motor-based space launch vehicles, of which there are two basic configurations: The Minotaur I, first launched in 2000, uses two surplus Minuteman II motors for its first and second propulsion stages and uses two or three new solid rocket motors for higher-stage propulsion. This vehicle is capable of launching small payloads, such as cube satellites and other technology development efforts that weigh up to about 500 kilograms (kg). Since 2000, there have been 11 space launches using this vehicle. The Minotaur IV, first launched in 2010, uses three Peacekeeper motors for first, second, and third stage propulsion and one to two new solid rocket motors for higher-stage propulsion. This vehicle is capable of launching medium payloads, such as space surveillance and other earth observation satellites that range from about 500 kg to 2,000 kg. Since 2010, there have been 3 space launches using this vehicle. Figure 2 shows the maximum payload weight capabilities and launch costs for these two launch vehicles.
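The payload classes above lend themselves to a simple lookup. The sketch below encodes the approximate payload ranges described for the two Minotaur configurations; the data structure and selection function are illustrative assumptions, not an official Air Force selection process.

```python
# Illustrative sketch (not an official process): matching a payload to the
# ICBM motor-based launch vehicles described above, using the approximate
# payload capabilities from the text.

MINOTAUR_VEHICLES = [
    # (vehicle, surplus motors used, min payload kg, max payload kg)
    ("Minotaur I", "two Minuteman II motors", 0, 500),
    ("Minotaur IV", "three Peacekeeper motors", 500, 2000),
]

def candidate_vehicles(payload_kg):
    """Return the vehicles whose payload range covers the given mass."""
    return [vehicle for vehicle, motors, lo, hi in MINOTAUR_VEHICLES
            if lo <= payload_kg <= hi]

print(candidate_vehicles(150))   # small payload -> ['Minotaur I']
print(candidate_vehicles(1200))  # medium payload -> ['Minotaur IV']
```

A payload of exactly 500 kg sits at the boundary of both ranges and would return both vehicles, mirroring the overlap in the capability figures quoted above.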
Global Launch Vehicles, Commercial Demand, and Launch Prices Well-established and new, start-up satellite companies are expected to drive demand for launch in the low- and medium-weight payload classes over the next 10-15 years, according to the FAA. For example, according to the FAA’s 2017 Annual Compendium on Commercial Space Transportation, the average number of international commercial launches going to non-geosynchronous orbits, including the subset of low Earth orbits expected to be served by new commercial small launch vehicles, will grow from 7 launches per year over the last 10 years to 21 launches per year for the next 10 years. While about 70 percent of those launches will launch on medium to heavy capability launch vehicles, according to the FAA, companies launching new Earth observation constellations plan to use commercial small launch vehicles currently in development to launch about 30 percent of forecasted launches. In 2016 there were 85 global satellite launches, 21 of which were considered commercial and, of these commercial launches, U.S. providers performed 11. Currently, there is only one U.S.-based commercial launch vehicle—Orbital ATK’s Pegasus XL—that can provide dedicated launch for small satellites. Other options for access to space for small payloads include launching as a secondary payload on a larger launch vehicle, aggregating them with other small payloads on a larger launch vehicle, or launching aboard foreign launch vehicles. Existing global launch vehicles and prices are presented in appendix II. In recent years, a number of companies have begun to develop small launch vehicles to support what they expect to be an increase in demand for small, dedicated commercial payload launches. According to Department of Commerce officials, the emergence of new small launch vehicles is mainly the result of investment and technology development in the United States. 
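As a rough check of the forecast figures above, the 21-launch annual average and the roughly 30 percent share expected on small vehicles work out to about six dedicated small-vehicle launches per year:

```python
# Back-of-envelope arithmetic from the FAA forecast figures cited above.
forecast_launches_per_year = 21   # commercial non-geosynchronous launches per year
small_vehicle_share = 0.30        # share expected on new small launch vehicles

small_vehicle_launches = forecast_launches_per_year * small_vehicle_share
print(small_vehicle_launches)  # about 6 launches per year
```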
Entrepreneurs have announced plans to launch multi-satellite broadband constellations to provide Internet access in remote regions of the Earth. These constellations consist of hundreds to thousands of satellites, in some cases weighing between 100 and 200 kilograms each. Although the FAA expects nearly 1,500 commercial remote sensing satellites to drive launch demand through 2026, in most cases they will be launched as secondary payloads. According to the FAA, however, this may change as new small launch vehicles become available over the next few years. Several emerging providers began conducting test flights in 2017 and some plan to begin carrying commercial payloads in 2018. These providers plan to launch individual small satellites at a high rate in order to keep prices low. However, launch demand is historically difficult to predict in the non-geosynchronous orbit market, as many of the companies driving this forecasted demand are new and their business plans have yet to be proven. Options for Determining the Selling Prices for Surplus ICBM Motors Several methodologies could be used to determine the sales prices of surplus ICBM motors. The price at which surplus ICBM motors are sold is an important factor for determining the extent of potential benefits and challenges of allowing the motors to be used for commercial launch. The pricing methodology that we used in our analysis was to determine breakeven prices—that is, prices at which DOD could sell the motors and “breakeven” with, or recuperate, the costs it incurs to transfer them, while discounting from that price any potential savings DOD could achieve by avoiding future storage and disposal costs. However, the breakeven price is just one of several methods that DOD could use to set the sales prices of surplus ICBM motors.
For example, industry stakeholders, in response to the Air Force’s request for information, provided written input stating that the sales prices should be based on methods for determining the value of an asset in the FASAB Handbook, such as the fair market values or prices of comparable new commercially available motors. Breakeven Price Analysis The breakeven price is the price at which DOD could sell surplus ICBM motors, recuperate the costs to transfer the motors for launch, and account for savings from avoiding the storage and disposal costs it would have incurred had the motors not been used for space launch. Below this price, DOD would not recuperate its costs, and, above this price, DOD would potentially achieve savings. We estimated that DOD could sell three Peacekeeper motors or two Minuteman II motors—the numbers required for one launch—at a breakeven price of about $8.36 million or $3.96 million, respectively. Figure 3 shows the estimated price points per motor set at which DOD could lose money, break even with its motor transfer costs, or achieve savings, based on our launch demand, storage cost, and disposal plan assumptions. Table 1 provides a more detailed breakdown of how we calculated the breakeven prices. The RSLP office incurs costs each year to store, dispose of, and transfer motors for launch, which it could recuperate if it sold motors at a breakeven price: Motor Transfer: We found that the cost of transferring motors (transfer costs) ranges from approximately $4.1 million to $8.5 million, depending on the motors used and other mission details. According to the RSLP office, transfer costs vary from mission to mission depending on the number and type of motors required, the location of the launch site, and other factors. Transfer costs include ensuring that the motors will be reliable for launch, refurbishment to ensure flightworthiness, transportation between the motor storage facility and the launch site, and other mission costs.
According to DOD, it spends approximately $4.5 million per Peacekeeper motor set and $1.7 million per Minuteman II motor set to ensure that the motors will be reliable during flight—referred to as Reliability of Flight. This involves static fire testing to assess how well the motors are aging. Additionally, DOD is responsible for restoring motors to flightworthy status (known as “refurbishment”) before they are transferred to the launch site. Refurbishment costs per launch are roughly $840,000 for a set of three Peacekeeper motors and $460,000 for a set of two Minuteman II motors. Transportation costs are approximately $1.3 million per Peacekeeper motor set and approximately $130,000 per Minuteman II motor set. Other mission-specific costs included in the cost to transfer the motors cover activities such as mission assurance, program management, and facilities and civilian salaries at Hill Air Force Base. Motor Storage and Disposal: We found DOD spends about $5.8 million to store Minuteman II motors and $2 million to store Peacekeeper motors each year. Motors are stored under environmentally controlled conditions at Hill Air Force Base in Utah and Camp Navajo in Arizona until required for use. It costs DOD approximately $317,000 to dispose of a set of Peacekeeper motors and approximately $105,000 to dispose of a set of Minuteman II motors. The Air Force avoids annual storage and disposal costs over time as the motor stockpile is depleted, at about $183,000 per Peacekeeper motor set and about $116,000 per Minuteman II motor set; these avoided costs are reflected in the breakeven price as savings. The number of expected future launches affects potential storage and disposal cost savings to DOD because if more motors are used, whether through launch or disposal, the bunkers in which the motors are stored are emptied more quickly.
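The cost components above can be combined into the breakeven formula the analysis describes: the price that recoups a mission's transfer costs, less the storage and disposal costs DOD avoids by drawing down the stockpile. The sketch below uses the per-set figures quoted in the text; the "other" mission costs are not broken out in the text and are back-solved assumptions chosen so that the totals reproduce the reported $8.36 million and $3.96 million breakeven prices.

```python
# Hedged sketch of the breakeven arithmetic (all figures in $ millions per
# motor set, i.e., per launch). Reliability, refurbishment, transportation,
# and avoided-savings figures come from the text; the "other" mission cost
# values are back-solved assumptions, not reported figures.

def breakeven_price(reliability, refurb, transport, other, avoided_savings):
    """Breakeven = total transfer cost minus avoided storage/disposal costs."""
    transfer_cost = reliability + refurb + transport + other
    return transfer_cost - avoided_savings

# Set of three Peacekeeper motors.
peacekeeper = breakeven_price(4.5, 0.84, 1.30, other=1.903, avoided_savings=0.183)
# Set of two Minuteman II motors.
minuteman_ii = breakeven_price(1.7, 0.46, 0.13, other=1.786, avoided_savings=0.116)

print(f"Peacekeeper set:  ${peacekeeper:.2f}M")   # about $8.36M
print(f"Minuteman II set: ${minuteman_ii:.2f}M")  # about $3.96M
```

Below the computed price DOD would not recoup its costs; above it, DOD would potentially achieve savings, matching the loss, breakeven, and savings bands described for figure 3.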
We estimated that under the current government-only scenario, the stockpile of Peacekeeper motors would be depleted in 13-22 years and the Minuteman II motors in 9-10 years. Under a scenario in which motors are also used for commercial launches, Peacekeeper motors may be depleted more quickly—in 11-13 years. The stockpile of Minuteman II motors would be depleted in 9 years under a commercial scenario as well because the number of motors that are disposed of each year is relatively high compared to the number of motors that could be used for government and commercial launches. Our breakeven price calculations rely on assumptions and estimates—based on information we received from the RSLP office—for factors such as future launch demand, transfer costs, disposal costs and schedules, and the specific transfer activities for which the Air Force would be responsible. Differing assumptions would likely cause the breakeven prices to change. For example, if launch demand is much lower or higher than current assumptions, consequent changes to storage and disposal costs would affect savings estimates. Additionally, if some motor transfer activities the Air Force currently conducts, such as transporting the motors to the launch site, are instead performed by the companies that purchase the motors, breakeven prices could decrease. As the Air Force seeks to complete its study and determine a price at which it could sell surplus ICBM motors, the assumptions it makes about costs and future launch demand could significantly change the price points of the motors. Options for Pricing Surplus ICBM Motors Varied across Industry Stakeholders In addition to the breakeven pricing methodology, there are other options available for determining sales prices for surplus ICBM motors.
In response to the Air Force’s August 2016 request for information, industry stakeholders proposed that the Air Force use various options for valuing the motors, which resulted in prices ranging from $1.3 million for a set of Peacekeeper motors to $11.2 million for a Peacekeeper first stage motor. These options included basing the price on the comparable selling price of existing equivalent motors and using the price at which current buyers are willing to purchase the motors. Five respondents out of 16 proposed prices for Peacekeeper motors. For example, one respondent suggested $11.2 million for a first stage Peacekeeper motor, stating that the price of Peacekeeper motors should be based on the selling prices of existing, comparable new solid rocket motors. This respondent pointed to a provision in the FASAB guidance that states that the fair value of an asset or liability may be measured at the market value in established markets, such as for those of certain investments or debt securities, or it may be estimated when there is no active market. As such, this respondent stated in both its response and during an interview with us that there is an active market for the Castor 120 motor—the commercial equivalent of the Peacekeeper ICBM motor—that can be used to value the Peacekeeper motor. This respondent estimated the current fair market value for a Castor 120 motor at about $11.2 million as of August 2016. One respondent said that the selling price of the motors should reflect some or all of the historical storage and disposal costs incurred from the time the motors were determined to no longer be needed for active nuclear deterrence until the present day. One respondent suggested that DOD should sell a set of three Peacekeeper motors for about $1.3 million.
According to this respondent, the motors are not an asset that should be sold at a high price, but rather a liability that requires funds for storage, security, and safe handling and that ultimately requires an expensive disposal process. This respondent stated that the fair value of the motors is the price at which current buyers are willing to purchase the motors. However, officials at the Department of Commerce with expertise on international trade and the commercial space launch industry stated that selling a motor to a purchaser for less than a price that covers the motor’s costs and expenses could result in a subsidy to the purchaser, which may violate U.S. international trade obligations. Potential Benefits and Challenges of Selling Surplus ICBM Motors for Commercial Launch Based on our analysis of the potential effects of selling surplus ICBM motors to launch providers for commercial launches, there are both benefits and challenges. Allowing U.S. companies to use surplus ICBM motors for commercial space launch could benefit the U.S. commercial launch sector by increasing the global competitiveness of U.S. launch services, providing U.S.-based launch customers more launch options and greater flexibility, and increasing demand for newly manufactured solid rocket motors. DOD may also benefit through increased opportunities to utilize its personnel with specialized motor handling skills and offset some of the fixed costs to maintain the safety of the motors if commercial launches drive an overall increase in launch demand. However, the commercial use of surplus ICBM motors could create challenges, such as discouraging private investment and disrupting competition among emerging commercial space launch companies. Moreover, DOD may encounter challenges balancing an expanded workload with its other essential missions and tasks and determining the price of the motors.
Furthermore, additional analysis is important to address several unknowns, such as whether a potential commercial provider could take control of the motors earlier in the process or how, if at all, the RSLP office may be affected if demand for surplus ICBM motor-based launch vehicles is different than expected. Because the use of surplus ICBM motors for commercial launches may create competition for existing and emerging commercial space launch providers, the degree to which these benefits and challenges are realized largely depends on the price at which DOD sells the motors. Specifically, the effects are likely to be more significant if DOD sells the motors at a price that allows one or a small number of providers to offer launch services at prices lower than those of existing and emerging commercial launch providers. Commercial Use of ICBM Motors Has Several Potential Benefits Increased Launch Capability, Competitiveness, and Flexibility Allowing the use of surplus ICBM motors for commercial space launch could increase domestic launch capability to low-Earth orbit and the global competitiveness of U.S. launch companies. Additionally, ICBM motor-based vehicles could provide launch customers with more options from which to choose launch services as well as greater schedule flexibility. Since 2014, U.S. companies have captured a greater share of the global launch market through, in part, the introduction of new large payload launch vehicles. Currently, however, domestic commercially available launch vehicles in the small to medium payload class cost more per kilogram of payload than similar options from foreign launch providers. Per kilogram launch prices, as shown in table 2, are estimates based on publicly available information. According to Department of Transportation officials, the price a launch customer ultimately pays depends on a range of factors, including mission objectives and market conditions.
Launch prices may also differ between primary and secondary payloads. According to DOD, since fiscal year 2006, foreign launch providers have aggregated about 100 U.S.-manufactured payloads that could have been launched by a U.S. launch vehicle. It is not clear how many ICBM-based commercial launches would have been needed to serve these 100 payloads, as approximately 90 percent of them were 150 kg or less and thus were likely aggregated to varying degrees on a medium or large launch vehicle. However, the benefit of increasing U.S. launch service global competitiveness as a result of allowing commercial use of surplus ICBM motors may be limited. According to the FAA, domestic launch capability may soon become more robust and competitive because of the emergence of new space launch providers and vehicles. Specifically, several U.S. companies developing launch vehicles and related technologies to provide low- and medium-weight payload launch capabilities began conducting test flights in 2017 and some plan to begin carrying commercial payloads as early as next year. Table 17 in appendix II provides a list of emerging launch vehicle providers. Allowing the use of surplus ICBM motors for commercial launches could further benefit the commercial space sector by providing U.S.-based launch customers more options and flexibility from which to choose launch services. As we reported in July 2016, customers select from available launch providers based on a number of factors, including price, capability, and reliability of the launch vehicle. Another factor that customers consider is the location of the launch. Some satellite manufacturers and other potential customers of launch services we interviewed said they prefer a domestic provider and may be willing to pay a higher price for a domestic launch depending on factors such as lower licensing costs and simpler, lower-cost logistics. 
Increasing the domestic launch capability could also help launch customers gain greater schedule certainty. Specifically, launch customers may have stringent schedule requirements and must select a launch vehicle that will deliver their payload to the desired orbit. According to several experts we interviewed, customers with smaller payloads often choose to launch as a secondary or aggregated payload on a larger launch vehicle. They said that this likely reduces the customer’s costs, since aggregated payload customers share launch costs, but it may also create schedule uncertainty. For example, customers who choose to aggregate their payload with others may not be able to determine the planned launch date, which may then be delayed if one or more of the other aggregated payloads are not ready in time. In contrast, as a sole or primary payload, the customer typically has more control over the launch schedule. However, there are indications that the domestic launch market is already responding to this type of customer demand. Several emerging space launch providers we spoke with said providing customers greater schedule flexibility is a key component of their business strategy and an important factor that distinguishes them from many existing launch providers. Increased Demand for Newly Manufactured Solid Rocket Motors The use of surplus ICBM motors for commercial space launches could provide the U.S. solid rocket motor industrial base a limited benefit by increasing demand for newly manufactured solid rocket motors, helping ensure that these assets remain available to support government space launches and other missions. As we have reported in the past, Congress and DOD have identified the health of the solid rocket motor industrial base as an area of concern. According to Institute for Defense Analyses research, there were six U.S. 
solid rocket motor suppliers in the mid-1990s, but these six have since consolidated into two remaining entities—Aerojet Rocketdyne and Orbital ATK. These manufacturers are dependent on a single U.S. company to provide the main ingredient for the solid rocket motor propellant, ammonium perchlorate. In recent years, demand for solid rocket motor fuel has dropped because, among other things, NASA retired its Space Shuttle program in 2011, a program that historically accounted for most of the demand for solid rocket motors. According to the Institute for Defense Analyses, the Space Shuttle program was responsible for 90 percent or more of the total U.S. demand for large solid rocket motor propellant from 1990 to 2010. However, the Institute reported that since the end of the Space Shuttle program, annual demand for solid rocket motor propellant has dropped from more than 20 million pounds to about 5 million pounds and is expected to drop further, creating excess capacity throughout the solid rocket motor industrial base. Since at least 2009, DOD has identified the solid rocket motor industrial base as being particularly risky for programs that require solid rocket motors because, among other things, decreased production has made the remaining two solid rocket motor suppliers even more vulnerable. However, NASA officials told us that they expect demand for solid rocket motor propellant to increase in the future as their Space Launch System becomes operational. Each Space Launch System launch requires two solid rocket motors, each of which contains approximately one million pounds of propellant, for a total usage of two million pounds per launch. Increased use of surplus ICBM motors for commercial or government launch could provide a limited benefit to the solid rocket motor industrial base through greater use of manufacturing equipment, personnel, and processes.
Due to differences in design and payload delivery, surplus ICBM motors cannot be used for higher stage propulsion on space launch vehicles. As a result, the use of an ICBM motor-based launch vehicle creates demand for one or more newly manufactured higher stage solid rocket motors. For example, the Minotaur I and Minotaur IV launch vehicles require one to two newly manufactured higher stage solid rocket motors. However, the commercial use of surplus ICBM motors, in comparison to historical demand from NASA and other government missions, is likely to provide only a limited benefit to the solid rocket motor industrial base. For example, the two higher stages of a Minotaur I and the single higher stage of a Minotaur IV use 10,362 and 1,698 pounds of propellant, respectively. In comparison, each of the two reusable solid rocket motors that launched the final version of NASA’s Space Shuttle burned about 1.1 million pounds of propellant. At most, depending on the type of launch vehicle, the use of surplus ICBM motors for commercial launch would create demand for two to four newly manufactured higher stage solid rocket motors and roughly 3,400-20,700 pounds of propellant.

Increased Opportunities for DOD Personnel with Specialized Skills

Greater use of surplus ICBM motor-based launch vehicles could support U.S. efforts to assure its access to space by increasing opportunities for DOD personnel to utilize specialized skills. Air Force officials told us that increased use of surplus ICBM motors, whether for government or commercial launch, would provide program personnel beneficial opportunities to exercise specialized skills, including those for handling, transferring, and modifying surplus ICBM assets. DOD and Air Force officials we spoke with said that a consistent launch rate is an important factor in helping maintain these specialized skills.
Since 2000, the government has used ICBM motor-based launch vehicles 15 times in support of a variety of space launch missions; however, the launch pace has declined. For example, from 2000 to 2011, the Air Force launched 13 space missions using ICBM motor-based launch vehicles. From 2012 to the present, the Air Force has launched 2 space missions and plans to launch 1 mission each year in 2017 and 2018. One of these planned launches is the Operationally Responsive Space-5 mission planned for August 2017. If government launch demand continues at a rate of one to two launches per year or declines further, commercial launches using surplus ICBM motors would provide additional opportunities for DOD to employ skilled personnel. However, Air Force officials recently told us that there are indications of additional government demand for launches using surplus ICBM motors beginning around 2020 that could increase the launch rate to two to three per year (an increase that would require processing two to six additional motors).

Potential Cost Avoidance for DOD

The use of ICBM motors for commercial launches could benefit DOD by offsetting some of the roughly $17 million DOD spends each year storing, maintaining, and destroying surplus ICBM motors. DOD incurs additional costs for refurbishing and ensuring the flightworthiness of motors selected for space launch. These costs are reimbursed by the government customer and vary by mission. Currently, the cost to ensure the safety of motors in storage—approximately $2.8 million for Peacekeeper and Minuteman II motors—is divided among government launch customers. According to the Air Force, allowing commercial launch providers to purchase and use surplus ICBM motors would further disperse these costs, slightly lowering launch costs to the government. However, the cost of storing surplus ICBM motors in bunkers cannot be directly apportioned to each motor, and DOD would achieve savings only over many years.
Specifically, RSLP office officials told us that they typically group surplus motors according to stage and store them as a set of six or more in bunkers. Therefore, to achieve a meaningful reduction in storage costs, they said that DOD would need to eliminate the inventory of an entire bunker of similar motors. For example, clearing a single bunker of six first stage motors would require six separate space launches.

Commercial Use of ICBM Motors Has Several Potential Challenges

Decline in Private Investment Opportunities for Commercial Launch Providers

Allowing the use of surplus ICBM motors for commercial launch could create challenges for the commercial space sector, such as discouraging private investment for emerging commercial space companies currently developing launch vehicles. Investment in new commercial space companies has grown significantly in recent years, and this trend appears to be accelerating. From 2000 through 2015, new commercial space companies received about $13.3 billion in investments, including debt financing, and approximately 20 percent of that total investment came in 2015, the most recent year for which data are available. Officials we spoke with from several commercial space launch companies, both existing and emerging, cited these levels of increased investment as evidence that validates current law and policy that limit the use of surplus ICBM motors to special circumstances. Several commercial launch providers and an organization representing several hundred investors in commercial space companies told us that allowing the use of surplus ICBM motors for commercial launches would add uncertainty to the commercial space market, complicating their business decisions. According to the Tauri Group, which gathers data and conducts analyses for several federal agencies, investors in commercial space companies consider the potential demand for a company’s product or service when deciding whether to make an investment.
Representatives from the Tauri Group and an emerging launch provider told us that changing law and policy to allow the commercial use of surplus ICBM motors could also discourage investment by signaling to companies and investors that the government may intervene in the market in the future. NASA officials said that allowing the use of these motors may stifle innovation in the commercial space sector. They told us that emerging small launch providers are basing their business cases on making innovations in the manufacturing and design of liquid rocket engines, and that if emerging launch providers fail due to market pressures caused by the use of surplus ICBM motor-based launch vehicles, innovation in propulsion may be stifled.

Disruption of Competition Opportunities for Emerging Launch Providers

Allowing the use of surplus ICBM motors for commercial launches could disrupt competition among emerging providers. Several respondents to the Air Force’s August 2016 request for information stated that the availability of surplus ICBM motors may erect unfair barriers to entry in the space launch market and discourage investment in new space launch vehicles. We found that the price for a launch using surplus ICBM motors—assuming the motors are available to commercial customers at a breakeven price roughly equal to DOD’s costs to prepare the ICBM motors for launch—would be about $27,000 per kilogram of payload for Peacekeeper motor-based vehicles and about $71,000 per kilogram for Minuteman II motor-based vehicles. These estimates assume the launch vehicles are at full capability. Although the actual price of a launch varies based on several factors, including the providers’ strategies for increasing their share of the launch market, these price estimates may be useful in illustrating how the potential price of a surplus ICBM motor-based launch vehicle might compare with those of existing and emerging providers.
Table 3 presents the price per kilogram for a Peacekeeper-based and a Minuteman II-based launch vehicle. The price per payload kilogram for a launch using a Peacekeeper-based launch vehicle is in a similar range to the planned future launch prices for emerging launch vehicle providers. Emerging providers developing launch vehicles with capability between 1 kilogram and 2,000 kilograms are estimating their launch prices to be between about $10,000 and $55,000 per payload kilogram. Among these emerging providers, several estimate their launch prices will be between $20,000 and $30,000 per kilogram. Many of the emerging providers are designing small launch vehicles to provide dedicated launch services. However, one company we spoke with plans to start with a small launch vehicle but eventually offer launches for medium payloads on new, non-ICBM motor-based vehicles at capabilities close to a Peacekeeper motor-based vehicle. As satellite manufacturers examine the launch vehicle market from which they will choose a launch, they may choose the slightly lower priced ICBM motor-based vehicle to aggregate multiple small satellites on one launch vehicle at its highest capability. Conversely, the sale of surplus Peacekeeper motors at the breakeven price, plus the cost of higher stage propulsion, may not affect existing providers because the resulting price per payload kilogram—approximately $27,000 at the vehicle’s highest capability—is still significantly higher than the launch prices of most existing launch providers, with some exceptions. However, the extent of any effect on existing providers would depend on the price that the launch provider chooses to offer.
A commercial launch provider is likely unwilling to purchase Minuteman II motors at the breakeven price we calculated because the estimated price per payload kilogram of a launch using Minuteman II motors is significantly higher than, and not competitive with, those of both emerging and most existing launch vehicles. However, a commercial launch provider may be willing to purchase Peacekeeper motors at the breakeven price we calculated and sell launches for an aggregation of small payloads, or compete at the low end of the medium class launch market to launch the smaller medium class payloads. Table 17 in appendix II provides the price per payload kilogram for emerging launch vehicles in development, and Table 4 below provides the price per kilogram for existing commercially available launch vehicles. According to Department of Commerce officials, allowing commercial launch providers to purchase surplus ICBM motors at a low price may also disrupt competition by lowering costs for one or a small number of launch providers. While the Air Force could make surplus motors available to a broad range of commercial launch providers, because of technical and financial barriers, the companies most likely to purchase the motors are those with existing launch vehicles that can accommodate the motors without major modification. Only one company we spoke with expressed interest in purchasing surplus ICBM motors and has the capability to integrate the motors into an existing launch vehicle. Several other companies we spoke with told us that, in comparison to newer, less expensive liquid propulsion motors, surplus ICBM motors use old technology, and that it would not be cost effective for them to develop new launch vehicles or convert existing launch vehicles to use the motors.
Specifically, representatives from one company we spoke with estimated that it could cost $50 million to $60 million to convert an existing launch system to use surplus ICBM motors as lower stage propulsion. Moreover, these representatives said that designing and building a new launch system that uses surplus ICBM motors could cost nearly $400 million.

Limited Motor Processing Resources Challenge DOD

DOD may encounter challenges balancing an expanded workload with its other essential missions and tasks. Specifically, DOD officials told us that their current-year plans call for refurbishment of about 11 motors for government missions and static fire tests, but that this level of productivity has created schedule pressures and required personnel to work overtime. Despite these efforts, officials said they expect delivery of one motor to be at least 2 weeks late. Although RSLP officials estimate that they have the capacity to process about 15 motors per year based on historical information and current personnel resources, it would be challenging to maintain this level of productivity. RSLP officials told us that increasing their office’s ability to process more than 15 motors per year would require significant additional resources. For example, officials said that hypothetically doubling the number of motors they are able to process each year would require adding a minimum of five people to the program office, investments in additional handling and testing equipment costing about $2 million, and investments of about $5 million in buildings and equipment to expand motor transfer capacity at Hill Air Force Base. Additional analysis of the detailed costs and payment arrangements with potential commercial providers is important in order to determine how such increases would affect the breakeven prices for the Air Force.
Furthermore, officials said that negotiating terms and conditions with commercial launch providers and then executing related financial transactions would be challenging without additional resources.

Uncertainties Pose Challenges for Effective Decision Making

Officials conducting the Air Force study told us that they would need to gather additional details and conduct additional analysis beyond the price of the motors if law and policy are changed to allow surplus ICBM motors to be made available for commercial launches, as illustrated in the examples below:

Study officials said that further analysis would be required before allowing commercial providers to take control of the surplus ICBM motors prior to delivery of the motors to the launch site for integration into the launch vehicle. Although the RSLP office performs a significant amount of refurbishment work on the motor, an RSLP official told us some of the work could be performed by the commercial entity after transfer. However, according to Air Force officials conducting the study, its analysis of the potential use of surplus ICBM motors for commercial launches assumes that the Air Force maintains control and supervision of the motors, including all necessary maintenance, refurbishment, and transfer activities, until the motors are delivered to the launch vehicle at the launch site. These officials said that this is to ensure that the motors remain safe and to avoid a breach of nonproliferation obligations and treaties. A DOD official told us that if a commercial provider were to take possession of the motors earlier in the transfer process, the Air Force would need to determine how national security, particularly treaty obligations and other munitions control regimes, would be affected. Additionally, RSLP officials told us that transferring the motors earlier in the process may affect their costs; however, such a determination requires additional analysis.
Although the Air Force has developed estimates for Peacekeeper motor storage and disposal costs, because of budget structures and a healthy motor stockpile, the detailed costs are unknown. For example, the cost to store Peacekeeper motors at Hill Air Force Base is paid for, among many other things, by the Hill Air Force Base budget, not by the RSLP office. Additionally, no Peacekeeper motors have been disposed of due to aging issues, so the RSLP office has not yet determined, in detail, the exact process for safely destroying a Peacekeeper motor. RSLP officials told us that a “pathfinder” study would be required to establish the requirements and details to dispose of a Peacekeeper motor, at an additional cost of about $500,000. If Peacekeeper motors begin to fail testing and the stockpile shows signs of aging, the stockpile may be depleted more quickly than currently planned, with budget and workforce implications. Further analysis of the potential effects on RSLP office costs and personnel is important if future demand for government missions that use surplus ICBM motor-based launch vehicles is different than expected. As discussed earlier in this report, future demand for launches using surplus ICBM motors would affect the potential costs and benefits to the Air Force. For example, under a scenario in which there is low government demand for ICBM motor-based launches, allowing commercial use of ICBM motors may provide a benefit to the Air Force by helping it maintain its specialized workforce and facilities. However, the Air Force recently updated its estimates of future demand to include several planned government missions that will likely use ICBM motor-based launch vehicles at a level sufficient to keep its workforce in use without commercial launches. The Air Force plans to complete its study on the potential effects of allowing surplus ICBM motors to be used for commercial launch later this year.
Because the study is not complete, the extent to which it addresses these uncertainties is not clear. A detailed analysis of future potential scenarios is important for informing the way forward for the Air Force if law and policy change to allow surplus ICBM motors to be sold to commercial providers for launch. The House Armed Services Committee report accompanying a bill for the fiscal year 2017 NDAA contained a provision for us to conduct an assessment of DOD’s report after the Air Force completes its study. As such, we are not making recommendations in this report.

Agency Comments

We provided a draft of this report to the Departments of Defense, Commerce, State, and Transportation; and NASA for comment. We received technical comments on this report from the Departments of Defense, Commerce, and Transportation; and NASA, which we incorporated as appropriate. The Department of State did not provide comments. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Secretary of the Air Force; the NASA Administrator; and the Secretaries of Commerce, Transportation, and State. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.
Appendix I: Objectives, Scope, and Methodology

The National Defense Authorization Act (NDAA) for Fiscal Year 2017 contains a provision for us to analyze the potential effects of allowing the use of surplus Intercontinental Ballistic Missile (ICBM) motors for commercial launch purposes, including an evaluation of the effects, if any, of allowing such use on national security, the Department of Defense (DOD), the solid rocket motor industrial base, the commercial space launch market, and any other areas the Comptroller General considers appropriate. We assessed: (1) the options for determining the selling prices for these motors; and (2) the potential benefits and challenges, including savings and costs, of allowing surplus ICBM motors to be used for commercial space launch. To address these objectives, we reviewed relevant laws and policies, such as the Commercial Space Act of 1998 and National Space Transportation Policies from 1994 through 2013. We also met with officials at the office of the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)) and its office of Manufacturing and Industrial Base Policy, the office of the Principal DOD Space Advisor, the Missile Defense Agency, the Assistant Secretary of the Air Force for Space Acquisition (SAF/AQS), and the Air Force Rocket Systems Launch Program (RSLP) office. Additionally, we met with officials at the Departments of Commerce, Transportation, and State; and the National Aeronautics and Space Administration (NASA). We interviewed and reviewed documentation submitted to us by commercial space industry stakeholders, including launch providers, satellite manufacturers, and an earth observation services company. We met with one venture capitalist whose company makes investments in the commercial space industry, an association of launch providers, and independent experts and researchers with expertise in the commercial space launch industry and the history of surplus ICBM motors.
We also reviewed all 16 industry stakeholder responses to an August 4, 2016, Air Force request for information on issues related to potentially making surplus ICBM motors available to providers for commercial launch. To determine the options for pricing surplus ICBM motors, we used generally accepted economic principles for benefit-cost analysis, including Office of Management and Budget Circulars A-4 and A-94, to conduct an assessment of potential economic effects. To begin that assessment, we developed breakeven prices at which DOD could offer the motors. These breakeven prices, reflecting our calculations of the net present value of motor storage and disposal costs, are those at which DOD must sell the motors to cover the costs of transferring them to commercial launch providers. We did not include sunk costs, that is, the costs incurred to develop, produce, and sustain the surplus ICBM motors; rather, we calculated potential prices based on fulfillment cost, which includes all costs that an entity will incur in fulfilling the promises that constitute a liability. These costs are the value to the entity of the resources that will be used in liquidating the entity’s assumed liability. We collected from the Air Force detailed cost information associated with storing, testing, refurbishing, and disposing of surplus ICBM motors, and discussed with Air Force officials their future launch and motor disposal plans. We also obtained and evaluated motor transfer costs from two recent and ongoing Air Force launch missions because they represent the most current cost data on transferring surplus ICBM motors for launch—that is, moving motors out of storage and preparing them to be integrated into a launch vehicle at the launch site. A detailed, step-by-step analysis of our economic assessment is provided in the following section.
Additionally, we reviewed Federal Accounting Standards Advisory Board (FASAB) criteria to understand other methods for determining the value and price of surplus ICBM motors. To determine the potential benefits and challenges of allowing surplus ICBM motors to be used for commercial launches, we conducted an assessment of potential economic effects, detailed below, using Office of Management and Budget (OMB) guidance and generally accepted economic principles for conducting a cost-benefit analysis. We reviewed DOD’s documentation on storing, maintaining, refurbishing, and transferring surplus ICBM motors. For example, we reviewed RSLP program office briefing slides and SAF/AQS responses to our questions. We interviewed DOD officials who were familiar with the use of surplus ICBM motors for government launches under current law and who would be responsible for their potential use for commercial launch. To obtain a range of perspectives on the effects of changes to law and policy covering the transfer of surplus ICBM motors, we identified specific categories of industry stakeholders from which to select stakeholders for interviews. These categories covered (1) launch vehicle providers, (2) spaceports, (3) venture capitalists, (4) industry groups, (5) the solid rocket motor industrial base, and (6) payload satellite companies. We interviewed 20 stakeholders and entities that represented a wide range of viewpoints on a potential change in policy allowing surplus ICBM motors to be used commercially. We analyzed the interview results to identify patterns and themes and attributed the responses to the categories of entities rather than to individuals or specific organizations. These stakeholder views are not generalizable across the industry but do provide a range of perspectives.
We estimated potential launch prices for launch vehicles using surplus ICBM motors at our calculated breakeven motor prices and compared these prices to publicly available prices of current and emerging commercial launch vehicles. Additionally, we reviewed and applied federal guidance on conducting cost-benefit analyses. To assess the reliability of the motor storage and disposal cost data, we analyzed Air Force headquarters and RSLP program office data, interviewed DOD officials, and reviewed detailed cost and launch schedule documentation. We corroborated the data, when possible, across multiple Air Force and RSLP sources and conducted additional follow-up interviews to gain additional insight. To assess the reliability of commercial space launch price and forecast data, we interviewed officials at the Federal Aviation Administration and the Tauri Group (which collects worldwide commercial space launch data), and corroborated the data with space launch experts when possible. We determined that the data were sufficiently reliable for presenting the information contained in this report.

Economic Assessment of Potential Effects

To address both of our study objectives, we conducted an economic assessment of potential effects using generally accepted economic principles for cost-benefit analysis, including Office of Management and Budget Circulars A-4 and A-94. Specifically, we identified the key elements needed for analyzing the potential economic effects by: (1) defining our objective and scope; (2) conducting an analysis of effects, where we described our analytical choices and assumptions used; (3) considering a range of alternatives; (4) conducting a breakeven price analysis; (5) conducting a sensitivity analysis; and (6) documenting our analysis.
Objective and Scope: This element of an economic assessment is the first step in assessing potential economic effects and helped us determine a method by which to develop a potential surplus ICBM motor price and a range of potential effects. The scope of our review consisted of both surplus Peacekeeper and Minuteman motors currently in inventory because DOD officials told us these are the only motors relevant for consideration; they may be used for potential commercial launch in the United States market, used for testing, or destroyed. According to the Air Force, at the time of our review, there were 186 Peacekeeper motors and 537 Minuteman II motors in inventory.

Analysis of Effects and Assumptions: This element helped us determine a method by which to develop potential surplus ICBM motor prices and a range of potential effects. During this step, we described the data, analytical choices, and assumptions used. We used motor cost and storage data provided by the RSLP office as well as information about future launch demand using surplus ICBM motor-based vehicles. We chose to analyze the difference between two scenarios: the status quo and a potential future in which motors may be used for commercial launches. Specifically, we calculated the net present value of storage and disposal costs under a status quo scenario reflective of current law and policy that allows some government launches using surplus ICBM motors. We then compared the status quo model to the net present value of storage and disposal costs under a range of scenarios that may reflect a change in law and policy to allow surplus motors to be used for commercial launches. We made several assumptions, such as one motor being test fired every year, the real discount factors for the surplus ICBM motors, and the number of motors launched for government missions until the stockpile is depleted. We made these assumptions based on information provided by the RSLP office.
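The scenario comparison described above can be sketched as a net-present-value calculation over each scenario's annual storage and disposal costs: faster stockpile depletion under a commercial scenario shortens the cost stream, and the difference between the two discounted totals is the cost avoidance. The cost streams, depletion horizons, and 7 percent real discount rate below are illustrative assumptions, not the RSLP office's actual figures.

```python
# Minimal sketch of the net-present-value comparison described above.
# Annual costs, horizons, and the 7% real discount rate are assumed
# purely for illustration, not drawn from Air Force data.
def npv(annual_costs, rate=0.07):
    """Discount a stream of annual costs (year 1, 2, ...) to present value."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(annual_costs, start=1))

# Status quo: government-only launches deplete the stockpile slowly,
# so storage/disposal costs run for more years.
status_quo = [1_000_000] * 15   # 15 years of assumed annual costs

# Commercial scenario: added launches deplete the stockpile faster,
# shortening the cost stream.
commercial = [1_000_000] * 11   # 11 years of the same assumed annual cost

savings = npv(status_quo) - npv(commercial)
print(f"NPV cost avoidance from faster depletion: ${savings:,.0f}")
```

The same structure applies to each launch-rate scenario in the analysis; only the assumed cost streams and depletion horizons change.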
The assumptions are identified in table 5 below.

Range of Alternatives: We considered a range of relevant alternatives. Because some costs vary depending on the number of projected launches of ICBM-based launch vehicles, we calculated a range of potential breakeven prices based on, among other things, low, medium, and high launch rates. We started by modeling the current expected numbers of future launches and decommissioning of motors, compared them with scenarios that assume different commercial and government demand for motors, and calculated the associated annual cost to store and decommission motors. We analyzed the storage and disposal costs for the 186 Peacekeeper and 537 Minuteman motors in inventory, and estimated breakeven prices under the three different launch rate scenarios.

Breakeven Price Analysis: As one method of pricing the motors, we calculated breakeven prices—the prices at which DOD covers its costs for transferring the motors for launch, minus the costs avoided (or saved) by depleting the stockpile more quickly. In other words, this is the potential price at which DOD could break even. We estimated a breakeven price at which DOD would be indifferent between continuing with the current schedule and providing motors for commercial launches. We calculated the total net present value of motor storage and disposal costs, broke the total down to a per motor set cost savings, and subtracted that savings from the cost to transfer a set of motors to a government entity for a launch. In conducting our breakeven analysis for the surplus ICBM motors, we relied primarily on RSLP office data. We did not develop our own source data for development of motor prices, although we did develop assumptions about future government and commercial launch demand based on RSLP program office statements. We determined that the breakeven price for three Peacekeeper motors ranges from about $8.3 million to $8.4 million.
The breakeven price for two Minuteman motors ranges from about $3.9 million to $4 million. Table 6 shows the breakeven price for a set of three Peacekeeper motors and a set of two Minuteman II motors, along with the cost of commercial higher stage motors. Tables 7 through 12 show the step-by-step results of calculating the per motor net present value of storage and disposal costs for each motor type under the status quo and commercial scenarios, followed by the total cost avoidance and resulting breakeven price per motor set—that is, for three Peacekeeper motors and two Minuteman II motors.

Peacekeeper motors: The net present value of storage and disposal costs under the status quo (only government launches of surplus ICBM-based vehicles) is between $10.3 million and $15.8 million, and it would take DOD 13 to 22 years to deplete the ICBM inventory.

Minuteman II motors: The net present value of storage and disposal costs under the status quo is between $36.4 million and $38.7 million, and it would take DOD 9 to 10 years to deplete the ICBM inventory.

Tables 7 and 8 show, for Peacekeeper and Minuteman II motors respectively, the net present value of the storage and disposal costs under the status quo scenario, with no commercial launches, for the motor stockpile. Tables 9 and 10 show calculations of the total net present value of storage and disposal costs for each motor type in the stockpile under a commercial launch scenario across a range of assumed numbers of government and commercial launches. If commercial launches are allowed, the net present value of storage and disposal costs of Peacekeeper motors for DOD would be $8.5 million to $10.3 million, with the rest of the cost being absorbed as a discount from the price for commercial launches, and it would take 11 to 13 years to deplete the inventory of ICBM motors.
If commercial launches are allowed, the net present value of storage and disposal costs for Minuteman II motors for DOD would be $35.5 million to $36.4 million, with the rest of the cost absorbed as a discount from the price for commercial launches, and it would take 9 years to deplete the inventory of surplus ICBM motors.

To find the prices at which the Air Force would be indifferent between maintaining the status quo and adding commercial launches, we found the difference between the net present values of the storage and disposal costs under the two scenarios and then broke those costs down by motor set, as shown in tables 11 and 12. Total Peacekeeper storage and disposal savings per motor set increase as demand for launches decreases because of the growth in the number of years it takes to deplete the stockpile. Total Minuteman II motor storage and disposal savings per motor set decrease as demand decreases because of the relatively low cost to store the motors, the large number of motors decommissioned each year (about 50), and the relatively small difference in the number of years it takes to deplete the stockpile of those motors.

Sensitivity Analysis: We used the sensitivity analysis element of an economic assessment to determine how the use of surplus ICBM motors for commercial launch may affect the commercial launch industry. We compared the cost per payload kilogram of an ICBM motor-based launch vehicle to the price per kilogram of commercial launches, using publicly available price data. Under generally accepted economic principles, a sensitivity analysis should adequately justify the data, analytical choices, and assumptions used, and should explicitly address how plausible adjustments to each important analytical choice and assumption affect the economic effects and the results of the comparison of alternatives.
Our analysis assessed quantitatively how the variability of the key uncertain data underlying the estimates of economic effects—such as future launch demand and the payload capacity that may be flown on a launch vehicle—affects those economic effects and the results of the comparison of alternatives. The sensitivity analysis shows that the resulting price per kilogram is sensitive to assumptions about launch demand, the resulting storage and disposal cost avoidance, and the resulting number of years until the motor stockpile is depleted. The parameters and results of our sensitivity analysis are below:

Peacekeeper Motors Sensitivity Analysis:
Levels of government motors launched: 12 (high), 9 (mid), 6 (low)
Levels of commercial motors launched: 3, 6, 6
Payload capacities of 500 kilograms, 1,000 kilograms, and 1,800 kilograms

Minuteman II Motors Sensitivity Analysis:
Levels of government motors launched: 8 (high), 6 (mid), 4 (low)
Levels of commercial motors launched: 2, 4, 4
Payload capacities of 100 kilograms, 300 kilograms, and 500 kilograms

We used a range of potential payload capacities that could be launched by a Peacekeeper- or Minuteman II-based launch vehicle and applied this variable across a range of potential future launches using the motors. We estimated the launch prices per kilogram by first adding to the breakeven prices the costs of the required higher stage propulsion and then applying a 20 percent factor to the totals to calculate launch prices. According to the Air Force, propulsion costs historically account for 20 percent of a launch vehicle's price, making this a reasonable assumption for projecting prices. Tables 13 and 14 show the resulting prices per launch after adding the higher stage and ICBM motor propulsion breakeven prices and applying the 20 percent factor. We then divided the launch prices by the range of payload capacities. Tables 15 and 16 show the price per kilogram of launches using Peacekeeper and Minuteman II motors, respectively.
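The per-kilogram arithmetic can be sketched as follows. This is a hedged illustration, not the report's actual computation: it assumes the 20 percent factor means propulsion accounts for 20 percent of the total launch price, uses the roughly $8.36 million Peacekeeper breakeven figure from the analysis above, and treats the $4 million higher stage cost as a hypothetical placeholder.

```python
def launch_price(motor_set_price, higher_stage_cost, propulsion_share=0.20):
    """Total launch price implied by the rule of thumb that propulsion
    accounts for about 20 percent of a launch vehicle's price
    (figures in millions of dollars)."""
    return (motor_set_price + higher_stage_cost) / propulsion_share

def price_per_kg(motor_set_price, higher_stage_cost, payload_kg):
    """Launch price divided by payload capacity, in dollars per kilogram."""
    total_dollars = launch_price(motor_set_price, higher_stage_cost) * 1_000_000
    return total_dollars / payload_kg

# Peacekeeper motor set near its breakeven price plus a hypothetical
# $4 million higher stage, across the payload range used above.
for payload in (500, 1_000, 1_800):
    print(payload, "kg:", round(price_per_kg(8.36, 4.0, payload)), "$/kg")
```

As the sensitivity discussion notes, the result scales inversely with assumed payload capacity, so the low-capacity case dominates the price-per-kilogram range.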
Documentation of Our Analysis: We concluded our assessment of potential surplus ICBM motor prices and resulting effects by documenting our analysis. Based upon the results of our assessment, the analysis can inform decision-makers and stakeholders about the potential economic effects of the action examined. We conducted this performance audit from June 2016 to August 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Existing Global Commercial Launch Vehicles

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact
Cristina T. Chaplain, (202) 512-4841 or chaplainc@gao.gov

Staff Acknowledgments
In addition to the contact named above, Rich Horiuchi (Assistant Director), Erin Cohen (Analyst-in-Charge), Pedro Almoguera, Andrew Berglund, Stephanie Gustafson, Wendell Keith Hudson, Jordan Kudrna, and Jacqueline Wade made key contributions to this report.
The U.S. government spends over a billion dollars each year on launch activities as it strives to help develop a competitive market for space launches and assure its access to space. One launch option, among others, is to use vehicles derived from surplus ICBM motors, such as those used on the Peacekeeper and Minuteman missiles. The Commercial Space Act of 1998 prohibits the use of these motors for commercial launches and limits their use in government launches, in part to encourage the development of the commercial space launch industry in the United States. Legislative and policy changes would be needed to allow DOD to sell these motors for use on commercial launches. The National Defense Authorization Act for Fiscal Year 2017 contains a provision for GAO to analyze the potential effects of allowing the use of surplus ICBM motors for commercial space launch. This report addresses (1) the options for pricing surplus ICBM motors; and (2) the potential benefits and challenges of allowing surplus ICBM motors to be used for commercial space launch. GAO used Office of Management and Budget criteria to develop a range of breakeven prices, collected detailed motor storage and disposal costs from the Air Force, reviewed industry stakeholder responses to an Air Force request for information about other pricing methods, and interviewed DOD and industry officials.

The Department of Defense (DOD) could use several methods to set the sale prices of surplus intercontinental ballistic missile (ICBM) motors that could be converted and used in vehicles for commercial launch if current rules prohibiting such sales were changed. One method would be to determine a breakeven price: below this price, DOD would not recoup its costs, and above it, DOD would potentially save money.
GAO estimated that DOD could sell three Peacekeeper motors—the number required for one launch, or a "motor set"—at a breakeven price of about $8.36 million, and two Minuteman II motors for about $3.96 million, as shown below. Other methods for determining motor prices, such as fair market value as described in the Federal Accounting Standards Advisory Board Handbook, resulted in stakeholder estimates ranging from $1.3 million per motor set to $11.2 million for a first stage Peacekeeper motor. The prices at which surplus ICBM motors are sold are an important factor in determining the extent of the potential benefits and challenges of allowing the motors to be used for commercial launch.

Potential benefits include increasing the global competitiveness of U.S. launch services.

Potential challenges include negatively affecting private investment, hindering innovation, and disrupting competition among emerging commercial space launch companies, as well as expanding the workload of the Air Force program office responsible for maintaining and refurbishing the motors.

Further, uncertainties in underlying assumptions and cost estimates—such as Peacekeeper motor storage and disposal costs—could hinder effective decision making. DOD is also conducting a study on the potential effects of allowing surplus ICBM motors to be used for commercial launch. Because DOD's study is not complete, it is not clear to what extent it addresses such uncertainties.
Background

The federal government has long been a participant in addressing risks that private property-casualty insurers have been unable or unwilling to insure. One of these risks is damage due to flooding. While NFIP backs the flood insurance policy, it generally contracts the sale and servicing of the policies out to private property-casualty insurers, known as WYO insurance companies. About 96 percent of NFIP's policies are sold and serviced by WYO insurers. For a given property, the WYO insurer writing and administering the flood insurance policy on behalf of NFIP may also provide coverage for wind-related risks on the same property. Through its program contractor, FEMA operates a reinspection program to monitor and oversee claims adjustments and address concerns about flood payments. The reinspection program's activities encompass reevaluating the flood adjustments and claims payments made on damaged property to determine whether NFIP paid the proper amount for flood-related damages. The program conducts on-site reinspections and reevaluations of a sample of flood claim adjustments.

Determining the cause and extent of damages is primarily the job of insurance adjusters, who are either employed or contracted by insurance companies and generally licensed by the states. Adjusters assess damage; estimate losses; and submit required reports, work sheets, and photographs to the insurance company, which reviews the claims and approves them for payment. In general, insurance adjusters are paid on a percentage basis or fee schedule tied to the amount of damages. These adjusters can fall into several categories:

Staff (or company) adjusters are employees of insurance companies who determine the amount of damages payable on claims under a contract of insurance.

Independent adjusters and adjuster firms are contractors that insurance companies hire to assess damages and determine claims losses.
Emergency adjusters are sometimes allowed by states to operate on a temporary basis to further augment the force of adjusters following a catastrophe.

Public adjusters are hired by and work on behalf of property owners to assess damages and help prepare claims.

Insurance adjusters are regulated by the states, which have been granted authority by Congress to oversee insurance activities. Although the federal government retains the authority to regulate insurance, the McCarran-Ferguson Act of 1945 gives primary responsibility for insurance regulation to the states. State insurance regulators' oversight includes requirements pertaining to the licensing and training of insurance adjusters. In addition, adjusters that have been licensed or allowed to operate by a state can also be certified as flood adjusters by NFIP to assess flood damages on properties.

A property owner who has experienced hurricane damage can initiate a flood insurance claim by contacting the insurance agent of the WYO insurer that sold the NFIP flood policy. The agent relays the claim information to the WYO insurer, which assigns a flood claims adjuster to the case. The adjuster then inspects the property to determine the damage caused by flooding and the extent to which that damage is covered under the flood policy. To help carry out this work, insurance adjusters commonly use software that organizes the damage information and estimates the repair or replacement costs for such damages. Factors used in determining loss estimates include the square footage of the building; the type of building materials; and the cost of materials and repairs at the market rate, which is subject to change. Once the assessment of a damaged property is complete, the adjuster files a report with the WYO insurance company, which reviews the claim and approves or denies it for payment to the policyholder (see fig. 1).
Likewise, for wind-related damage claims on hurricane-damaged properties, property owners can contact the insurance agent or company that sold them their property-casualty policy to start the claims process. For some property owners, their property-casualty insurer for wind-related risks is the same company that serves as NFIP's WYO insurer. In such cases, both the wind and flood insurance policies will be processed by the same insurer. In other cases, where the property-casualty insurer is a different company than the WYO insurer, claims for wind and flood damages will be processed separately by different insurers.

Both the insurance industry and NFIP incurred unprecedented storm losses from the 2005 hurricane season. State insurance regulators estimated that property-casualty insurers had paid out approximately $22.4 billion in claims tied to Hurricane Katrina (excluding flood) as of December 31, 2006. However, industry observers estimate that insured losses tied to Hurricane Katrina alone (other than flood) could total more than $40 billion, depending on the outcome of outstanding claims and ongoing litigation. FEMA estimated that NFIP had paid over $15.7 billion in flood insurance claims from Hurricane Katrina as of August 31, 2007, encompassing approximately 99 percent of all flood claims received.

As of September 2007, FEMA had about 68 employees, assisted by about 170 contractor employees, to manage and oversee the NFIP and the National Flood Insurance Fund, into which premiums are deposited and claims and expenses are paid. Their management responsibilities include establishing and updating NFIP regulations, analyzing data to determine flood insurance rates, and offering training to insurance agents and adjusters. In addition, FEMA and its program contractor are responsible for monitoring and overseeing the quality of the performance of the WYO insurance companies to assure that NFIP is administered properly.
We have recently completed related work highlighting concerns with payment formulas for services rendered by WYO insurers. We are also engaged in other ongoing work focused on reviewing various aspects of the oversight of WYO insurers.

Potential Coverage Gaps and Claims Uncertainties Can Arise When Homeowners Have Multiple Policies That Cover Different Perils

Insurance coverage for hurricane damages commonly requires the purchase of multiple insurance policies—a general homeowners policy, an NFIP policy, and in some areas, a special policy for wind damage. But even with these policies, homeowners cannot be certain that all damage resulting from a hurricane will be covered because the areas and limits of coverage differ across policies. Further, because both homeowners and NFIP policies can be serviced by a single WYO insurer, a conflict of interest exists during the adjustment process. Since Hurricane Katrina, legal disputes have been ongoing between property-casualty insurers and policyholders over damage determinations and the interpretation of policy language concerning coverage for damages that may have resulted from both wind and flooding.

Covering Hurricane Damages Often Requires Two or More Policies with Different Limits and Coverage

Property owners cannot currently purchase a single insurance policy for all hurricane-related damages because policies offered by property-casualty insurers generally exclude coverage for flood damage and sometimes may exclude coverage for wind-related damage. Property owners in flood-prone areas frequently have at least two insurance policies—for example, a homeowners policy from a private insurer and a flood insurance policy backed by NFIP.
Additionally, on certain properties in coastal areas, private insurers sometimes exclude from homeowners policies coverage for wind-related damage, requiring policyholders either to pay an additional premium for wind-related risks on their primary policy or to purchase a separate supplemental policy for wind-related damages. In such cases, this supplemental coverage is typically provided by a state-sponsored wind insurance pool that has been created to address shortages in the availability of insurance for wind-related risks. Moreover, some property owners may also have excess flood insurance if the value of their home exceeds the coverage limits offered by NFIP.

Private property-casualty insurance policies differ from the government-sponsored flood insurance policy in several ways. For example, key differences exist between the level of coverage offered by NFIP and that offered under common homeowners policies. Available coverage for damages under an NFIP policy is limited by law to $250,000 for the structure and $100,000 for contents, although the replacement cost value of some homes exceeds such limits. Generally, private homeowners policies can cover the replacement cost value of the house, and coverage may be obtained to insure personal property, including outside property and personal belongings (e.g., trees, plants, decks, and fences), in contrast to an NFIP policy. Further, while homeowners policies often provide coverage for additional living expenses if a house is rendered uninhabitable, NFIP does not insure policyholders for such coverage, although such expenses may be offset through other disaster assistance provided by FEMA.
Insurance Coverage Gaps, Claims Adjustment Uncertainties, and Conflicts of Interest Can Materialize When Two or More Policies Cover One Event

Property owners do not know in advance whether their insurance policies will cover all damages from a hurricane, because the payments ultimately will depend on the extent to which each policy will cover the damages—that is, whether the damages are determined to be the result of hurricane winds, flooding, or some combination of both. Even property owners that purchase the maximum amount of flood insurance available through NFIP, along with other private insurance for wind-related risks, do not know whether they are completely covered until the insurers' claims adjusters determine what caused the damage. Given the differences between the coverage offered under flood insurance and the coverage offered by private property-casualty insurance, the damage determinations can be crucial. For example, a homeowner whose house is worth $450,000 may have both a flood insurance policy and wind coverage, but flood insurance covers only up to $250,000 in damages. If damages to the policyholder's house are severe, and all of the damage is determined to be from flooding, the property owner may not receive enough compensation to fully rebuild and pay for temporary housing under the terms of the NFIP flood policy. But if all of the damage is determined to have been caused by wind, the homeowner may be able to fully recoup their losses and additional living expenses. Hence, insurance coverage uncertainties can arise when hurricane damages occur.

Claims adjustment uncertainties include challenges that can arise in assessing and adjusting damages due to wind and flooding when the evidence of damage at the damage scene is limited or compromised. As a result of the magnitude and severity of damage from Hurricanes Katrina and Rita, evidence of the damaged structures was often limited or compromised.
In some cases, buildings were completely destroyed, leaving little except the foundations. Insurance claims adjusters and industry participants we spoke with acknowledged that assessing the cause and extent of damages was more problematic when little evidence of the structure was left. Exacerbating such difficulties was the fact that adjusters commonly arrived on the damage site several weeks after Hurricane Katrina occurred, given the scope of damage. During the time between Hurricane Katrina and the arrival of the adjusters, the remaining evidence at damage scenes may have been further compromised by subsequent natural and man-made events (such as the clearing of debris from streets and roadways). Finally, there is an inherent conflict of interest when the same insurer is responsible for assessing damages for its own property-casualty policy, as well as for the NFIP policy, each covering different perils on the same property. As part of the WYO arrangement, private property-casualty insurers are responsible for selling and servicing NFIP policies, including performing the claims adjustment activities to assess the cause and extent of damages. When the WYO insurer writes and services its own policy, along with the NFIP policy for the same property, the insurer is responsible for determining the cause of damages and, in turn, how much of the damages it will pay for and how much NFIP will cover. In certain damage scenarios, the WYO insurer that covers a policyholder for wind losses can have a vested economic interest in the outcome of the damage determination that it performs when the property is subjected to a combination of high winds and flooding. In such cases, a conflict of interest exists with the WYO insurer as it determines which damages were caused by wind, to be paid by itself, and which damages were caused by flooding, to be paid by NFIP. 
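The $450,000 example above can be reduced to a small arithmetic sketch of why the damage determination matters so much, both for the homeowner's recovery and for who pays. The $250,000 NFIP structure limit is the statutory cap cited in the report; the wind policy limit and the attribution shares below are hypothetical assumptions.

```python
# Illustrative recovery arithmetic. The NFIP structure limit is the
# statutory cap discussed in the report; other figures are hypothetical.
NFIP_STRUCTURE_LIMIT = 250_000

def payout(total_damage, flood_share, wind_policy_limit):
    """Combined recovery when adjusters attribute damage between perils.

    The flood share is paid by NFIP up to its structure limit; the
    remainder is treated as wind damage paid under the wind policy.
    """
    flood_damage = total_damage * flood_share
    wind_damage = total_damage - flood_damage
    return (min(flood_damage, NFIP_STRUCTURE_LIMIT)
            + min(wind_damage, wind_policy_limit))

# $450,000 in total damage: recovery ranges from $250,000 (all attributed
# to flood) to $450,000 (all attributed to wind), depending solely on the
# damage determination made by the adjuster.
for share in (1.0, 0.5, 0.0):
    print(f"flood share {share:.0%}:", payout(450_000, share, wind_policy_limit=450_000))
```

The same attribution also shifts cost between NFIP and the WYO insurer, which is the source of the conflict of interest described above when one company holds both policies.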
Moreover, the amount WYO insurers are compensated for servicing a flood claim also increases as the amount of flood damage on a claim increases—an allowance of 3.3 percent of each claim settlement amount.

Legal Disputes Involving Policy Coverage Have Arisen Since the 2005 Hurricane Season

In the aftermath of the 2005 hurricane season, legal disputes emerged between policyholders and insurers that centered largely on the extent to which damages would be covered under a homeowners policy, as distinct from an NFIP policy, when both high winds and flooding occurred. Such disputes have been and continue to be argued and resolved through state and federal courts, as well as through mediation programs. Many of these cases have concerned the interpretation and/or enforceability of certain property-casualty policy language in the context of challenging the cause of the damages or losses. For example, some disputes have raised the question of whether a policy's flood exclusion language clearly excluded the water-related event, such as storm surge, that caused the damages at issue. Other cases have challenged the enforceability of a property-casualty policy's anti-concurrent causation clause. Such a clause generally provides that coverage is precluded for damage caused directly or indirectly by an excluded cause of loss (for example, flood), regardless of any other cause (for example, wind) that contributes concurrently to or in any sequence with the loss. Many of these cases are still working their way through the judicial trial and appeals processes and will eventually be resolved based on the particular language of the policy, the evidence presented by both the policyholders and the insurers, and the governing state law. State mediation efforts have been initiated to help address the backlog of unresolved claims between policyholders and insurance companies on private homeowners policies.
These programs, particularly in Louisiana and Mississippi, have played a major role in facilitating many settlements of residential property insurance claims arising out of Hurricanes Katrina and Rita. Established after the 2005 hurricane season, these programs offer policyholders and insurers a nonbinding, alternative dispute resolution procedure to resolve claims and avoid the delays, expenses, and uncertainties of resolving the disputes through the courts. On the whole, state insurance regulators in Mississippi and Louisiana report that the majority of cases brought to mediation have been resolved.

Lack of Uniformity in Licensing and Training Requirements among States Creates Uncertainties about Some Adjusters' Qualifications

In spite of the importance of the insurance claims adjuster to policyholders after a national catastrophe, licensing and training requirements for adjusters vary considerably by state. Some states have no requirements for insurance claims adjusters, others have them for most types of adjusters, and many states have them for some types of adjusters but not for others. This lack of uniformity results in uncertainties over the qualifications and training of claims adjusters. Further, states may temporarily relax these requirements after a catastrophe. Claims adjusters who adjust flood insurance claims, however, must be trained and certified by NFIP. Following Hurricane Katrina, some states that lacked licensing requirements for adjusters passed laws to raise the level of oversight for adjusters.

States' Licensing and Training Requirements for Claims Adjusters Vary Widely

During our review, we found that adjuster licensing and training requirements varied considerably among states, including those along the Gulf Coast.
Of the eight coastal states we contacted, most had varying degrees of licensing and training requirements for different types of adjusters during the 2005 hurricane season (Florida, Georgia, Mississippi, North Carolina, South Carolina, and Texas), while two states (Louisiana and Alabama) had no examination or continuing education requirements for claims adjusters at that time. Some of the coastal states had also instituted some common licensing requirements for staff adjusters, independent adjusters, and public adjusters, while others had varying requirements for different types of adjusters. Similarly, information gathered from industry representatives showed that licensing and training requirements varied substantially among the states nationwide. Figure 2 summarizes the varying level of requirements for claims adjusters among several coastal states, as well as recent legislation enacted in some of the coastal states impacted by Hurricane Katrina to strengthen their requirements. For coastal states with licensing and training requirements for claims adjusters, a state licensing examination has been the principal oversight tool used to regulate the entry of adjusters into the marketplace. According to insurance regulators, the state licensing exam typically includes questions on insurance regulation, adjusting practices, and different kinds of insurance policies. Some states also require a certain level of continuing education before a license can be renewed, while others do not. Continuing education requirements also vary among states for different types of adjusters. For the states we contacted, continuing education requirements were mixed, with some of the states requiring a certain level of continuing education for some types of adjusters, while other states did not have continuing education requirements. 
For example, during the 2005 hurricane season, staff and independent adjusters employed in Florida and Texas were required to take at least 24 hours of continuing education every 2 years, while other coastal states had no continuing education requirements for some types of adjusters. Motivated largely by concerns about the adjustment process, some states that were impacted by the 2005 hurricanes enacted legislation to raise their level of oversight for adjusters. When Hurricane Katrina hit, Louisiana did not regulate any types of adjusters, and adjusters were able to conduct business there without a license. In 2006, the Louisiana State Legislature passed, and the governor signed, The Louisiana Claims Adjuster Act, which required that staff and independent adjusters become licensed beginning on June 30, 2007. Like other states, the Louisiana Department of Insurance will issue nonresident adjusters a reciprocal license as long as they are currently licensed in their home states. In Mississippi, legislative proposals were also introduced for additional oversight requirements for public adjusters. After Hurricane Katrina, the state of Mississippi allowed public adjusters to work in the state under an emergency provision approved by the Insurance Commissioner. In 2007, the Mississippi State Legislature passed, and the governor signed, a bill to allow public insurance adjusters to operate in the state permanently and have their practices regulated, a change that requires these adjusters to get certifications, licenses, and continuing education. In addition to licensing and training requirements, some state regulators we contacted also said they relied on insurance companies’ quality control measures to help ensure the quality of adjusters. Insurance companies and adjuster firms generally provide some degree of in-house or external training for their adjusters, according to industry participants. 
However, insurance companies and adjuster firms we contacted generally declined to share company-specific instructions and manuals for their insurance claims adjusters, citing proprietary concerns. In contrast to the varying requirements for claims adjusters among the states, NFIP conducts limited but uniform mandatory training to certify individuals as flood adjusters. Flood adjusters must be trained and certified annually. In addition, FEMA provides ongoing oversight of NFIP claims adjustments through its claims reinspection program. However, because independent claims adjusters must be licensed by a state to be certified as a flood adjuster, the underlying qualifications and training of adjusters that seek to become flood adjusters remain varied, as they depend on the state. In the absence of uniform state standards for claims adjusters, neither NFIP, state insurance regulators, nor policyholders can be certain of the minimum qualifications held by a claims adjuster assigned to a particular property, increasing the possibility of inconsistent claims adjustments and payments for similarly damaged properties.

States May Waive Requirements for Adjusters During Emergencies, Potentially Magnifying the Impacts of Varied State Standards

Given the lack of uniformity in adjuster licensing and training requirements among states, the qualifications and level of training of the adjusters called upon in catastrophe situations can vary considerably. A state's normal oversight requirements for claims adjusters can be weakened by nonresident licensed adjusters that are allowed to operate from states with less stringent requirements. Further, while most states have some adjuster licensing and training requirements that are applicable to some types of adjusters, these oversight measures can be waived in emergency situations, as they were in the aftermath of Hurricane Katrina.
The majority of states allow nonresident adjusters to operate within state borders as long as the adjusters are licensed in other states. However, differences in the qualifications and training of adjusters allowed to operate in a state can materialize when this practice of reciprocity occurs in the absence of uniform regulatory requirements. In most of the coastal states we reviewed, nonresident adjusters were exempted from taking the licensing exams if they were licensed in their home state. Although some states have similar licensing examination requirements, oversight of adjusters, nevertheless, lacks uniformity. Issues related to the quality and consistency of regulatory requirements for insurance claims adjusters across states also exist in other aspects of insurance regulation. For other regulatory functions—such as the licensing of insurance agents—many states accept licenses from other states as long as those states reciprocate. As we have reported in other work, success with state reciprocity of licensing functions depends on the adequacy and uniformity of requirements among states. In the absence of adequate and consistent licensing requirements, reciprocity can reduce one state’s level of oversight to the more limited standards of another. Additionally, all of the coastal states we contacted had provisions for allowing “emergency adjusters” to augment the normal force of adjusters by waiving the normal licensing and training requirements for adjusters, if warranted by the scope of damage. Accordingly, coastal states most impacted by Hurricane Katrina invoked emergency procedures to allow additional adjusters to operate in their states without having to meet the normal licensing and training requirements. However, a state’s oversight requirements for claims adjusters may be weakened when nonresident licensed adjusters from states with less stringent requirements are allowed to operate in states with higher standards. 
During our review, insurance regulatory officials and industry participants and observers acknowledged possible inconsistencies and errors in adjustments, given the shortage of adjusters and the varying qualifications of those that worked in the aftermath of Hurricane Katrina. Some states have attempted to address concerns and uncertainties over the qualifications of emergency adjusters through a variety of approaches. For example, Florida, Mississippi, North Carolina, and Texas require that work performed by emergency adjusters be reviewed and certified by a sponsoring licensed adjuster or insurance company. North Carolina has set minimum guidelines for certifying adjusters on an emergency basis that take into account, for instance, their level of experience. South Carolina requires that emergency adjusters file an adjuster licensing application, while Louisiana, which had no oversight requirements for emergency adjusters during the 2005 hurricane season, now requires emergency adjusters to register their name and employment contact information but imposes no other requirements. State insurance regulators can also use market conduct examinations to further scrutinize a company’s claims adjustment processes. As we have reported in previous work, state practices for market conduct exams vary widely, and most insurance departments do not perform such exams on a routine basis. However, most states can initiate targeted examinations to assess certain company activities if they receive consumer complaints suggesting a potential issue. The types of consumer complaints received by state insurance regulators include those related to the denial of claims, the untimely processing of claims, and the misrepresentation of coverage. Some states had initiated market conduct examinations on selected companies to assess their claims handling activities tied to the 2005 hurricane season and subsequent consumer complaints.
For example, state insurance regulators in Louisiana conducted several market conduct examinations on various insurers. However, according to state regulators, these examinations focused on evaluating the timeliness of claims payments in accordance with state statutes rather than on the wind-versus-flood issue. In Mississippi, state regulators mentioned that market conduct examinations pertaining to claims processing activities following Hurricane Katrina were still ongoing.

Lack of Relevant Claims Data Limits FEMA’s Ability to Oversee Hurricane Damage Assessments

Limited data are available for evaluating the damage assessments and claims payments when properties are subjected to both high winds and flooding and the extent of damage caused by each peril is difficult to determine. Data collected by NFIP from WYO insurers—including those that serviced both NFIP flood policies and their own policies for wind-related risks on the same properties—include only information on damage deemed by the WYO insurers to have been caused by flooding. This limited information prevents NFIP from knowing how each peril contributed to the total damages and thus from verifying that flood insurance claims payments were accurate. The lack of data also limits FEMA’s reinspection program, because wind damage information is relevant to understanding how all perils contributed to damages when properties were subjected to both high winds and flooding. Further, the lack of transparency over the extent of wind damage deemed to have contributed to total damages limits FEMA’s ability to address the conflicts of interest that arise when the WYO insurer is also the wind insurer on the property. FEMA and NFIP program officials have stated that they do not have the authority to access data on wind claims for NFIP-insured properties.
NFIP program contractors also stated they cannot access WYO insurers’ policies, procedures, or instructions describing to adjusters how wind damage should be determined in conjunction with flood damage when properties are subjected to both perils.

NFIP Generally Lacks Needed Data on Wind Damage Claims for Properties That It Insures

NFIP does not systematically collect and analyze data on wind-related damage when collecting flood claims data on properties subjected to both high winds and flooding, such as those damaged in the aftermath of Hurricanes Katrina and Rita. Further, NFIP has not sought such information even when the same insurance company serves as both the NFIP WYO insurer and the insurer for wind-related risks. WYO insurers are required to submit flood damage claims data in accordance with NFIP’s Transaction Record Reporting and Processing (TRRP) Plan for inclusion in the NFIP’s claims database. In our review of data elements in NFIP’s claims database, we found that NFIP does not require WYO insurers that are responsible for adjusting flood claims to report information on property damages in a manner that could allow NFIP to differentiate how these damages (to the building or its contents) were divided between wind and flooding. Specifically, the TRRP Plan for WYO insurers instructs them to include only flood-related damage in the data fields on “Total Building Damages” and “Total Damage to Contents.” Further, the “Cause of Loss” data field does not incorporate an option to explicitly identify property damages caused or partially caused by wind. As a result, WYO insurers do not report total property damages in a manner that (1) identifies the existence of wind damage or (2) discerns whether damages were divided between wind and flooding for properties that were subjected to a combination of both perils.
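The reporting gap described above can be sketched schematically. In this illustrative example, only the field names “Total Building Damages,” “Total Damage to Contents,” and “Cause of Loss” come from the TRRP Plan as described in this report; the wind-related fields, the dollar amounts, and the record layout are hypothetical, shown only to illustrate what property-level apportionment data would add.

```python
# Hypothetical sketch: a flood claim record as reported under the TRRP Plan
# captures only flood-related damage (field names per this report; amounts
# and dictionary layout are invented for illustration).
current_record = {
    "total_building_damages": 120_000,   # flood-related damage only
    "total_damage_to_contents": 30_000,  # flood-related damage only
    "cause_of_loss": "flood",            # no option to identify wind damage
}

# A record extended with hypothetical wind fields would let NFIP see how
# total damages were apportioned between the two perils.
extended_record = {
    **current_record,
    "wind_damage_building": 45_000,      # hypothetical field
    "wind_damage_contents": 10_000,      # hypothetical field
    "wyo_is_wind_insurer": True,         # hypothetical conflict-of-interest flag
}

def total_damages(record):
    """Sum flood and (if reported) wind damages for a property."""
    return (record["total_building_damages"]
            + record["total_damage_to_contents"]
            + record.get("wind_damage_building", 0)
            + record.get("wind_damage_contents", 0))

print(total_damages(current_record))   # 150000 -- flood share only
print(total_damages(extended_record))  # 205000 -- both perils visible
```

Under the current reporting scheme, only the first figure is visible to NFIP, so the program cannot tell whether the flood payment reflects all, most, or only part of the total damage to the property.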
Further, NFIP program contractors stated that they did not systematically track whether the WYO insurer processing a flood claim on a property was also the wind insurer for that property. This lack of information limits FEMA’s ability to adequately oversee the WYO insurers and verify that damage paid for under the flood policy was caused only by the covered loss of flooding. In past years, determining the cause of damages has been an issue. For example, as we reported in 2005, following Hurricane Isabel, one of the reasons that claims for additional losses were not paid was that the damage was caused not by flooding but by wind-driven rain. NFIP’s normal claims processing activities were stressed during the 2005 hurricane season. For Hurricanes Katrina and Rita, FEMA estimates that NFIP has paid approximately $16.2 billion in claims, with average payments exceeding $95,000 and $47,000, respectively. As we reported in December 2006, in an effort to assist policyholders, FEMA approved expedited NFIP claims processing methods that were unique to Hurricanes Katrina and Rita. Some expedited methods included the use of aerial and satellite photography and flood depth data in place of a site visit by a claims adjuster for properties that likely had covered damages exceeding policy limits. Under other expedited methods, FEMA also authorized claims adjustments without site visits if only foundations were left and the square-foot measurements of the dwellings were known. Such expedited procedures facilitated the prompt processing of flood claims payments to policyholders, but once these flood claims—and others—were processed, NFIP did not systematically collect corresponding wind damage claims data on an after-the-fact basis. Without information on both wind and flood damages to properties subjected to both perils, NFIP has reduced assurance that the amounts it paid for flood claims were actually limited to flood damage.
FEMA officials stated that they do not have access to wind damage claims data from the WYO insurers. Accordingly, NFIP does not systematically collect data on wind damage for properties for which a flood claim has been received. Rather, FEMA officials maintain that they review the quality of claims adjustments through their reinspection program and periodic operational reviews of companies. FEMA officials that we contacted expressed different opinions concerning the need for the authority to obtain wind-related data. While some FEMA and NFIP contract officials stated that having the authority to obtain and analyze wind-related claims information would be helpful in reviewing claims, other senior FEMA officials questioned the usefulness of such information, maintaining that existing oversight activities are generally sufficient without an additional review of wind-related claims data. Without analyzing wind-related claims information, however, FEMA’s oversight process is limited for determining whether the inherent conflict of interest that exists when a WYO insurer services its own policy and the flood insurance policy on the same property is adversely affecting claims determinations. This concern has also been noted in an interim report by the Department of Homeland Security’s Office of Inspector General, which stated, “NFIP oversight focused primarily on whether the flood claim was correctly adjudicated with little or no consideration for wind damage as a contributing factor.” The Office of Inspector General’s work also includes subpoenaing wind claims information from WYO insurers to reevaluate wind-versus-flood determinations; this work was ongoing at the time this report was completed.
FEMA’s Reinspection Program Has Limited Ability to Validate the Accuracy of Payments on Certain Hurricane-Damaged Properties Given the Lack of Information Available on Wind-Related Damage Claims

FEMA’s reinspection program, which reevaluates the adjustment process and the flood payments made, does not collect information that could enable FEMA to validate the claims payments on certain hurricane-damaged properties. The reinspection program does not systematically evaluate the apportionment of damages between wind and flooding, even when a conflict of interest exists with a WYO insurer. For example, the program does not have a means of identifying whether wind-related damage contributed to losses on the properties it evaluates or the extent of such losses. Without the ability to examine damages caused by both wind and flooding in some cases, the reinspection program is limited in its ability to assess whether NFIP paid only the portion of damages it was obligated to pay under the flood policy. During our study, we reviewed 740 reinspection files for properties with flood claims associated with Hurricanes Katrina and Rita. We found that most of these files did not document a determination of whether or not damages were caused by a combination of wind and flooding and did not adequately document whether the claim paid reflected only the damage covered by the flood insurance policy rather than damage caused by uncovered perils, such as wind. Rather, the files contained limited and inconsistent documentation concerning the presence or extent of wind-related damage on properties and lacked the documentation that would have enabled NFIP to verify that damages paid for under the flood policy were caused only by the covered loss of flooding. Specifically, the reinspection activities focused on reevaluating the extent to which building and content damages were caused by flooding in the absence of information concerning wind-related damage.
While some of the files documented damages that had been caused by a combination of wind and flooding, most did not. Around two-thirds of the 740 reinspection files did not indicate whether the damages had been caused only by flooding or by a combination of wind and flooding and did not include enough documentation for a reviewer to make such a determination. Approximately 26 percent of the files indicated that the damages were caused only by flooding, and 8 percent indicated that the damages were caused by a combination of wind and flooding. When NFIP program contractors conducting the reinspections did indicate that damages were caused by a combination of wind and flooding, insufficient documentation existed to determine the extent to which the wind damage contributed to total property damages and, hence, the accuracy of the flood damage claim. Concerning the lack of wind damage claims data available to NFIP, we found that hurricane claims data gathered separately by state insurance regulators were of limited value for understanding how wind and flooding contributed to property damages. In the aftermath of Hurricanes Katrina and Rita, state insurance regulators in Alabama, Florida, Louisiana, Mississippi, and Texas jointly established a data call mechanism to collect aggregate claims data associated with the storms reported by property-casualty insurers. But such data were of limited value for assessing how wind and flooding contributed to damages because this information lacked sufficient geographic detail to be matched with corresponding flood claims data on a community-level (e.g., zip-code) or property-level basis. Rather, claims data reported by property-casualty insurers were reported on a statewide and county- or parish-level basis for different elements.
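The file breakdown above can be checked with simple arithmetic. This sketch applies the report’s approximate percentages to the 740 reinspection files; because the percentages are rounded, the resulting counts are estimates, not exact figures from the review.

```python
# Back-of-the-envelope check of the reinspection-file breakdown described
# in this report (percentages are approximate, so counts are rounded).
total_files = 740

flood_only = round(0.26 * total_files)       # ~26% indicated flooding only
combined = round(0.08 * total_files)         # ~8% indicated wind and flooding
undetermined = total_files - flood_only - combined

print(flood_only, combined, undetermined)    # roughly 192, 59, and 489 files
print(round(undetermined / total_files, 2))  # ~0.66 -- about two-thirds
```

The residual category, roughly two-thirds of the files, is the set that neither documented a cause-of-damage determination nor contained enough documentation for a reviewer to make one.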
As a result, the hurricane claims data collectively gathered by state insurance regulators would have been of limited benefit to NFIP to understand how both wind and flooding contributed to property damages. State insurance regulators, through NAIC, are currently developing specifications and exploring the feasibility of collecting more geographically detailed information for an updated disaster reporting system based on lessons learned from recent hurricanes and comments from interested parties about monitoring insurance claims following a natural disaster. In the aftermath of the 2005 hurricane season, the NFIP reinspection process was also challenged by the severity and scope of the damages. Many properties were completely destroyed, making damage determinations and reevaluations of such determinations difficult. The on-site reinspections of properties with flood claims associated with Hurricanes Katrina and Rita were generally conducted several months after the event—delays that were to some extent understandable, considering the magnitude of the devastation. But the delays further limited FEMA’s ability to reevaluate the quality and accuracy of the initial damage determinations, given the ongoing natural and man-made events that continued to alter the damage scenes. Additionally, we have previously reported that FEMA did not choose statistically valid random samples of the universe of all closed claims for its reinspection process. Therefore, the results of the reinspections could not be projected to the universe of properties for which flood claims were made. Accordingly, we have previously recommended that FEMA select a statistically valid sample of reinspections for its reinspection program. FEMA has agreed to implement this recommendation.
Finally, NFIP program contractors responsible for administering the reinspection program also mentioned that they do not have access to WYO companies’ adjusting policies, procedures, and instructions to assess the guidance provided to their adjusters (company staff or contracted) for discerning and quantifying the damages caused by wind versus flooding. The lack of information on the specific methodologies and instructions conveyed by WYO insurers to their force of adjusters diminishes the transparency over how damages were discerned between wind and flooding on hurricane-damaged properties and the extent to which these instructions are consistent with or at odds with FEMA’s instructions to adjusters. Absent such information along with the wind-related claims data, FEMA’s oversight of the NFIP WYO insurers to assess the accuracy of flood claims payments is limited, particularly in cases where the WYO insurer is also the wind insurer on the same property.

Conclusions

Resolving the unique insurance issues posed by hurricanes requires actions to address numerous uncertainties. The NFIP must balance pressures to quickly pay claims to policyholders with ensuring that it is enforcing the terms of the flood policy. Uncertainties involved in this process begin with the extent of covered damages from multiple policies, contingent on the damage scenario, and continue with the claims adjustment and regulatory oversight activities that follow. As we have seen, policyholders do not know in advance of a hurricane the extent to which damages will be covered because the amount of insured losses depends on whether it is a multiperil event, how much of the damages are caused by wind and how much by flooding, and how policy language will be interpreted in accordance with relevant state laws.
Other concerns can also materialize when the WYO insurer determines not only the damage caused by flooding that is covered by the flood policy, but also the damage caused by wind that is covered under its own property-casualty policy, creating an inherent conflict of interest that must be managed or mitigated. In the aftermath of Katrina, policyholders and insurance companies were, and continue to be, uncertain about how current language in property-casualty insurance policies will be interpreted, and numerous lawsuits continue to make their way through federal and state courts. Once an event has occurred, other uncertainties arise concerning the qualifications and training of claims adjusters. State licensing and training requirements vary considerably, and standards that do exist may be relaxed or eliminated after a major catastrophe, depending on the scope of damage. Additionally, uncertainties remain over whether the extent to which damages were caused by wind versus flooding on certain hurricane-damaged properties can be accurately discerned. The difficulty of this task increases when the evidence remaining at the damage scene is limited or compromised. Not surprisingly, the variations in adjusters’ qualifications, coupled with limited or compromised evidence at damage scenes, foster debate and uncertainty over the way damage determinations are made, the consistency of adjustments for similarly damaged properties, and how losses are apportioned between flood and wind insurers. In the absence of uniform state standards for claims adjusters, state insurance regulators, as well as policyholders, cannot be certain of the minimum qualifications or level of professional training of a claims adjuster assigned to a particular property, increasing the possibility of inconsistent claims adjustments and payments for similarly damaged properties.
Uncertainties are also present in the oversight of claims adjustment processes, given the lack of information concerning both wind and flood damage claims for certain hurricane-damaged properties. FEMA cannot be certain of the quality of NFIP claims adjustments allocating damage to flooding in cases where damages may have been caused by a combination of wind and flooding because NFIP does not systematically collect and analyze both types of damage claims data together on a property-level basis. Although FEMA officials believe they can verify the accuracy of flood claim payments without the wind data, there are situations where additional information is warranted. Without information on the wind damage claims adjustments prepared by WYO insurers at the time they submit flood claims on hurricane-damaged properties, FEMA lacks controls to independently assess whether or not the apportionments between flood and wind damage appear reasonable. FEMA officials have determined that they currently lack the authority to access the WYO insurers’ claims data and guidance to adjusters for wind-related claims to evaluate the reasonableness of the flood claims for properties that were also subject to damage from high winds. Hence, for a given property, NFIP does not know how each peril contributed to the total property damages or how adjusters working for the WYO insurers made such determinations. As a result, FEMA cannot be certain whether NFIP has paid only for damage caused by flooding when insurers with a financial interest in apportioning damages between wind and flooding are responsible for making such apportionments. 
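The control described above, matching a WYO insurer’s wind and flood determinations for the same property so the apportionment can be independently reviewed, can be sketched in miniature. Everything in this example is hypothetical: the record layouts, the dollar amounts, and the 95-percent review threshold are assumptions for illustration, not FEMA or NFIP data formats or policy.

```python
# Illustrative sketch of the oversight control this report describes:
# matching wind and flood claims for the same property when the WYO insurer
# holds both policies, so the apportionment can be flagged for review.
# All structures, amounts, and thresholds are hypothetical.

flood_claims = {"prop-001": 150_000, "prop-002": 95_000}  # NFIP flood payments
wind_claims = {"prop-001": 5_000}  # wind payments by the same WYO insurer

def flag_for_review(prop_id, total_damage_estimate, flood_share_threshold=0.95):
    """Flag a property where the WYO insurer also held the wind policy and
    nearly all damage was apportioned to the flood policy (paid by NFIP)."""
    has_conflict = prop_id in wind_claims  # insurer adjusted both policies
    flood_share = flood_claims.get(prop_id, 0) / total_damage_estimate
    return has_conflict and flood_share >= flood_share_threshold

# prop-001: 150,000 of an independently estimated 155,000 in total damage
# was assigned to flooding (share ~0.97) by an insurer that also held the
# wind policy, so it is flagged for closer reinspection.
print(flag_for_review("prop-001", 155_000))  # True
print(flag_for_review("prop-002", 200_000))  # False: no conflict of interest
```

The point of the sketch is narrow: with both claims visible on a property-level basis, an implausibly lopsided apportionment by a conflicted insurer becomes detectable; with only the flood side reported, it does not.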
Matters for Congressional Consideration

To strengthen and clarify FEMA’s oversight of WYO insurers, particularly those that service both wind and flood damage claims on the same property, we recommend that the Congress consider giving FEMA clear statutory access to:

- both wind and flood damage claims information available from NFIP’s WYO insurers in cases in which it is likely that both wind and flooding contributed to any damage or loss to covered properties, enabling NFIP to match and analyze the wind and flood damage apportionments made on hurricane-damaged properties in a systematic fashion, as appropriate; and
- the policies, procedures, and instructions used by WYO insurers and their adjusters for both flood and wind claims to assess and validate insurers’ claims adjustment practices for identifying, apportioning, and quantifying damages in cases where there are combined perils.

Recommendation for Action

We recommend that state insurance commissioners, acting through NAIC, enhance the quality and consistency of standards and oversight for all types of claims adjusters among states through more stringent and consistent licensing and training requirements for adjusters, including, in those states where appropriate, training to assess and apportion damages due to wind, flooding, or both.

Agency Comments and Our Evaluation

We requested comments on a draft of this report from FEMA and NAIC. The Department of Homeland Security provided written comments on a draft of this report, which have been reprinted in appendix II. FEMA concurred with our recommendation to strengthen licensing requirements for adjusters but disagreed with the matters for congressional consideration to give FEMA clear statutory authority to obtain (1) wind damage claims information available from WYO insurers and (2) the policies, procedures, and instructions used for determining wind damage versus flood damage when properties are subjected to both perils.
In oral comments, NAIC expressed general agreement with the draft’s findings and recommendations. In addition, both FEMA and NAIC provided technical comments, which we have incorporated as appropriate. FEMA stated that it believed existing oversight measures for NFIP and WYO insurers were sufficient and that statutory access to wind and flood damage claims information from NFIP WYO insurers would place an unneeded burden and cost on NFIP. FEMA also stated that it did not believe NFIP needs the wind estimate or data to determine the amount of flood damage that occurred. It also noted that additional unnecessary costs would be incurred to access and analyze wind damage claims information from WYO insurers. We disagree. Because of the inherent conflict of interest that exists when WYO insurers are the property-casualty insurers for wind claims and are also responsible for servicing the flood claims on the same properties, FEMA must ensure that its internal controls are sufficient to minimize the potential adverse impacts of this conflict on the accuracy of damage determinations and flood claims payments. Accurately determining claims payments is particularly important, given the likely eventuality that FEMA would need to draw on the U.S. Treasury to pay flood losses that exceed the funds available from premiums. We do not suggest that FEMA collect and analyze wind claims data for each claim or even each flood event. Rather, we recommend that FEMA have the ability to access wind damage claims information when it is available from the WYO insurer—that is, in circumstances when the WYO insurer is responsible for servicing both the wind and flood policies on the same property and when uncertainties exist, such as when the physical evidence has been compromised or limited physical evidence remains.
Obtaining wind damage claims information that is already available from WYO insurers establishes proper transparency over the adjustment process when both wind and flooding contribute to damages without an unreasonable or costly burden. As long as a conflict of interest exists with a WYO insurer that services its own policy for wind-related risks along with the NFIP flood policy on the same property, additional controls are warranted. When properties are subjected to both wind and flood perils, particularly where uncertainties exist because evidence at the damage scene is limited or compromised, maintaining transparency over the adjustment process requires knowing whether the WYO insurer is also the wind insurer for the same property and, if so, the extent of damage it determined to be caused by wind versus flooding. Furthermore, when the same insurance company has already determined the amount of damage caused by wind and flooding for a given property, obtaining and assessing this available information should not be cost prohibitive for FEMA or WYO insurers. The authority to access policies, procedures, and guidance used for determining wind versus flood damage would enable FEMA to have a more complete understanding of how concurrent damages are handled by the WYO insurers. Such information would strengthen FEMA’s oversight and ability to identify abuses and better ensure the accuracy of flood payments made.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date.
At that time, we will send copies to the Administrator of FEMA; the Chief Executive Officer of NAIC; the Chairman of the House Committee on Financial Services; the Chairman and Ranking Member of the Senate Committee on Banking, Housing, and Urban Affairs; the Chairman and Ranking Member of the House Committee on Homeland Security; the Chairman and Ranking Member of the Senate Committee on Homeland Security and Governmental Affairs; and other interested committees and parties. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or williamso@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

To evaluate how key insurance coverage issues can arise when multiple insurance plans are tied to a hurricane-damaged property, we contacted and collected information from the Federal Emergency Management Agency (FEMA), National Flood Insurance Program (NFIP) contractors, state insurance regulators, the National Association of Insurance Commissioners (NAIC), property-casualty insurers, state-sponsored wind insurers, insurance agents, claims adjusters, industry associations, and mediators. This work encompassed reviewing key areas and limits of coverage from insurance policies offered through NFIP and property-casualty insurers to identify potential gaps in coverage that can arise based on the terms of such policies and the nature of the damage. Additionally, we reviewed the roles and responsibilities of write-your-own (WYO) insurers that service NFIP policies to identify whether a conflict of interest exists with a WYO insurer in certain circumstances.
To evaluate state insurance regulators’ oversight of the licensing and performance of loss adjusters, we contacted and collected information from state insurance regulators, NAIC, property-casualty insurers, state-sponsored wind insurers, claims adjusters, and industry associations. We collected and compared licensing and training requirements for claims adjusters provided by state insurance regulators in several coastal states, incorporating information on requirements that existed prior to the 2005 hurricane season, as well as subsequent legislation enacted by some coastal states to strengthen oversight requirements for adjusters. We also discussed the activities, challenges, and damage scenarios encountered by claims adjusters in the aftermath of recent hurricanes with state regulators, FEMA and NFIP program officials, and industry participants. We also requested information from some property-casualty insurers and claims adjustment firms on their guidance (policies, procedures, manuals, and instructions) to claims adjusters on how to discern and quantify wind versus flood damages when properties are subjected to both perils. Industry participants declined to provide such information, citing proprietary concerns and ongoing litigation. This work included on-site fieldwork in Florida, Illinois, Louisiana, Mississippi, North Carolina, Pennsylvania, South Carolina, and Texas. To evaluate the completeness of the information that NFIP collects and analyzes in order to determine whether damage determinations and flood payments made accurately reflect the actual distribution of losses between wind and flooding, we reviewed claims information collected by NFIP from WYO insurers servicing the flood claims. This work included reviewing the type of information routinely collected from WYO insurers through NFIP’s Transaction Record Reporting and Processing (TRRP) Plan.
In addition, we obtained information on FEMA’s reinspection program that is used to reevaluate the quality of NFIP claims that have been processed. We assessed the type of information used by NFIP to validate the damage determinations made by WYO insurers, reviewing a statistically valid sample of files (740) of reinspections that NFIP conducted on selected properties from Hurricanes Katrina and Rita. We also reviewed hurricane claims data collectively gathered by several state insurance regulators to ascertain the extent to which such information would be useful for assessing wind versus flood damage determinations made on properties. We conducted our review between May 2006 and November 2007 in accordance with generally accepted government auditing standards.

Appendix II: Comments from the Department of Homeland Security

Appendix III: GAO Contact and Staff Acknowledgments

Staff Acknowledgments

In addition to the contact named above, Lawrence D. Cluff, Assistant Director; Tania Calhoun; Emily Chalmers; Rudy Chatlos; Chir-Jen Huang; Barry Kirby; Kristopher Natoli (intern); and Melvin Thomas made key contributions to this report.
Disputes between policyholders and insurers after the 2005 hurricane season highlight the challenges in understanding the cause and extent of damages when properties are subjected to both high winds and flooding. Questions remain over the adequacy of steps taken by the Federal Emergency Management Agency (FEMA) to ensure that claims paid by the National Flood Insurance Program (NFIP) cover only those damages caused by flooding. GAO was asked to evaluate (1) issues that arise when multiple insurance policies provide coverage for losses from a single event, (2) state regulators' oversight of loss adjusters, and (3) information that NFIP collects to assess the accuracy of damage determinations and payments. GAO collected data from FEMA, reviewed reinspection reports and relevant policies and procedures, and interviewed state regulatory officials and others about adjuster oversight and NFIP. Insurance coverage gaps and claims uncertainties can arise when coverage for hurricane damage is divided among multiple insurance policies. Coverage for hurricanes generally requires more than one policy because private homeowners policies generally exclude flood damage. But the extent of coverage under each policy depends on the cause of the damages, as determined through the claims adjustment process and the policy terms that cover a particular type of damage. This process is further complicated when the damaged property is subjected to a combination of high winds and flooding and evidence at the damage scene is limited. Other claims concerns can arise on such properties when the same insurer serves as both NFIP's write-your-own (WYO) insurer and the property-casualty (wind) insurer. In such cases, the same company is responsible for determining damages and losses to itself and to NFIP, creating an inherent conflict of interest. Differences in licensing and training requirements for insurance claims adjusters among states also create uncertainties about adjusters' qualifications. 
Prior to the 2005 hurricane season, some coastal states had few or no requirements, while others had requirements for most types of adjusters. Further, states can waive their normal oversight requirements after a catastrophic event to help address demand, as they did after Hurricane Katrina. As a result, significant variations can exist in the qualifications of claims adjusters available after a catastrophic event. Strengthened and more uniform state requirements for adjusters could enhance the qualifications of the adjuster force in future catastrophes and improve the quality and consistency of claims adjustments. NFIP does not systematically collect and analyze both wind and flood damage claims data, limiting FEMA's ability to assess the accuracy of flood payments on hurricane-damaged properties. The claims data collected by NFIP through the WYO insurers--including those that sell and service both wind and flood policies on a property--do not include information on whether wind contributed to total damages or the extent of wind damage as determined by the WYO insurer. The lack of this data also limits the usefulness of FEMA's quality assurance reinspection program to reevaluate the accuracy of payments. In addition, the aggregate claims data that state insurance regulators collectively gathered after Hurricanes Katrina and Rita were not intended to be used to assess wind and flood damage claims together on a property- or community-level basis. Further, FEMA program contractors do not have access to WYO insurers' policies, procedures, and instructions that describe to adjusters how wind and flood damages are to be determined when properties are subjected to both perils. FEMA officials stated that they did not have the authority to collect wind damage claims data from insurers. 
But without the ability to examine claims adjustment information for both the wind and flood damages, NFIP cannot always determine the extent to which each peril contributed to total property damages and the accuracy of the claims paid for losses caused by flooding.
Background About 19 percent of the nation’s adults and 21 percent of youths ages 9 to 17 have mental disorders at some time during a 1-year period. Among adults, about 5 percent have severe mental disorders, and nearly 3 percent have mental disorders that are both severe and persistent. Mental disorders include a wide range of specific conditions of varying prevalence. For example, chronic mild depression and major depressive disorders collectively affect about 10 percent of all adults during a 1-year period, and attention deficit/hyperactivity disorder affects about 4 percent of youths ages 9 to 17 during a 6-month period. Table 1 indicates the prevalence of selected mental disorders, each of which affects more than 1 million adults in a given year. Health insurance is an important factor influencing whether individuals with mental disorders have access to treatments that can be effective in diminishing the symptoms of disorders and improving patients’ quality of life. Absent treatment, according to the surgeon general, many individuals with mental disorders may suffer an increased incidence of lost productivity, unsuccessful relationships, and significant distress and dysfunction. Untreated mental disorders among adults can also have a significant and continuing effect on children in their care. Many Americans Rely on the Individual Health Insurance Market Although the majority (68 percent) of Americans under age 65 have employer-sponsored group coverage, a significant minority (5 percent, or 12.6 million) relied on private, individual health insurance as their only source of coverage in 2000. Individuals with certain labor force or demographic characteristics are more likely to depend on individual coverage than the general population. For example, 14 percent of workers in agriculture, forestry, and fisheries, and 19 percent of the self-employed, relied exclusively on individual health coverage in 2000. 
Moreover, the individual insurance market is an important source of coverage for early retirees—people in their fifties and early sixties who are not yet eligible for Medicare. About 13 percent of retirees between 50 and 64 had individual health insurance as their sole source of coverage in 2000. In addition, federal and state laws provide certain guarantees for eligible individuals moving from group to individual coverage. Portability provisions established by the Health Insurance Portability and Accountability Act of 1996 (HIPAA) guarantee access to coverage for certain individuals leaving qualified group coverage. To implement these portability requirements, states adopted different approaches, typically including guaranteed coverage by individual market carriers or enrollment in a state high-risk pool. To be HIPAA-eligible, individuals must meet certain requirements, including exhausting any group continuation coverage available under the Consolidated Omnibus Budget Reconciliation Act of 1985 (COBRA) or state law. The Individual Health Insurance Market Differs from the Group Market Important differences exist between the individual and group health insurance markets. Unlike employer-sponsored group coverage, where eligibility in a group is guaranteed by federal and state laws and premiums are generally based on the risks associated with a group of beneficiaries, eligibility and initial premiums in the individual markets of many states are based largely on an individual’s health status and risk characteristics. Also, unlike group markets, in which employers generally subsidize premiums, individuals must pay the full cost of their health insurance premiums. Finally, while both federal and state governments regulate group coverage, individual coverage is regulated almost exclusively at the state level. Individual market carriers are concerned about the potential for adverse selection. 
Adverse selection occurs when people who believe they are healthy refrain from purchasing individual market coverage because of its high cost and unsubsidized nature. If healthy people refrain from purchasing coverage, high-risk individuals may make up a disproportionate share of those seeking to purchase individual coverage, causing claims costs to rise. Carriers may then need to raise premiums to compensate. Responding to the higher premiums, healthier members of the pool may disenroll, resulting in an increasing spiral of higher risks and higher costs. To mitigate the potential for adverse selection, carriers in most states are permitted to use medical underwriting—that is, evaluate the health status and risk characteristics of each applicant and make coverage and premium decisions based on that information. Although both group and individual market health insurance plans generally include greater restrictions on mental health benefits than on benefits for other services, these restrictions are usually greater among individual market plans. Where not precluded by law, restrictions on mental health benefits can include (1) lower annual or lifetime dollar limits on what the plan will pay, (2) lower service limits, such as fewer covered hospital days or outpatient office visits, and (3) higher cost sharing, such as deductibles, copayments, or coinsurance. A typical group or individual health plan, in the absence of a requirement that mental health benefits and other benefits be equal, might cover unlimited hospital days and outpatient visits, pay 80 percent of covered services, and impose a lifetime limit of $1 million for other benefits. However, for mental health benefits, a typical group plan might cover only 30 hospital days and 20 outpatient visits per year, pay only 50 percent of covered services, and impose a $50,000 lifetime limit. 
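To make the typical benefit terms above concrete, the following sketch computes what an insurer would pay under a coinsurance rate and a lifetime dollar limit. The function and all dollar figures are hypothetical illustrations drawn from the examples in the text, not any carrier's actual rules.

```python
def plan_payment(charges, coinsurance, lifetime_limit, paid_to_date=0.0):
    # Insurer pays the coinsurance share of covered charges, but never more
    # than the remaining room under the plan's lifetime dollar limit.
    payment = round(charges * coinsurance, 2)
    remaining = max(lifetime_limit - paid_to_date, 0.0)
    return min(payment, remaining)

# $20,000 of covered treatment under the typical terms described above:
print(plan_payment(20_000, 0.80, 1_000_000))  # other benefits: 16000.0
print(plan_payment(20_000, 0.50, 50_000))     # mental health benefits: 10000.0
# A long course of treatment runs into the $50,000 mental health lifetime cap:
print(plan_payment(200_000, 0.50, 50_000))    # 50000.0
```

The same charges thus yield a markedly smaller insurer payment under the more restrictive mental health terms, before any annual service limits are even applied.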
Among individual market plans, if offered coverage, an individual may typically face even greater restrictions on mental health benefits, such as a lifetime dollar limit of $10,000 or an annual dollar limit of $3,500. Moreover, some individual market carriers may offer no benefits for outpatient care, such as visits to a mental health professional; may offer mental health benefits only under a separate policy at an increased cost; or may not offer any benefits for mental health treatment. Federal and state laws have begun to partially equalize benefit levels, although few of the laws apply to individual market plans. The Mental Health Parity Act of 1996 prohibited certain group plans from imposing annual or lifetime dollar limits on mental health benefits that are more restrictive than those imposed on other benefits, although its provisions did not place restrictions on other plan features such as hospital day or outpatient visit limits. The provisions apply only to group plans sponsored by employers with more than 50 employees and do not apply to coverage sold in the individual market. Several states have passed laws that exceed the federal law by requiring parity not only in dollar limits, but also in service limits and cost-sharing provisions. However, most of these state laws apply to group coverage and not individual coverage. As of March 2000, only 10 states required that mental health benefits be on a par with other benefits for all coverage sold in the individual market. In A Minority of States, Individual Market Carriers Guarantee Access to Coverage Access to the individual insurance market for persons with mental disorders or other health conditions depends largely on the insurance laws—and in limited instances, carrier practices—in their states. In 11 states, laws require that individuals with mental disorders or other health conditions be guaranteed access to coverage, regardless of health status. 
In 8 of the 11 states, all carriers participating in the individual market must guarantee access to at least one product to all applicants. In the remaining 3 states, only certain carriers, such as health maintenance organizations (HMOs) or Blue Cross and Blue Shield plans, guarantee access to coverage to all applicants. For example, in Michigan, state law requires the Blue Cross and Blue Shield plan to guarantee access to coverage for all applicants, and in Maryland, HMOs are required to have an open enrollment period every 6 months during which all applicants must be accepted regardless of health status. In 9 of the 11 states in which carriers are required to guarantee access to individual market coverage, carriers must also limit the extent to which premium rates vary between healthy and unhealthy applicants and thereby improve the affordability of coverage for high-risk individuals. Rate restrictions generally fall into two categories known as community rating or rate bands. Carriers in 6 of the 9 states use community rating. Under pure community rating, carriers set premiums at the same level for all enrollees, regardless of health status or demographic factors. Under adjusted community rating, limited adjustments are made for certain demographic factors, such as age, gender, or geographic location, but generally not for health status. For example, Maine permits premium rates to vary by no more than 20 percent above or below the standard rate for certain demographic factors, including age. Three of the 9 states require carriers to use rate bands to reduce the variation in premiums. Like adjusted community rating, rate bands permit limited adjustments from a base rate, but typically provide for a greater number of adjustments, including for health status, and a greater degree of variation in premium rates. For example, Idaho allows carriers to vary premiums by up to 25 percent above or below the standard rate for health status. 
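The rate-band arithmetic described above is simple to state precisely. The sketch below, using a hypothetical $300 monthly standard rate, shows the premium range a carrier could charge under the Maine and Idaho rules cited in the text.

```python
def premium_range(standard_rate, band_pct):
    # Lowest and highest premiums allowed when rates may vary by at most
    # band_pct percent above or below the standard rate.
    low = round(standard_rate * (1 - band_pct / 100), 2)
    high = round(standard_rate * (1 + band_pct / 100), 2)
    return low, high

# Hypothetical $300 monthly standard rate:
print(premium_range(300.0, 20))  # Maine's 20 percent band: (240.0, 360.0)
print(premium_range(300.0, 25))  # Idaho's 25 percent band: (225.0, 375.0)
```

Pure community rating corresponds to a band of zero, so every enrollee pays the standard rate regardless of health status or demographics.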
Table 2 indicates the states in which carriers are required to guarantee access to coverage and whether they are also required to limit the variation in premium rates. In 6 additional states, certain carriers—typically Blue Cross and Blue Shield plans—voluntarily guarantee access to coverage. In 3 of these 6 states, carriers use community rating to establish premiums. In the states where carriers do not use community rating, premiums for high-risk applicants may be significantly higher than standard rates. For example, several insurance agents in North Carolina said guaranteed access coverage for high-risk applicants in the state can cost several times the standard rate for a healthy applicant, or about $1,000 to $1,200 monthly. (See table 3.) Analysts have written extensively on the trade-offs involved in health insurance regulations intended to improve access to coverage. In general, requirements that carriers accept all applicants and limit the variation in the premiums they charge can result in improved access and affordability for high-risk applicants but may result in higher premiums for healthy applicants, which may lead some to discontinue their health insurance coverage. In Other States, Applicants with Mental Disorders May be More Likely to be Denied Coverage In the 34 states where individual market carriers are not required to guarantee access to coverage, carriers may deny coverage to any high-risk applicant, but may be more likely to deny coverage to those with mental disorders than other chronic health problems. The seven carriers participating in our study that sell individual market coverage in many of these states were more likely to deny coverage for hypothetical applicants with selected mental disorders (52 percent of the time) than for other selected chronic health conditions (30 percent of the time). 
Some carrier officials said it is more difficult to predict treatment costs for applicants with mental disorders, perhaps contributing to the reluctance of some carriers to offer coverage. However, our analysis of treatment cost variation for selected mental disorders and other chronic health conditions found that both had similarly wide variations in costs. Individuals with Selected Mental Disorders Likely to Incur High Claims and Thus Be Denied Coverage Carriers participating in our study would likely deny coverage to slightly more than half of the applicants currently being treated for one of six selected mental disorders. Generally, where not precluded by state or federal law, carriers may decline coverage to any applicant considered to be high risk. Health care cost and utilization data indicate that individuals with mental disorders, like others with health problems, are likely to incur higher-than-average health care costs. Thus, carriers may deny coverage or, if they offer it, charge a higher premium or restrict benefits, subject to state regulations. We asked the seven responding carriers to assume a hypothetical applicant had a selected mental disorder that had been previously diagnosed, was of moderate severity, and had been treated with prescription medication or other medical care within the prior year. We found that most carriers would likely reject an applicant with posttraumatic stress disorder, schizophrenia, manic depressive and bipolar disorder, and obsessive-compulsive disorder. (See table 4.) Nearly half would likely deny coverage for chronic depression. In most instances in which coverage would likely be offered, applicants would be charged higher premiums and could have benefits limited—such as by permanently excluding coverage for the mental disorder. For example, one carrier would accept for coverage an applicant with chronic depression, but would charge 45 percent above the standard rate. 
Another carrier would similarly accept an applicant with chronic depression, but would eliminate coverage for treatment of the depression in addition to charging the applicant 40 percent above the standard rate. An applicant or family member with attention deficit disorder would be the least likely to be denied coverage. Only one carrier would likely deny such an applicant outright, and three carriers would likely offer full coverage at the standard rate. The other three carriers would likely offer coverage but charge higher premiums, offer more limited benefits, or both. Carrier underwriting practices can vary considerably. For example, an official from one carrier said that only applicants with serious cases of depression and obsessive-compulsive disorders who are heavily medicated would be declined coverage, while another carrier indicated it would decline any applicant with chronic depression, regardless of severity, if currently under treatment. Officials from two carriers pointed out that declined individuals could reapply and be accepted later if their health problems resolve themselves. One of the carrier officials said an initially declined applicant could be offered coverage under a plan other than the one applied for, although the premiums would likely be higher. Health insurance agents we contacted similarly emphasized the variability of carrier underwriting practices. Published research also illustrates the variation in carrier underwriting practices as they relate to mental disorders. For example, one recent study specifically examined individual market carrier treatment of situational (short-term) depression. The study of carriers in eight localities around the country found that 23 percent would decline an applicant, 62 percent would offer coverage with a premium increase and/or a benefit limit, and 15 percent would offer full coverage at the standard rate. 
In addition, carriers’ underwriting practices relating to applicants with a history of treatment for mental disorders can vary considerably. Information we obtained during current and prior work examining the individual health insurance market indicates that some carriers may require applicants to be treatment-free for 6 months to 10 years before applications will be considered, depending on the carrier and the prior disorder. For example, the underwriting manual of one multistate carrier indicates that applicants treated for a specified set of mental disorders of moderate severity could be declined if treated within the prior year and either declined or accepted at a higher premium if treated from 1 to 5 years prior to the current application. Another carrier underwriting manual indicates that applicants treated for any neurotic or psychotic disorder would be declined until treatment-free for 2 or 5 years, depending on the nature and severity of the prior disorder. Carriers May Be More Likely to Decline Applicants with Mental Disorders than with Other Chronic Health Conditions To determine whether disparities exist in carrier underwriting practices based on whether an applicant has a mental or other chronic health condition, we compared the seven carriers’ likely underwriting decisions for six mental disorders with 12 other chronic health conditions. Our comparisons show that, although any applicant with a health condition may be declined, most carriers were more likely to decline applicants with one of the selected mental disorders than other selected chronic health conditions—52 percent versus 30 percent, respectively. (See figure 1.) For 52 percent of the 42 underwriting decisions related to applicants with the selected mental disorders, the carriers in our study indicated that they would likely decline the applicants. Only 7 percent of applicants with the selected mental disorders would likely be accepted at the standard premium with standard benefits. 
The remaining 41 percent would likely be accepted for coverage, but with increased premiums and/or limited benefits. Estimates of premium increases ranged from 20 to 100 percent above the standard rate for a healthy applicant. Benefit restrictions typically involved exclusions of coverage for treatment of the disorder either temporarily—for example, one carrier would likely exclude coverage for 2 to 5 years—or permanently. In comparison, the carriers would likely decline applicants in only 30 percent of the 84 underwriting decisions related to the other selected chronic health conditions. Similar to applicants with the selected mental disorders who might be accepted for coverage, applicants with other selected chronic health conditions accepted for coverage would also likely face other adverse underwriting actions. In half of the instances, applicants with other selected chronic health conditions would be charged a higher premium, offered more limited benefits, or both. In 20 percent of the instances, an applicant would likely be offered full coverage at the standard premium rate. While carriers may be more likely to decline applicants with more costly disorders, in some cases they may also be more likely to decline applicants with mental disorders than applicants with other chronic conditions with similar costs. Figure 2 compares the seven carriers’ likely underwriting decisions related to the selected mental and other chronic health conditions. We grouped the disorders into four cost quartiles to enable comparisons of underwriting decisions for mental and other chronic health conditions that have similar expected health care costs. Cost estimates reflect the average total annual health care costs (including insured and out-of-pocket costs) for individuals with the specified mental disorders or chronic conditions, based on national health care cost and utilization survey data. 
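The percentages in this section can be reproduced from raw decision counts. The tallies below are hypothetical counts chosen to be consistent with the reported figures (7 carriers times 6 mental disorders gives 42 decisions; 7 times 12 chronic conditions gives 84), not the actual survey responses.

```python
from collections import Counter

# Hypothetical outcome tallies consistent with the percentages in the text.
mental = Counter(decline=22, adverse=17, standard=3)    # 42 decisions
chronic = Counter(decline=25, adverse=42, standard=17)  # 84 decisions

def pct(outcomes, key):
    # Share of decisions with a given outcome, as a whole percentage.
    return round(100 * outcomes[key] / sum(outcomes.values()))

print(pct(mental, "decline"), pct(chronic, "decline"))    # 52 30
print(pct(mental, "standard"), pct(chronic, "standard"))  # 7 20
print(pct(chronic, "adverse"))                            # 50
```

The denial-rate gap (52 versus 30 percent) falls directly out of the counts; small rounding differences explain why such percentages need not sum exactly to 100.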
For example, for the mental disorder and the other chronic health condition in the highest cost quartile, five of the seven carriers would likely decline an applicant with schizophrenia while one would likely decline an applicant with osteoarthritis. Cost Variability Cited as a Key Reason to Deny Coverage to Applicants with Selected Mental Disorders To explain the greater likelihood of denying coverage to applicants with the selected mental disorders, several carrier officials and agents said that costs for treating mental disorders can be subject to greater variability than costs for treating other chronic health conditions, making it more difficult to accurately price for the unknown risk. They cited three factors that may contribute to treatment cost variability and unpredictability. First, they said that diagnosing mental disorders involves greater subjectivity than diagnosing most other health conditions. According to one carrier representative, different clinicians might arrive at different diagnoses for mental disorders, which, in turn, suggest different treatment approaches and thus variable claims costs. Second, several carrier officials and agents said that an individual with a mental disorder is likely to have additional health problems. For example, a carrier official said that someone suffering from depression or an anxiety disorder is also likely to incur claims for the treatment of stomach problems, headaches, or chronic fatigue. Finally, several carrier officials and agents said that certain forms of treatment for mental disorders have a tendency to be overused. For example, an agent said that many individuals become dependent upon and thus overuse expensive outpatient therapy or certain prescription drugs. Representatives from one carrier that generally accepts individuals with mental disorders said that the carrier has found no basis for disproportionately excluding applicants with mental disorders. 
According to one senior official of this carrier, which has a large pool of individual market enrollees, enrollees with mental disorders are not more likely to suffer from comorbid conditions than those with physical conditions. And while this official agreed with other carrier officials and agents that outpatient therapy has the potential for overuse, he believed that the plan’s cost sharing arrangements and service limits mitigate this tendency without the need for more restrictive underwriting. Regarding the subjectivity in diagnoses and varied treatment approaches, the official said that a majority of mental health treatment involves outpatient therapy, for which costs per visit are relatively predictable, and the number of visits is limited by cost sharing arrangements and service limits. To examine the extent of cost variation associated with the six mental disorders and 12 other chronic health conditions we reviewed, we analyzed national health care cost and utilization data and found that both types of disorders had similarly wide variations in cost. We also analyzed the data to determine whether individuals with the selected mental disorders had a higher number of additional health problems on average than did individuals with the selected other chronic health conditions. We did not identify a disparate relationship: both the mental disorders and the chronic conditions had similar average numbers of comorbidities—from 3.4 to 6.1 for the mental disorders and from 4.2 to 6.6 for the other chronic conditions. High-Risk Pools Are the Primary Source of Coverage for Applicants with Mental Disorders Who Are Denied Coverage Options available to individuals with mental disorders who are denied coverage in the individual market are limited. For most, state high-risk pools serve as the primary source of coverage. 
High-risk pool coverage typically costs 125 to 200 percent of standard rates for healthy individuals, and the risk pools’ mental health benefits are generally comparable to those available in the individual market, including more restrictions on mental health benefits than other benefits. In 7 states without guaranteed access laws or risk pools, most applicants denied coverage in the individual market may have very limited or no coverage alternatives. Risk Pools Operate in Most States Where Carriers Medically Underwrite Risk pools operate in 27 of the 34 states where individual market carriers do not guarantee access to coverage for all applicants. A risk pool is typically a state-created, not-for-profit association that offers comprehensive health insurance benefits to high-risk individuals and families who have been or would likely be denied coverage by carriers in the individual market. Premiums for pool coverage are higher than standard insurance coverage for healthy applicants, although not necessarily higher than a high-risk applicant could be charged in the individual market if coverage were available. State laws generally cap risk pool premiums at 125 to 200 percent of comparable commercial coverage standard rates. Health benefits contained in state high-risk pool plans are generally comparable to those available in the individual market; however, benefits for mental disorders or other health conditions are not permanently excluded as they can be in the individual insurance market. Also like private plans, nearly all plans offered by risk pools use features that restrict mental health benefits more than other benefits. For example:

Five pools set significantly lower lifetime dollar maximum limits for mental health benefits ($4,000 to $50,000) than for other benefits ($1 million to unlimited).

Eight pools impose more restrictive limits on inpatient mental hospital days (commonly 30 or fewer) than on other inpatient hospital days (often unlimited).

Six pools limit mental health outpatient visits to between 15 and 20 annually, and one offers no outpatient benefits, though other outpatient visits are generally unlimited.

Five pools reimburse 50 percent for mental health benefits rather than the usual 80 percent for other benefits.

Because medical claims costs exceed the premiums collected from enrollees, all risk pools operate at a loss, thus requiring subsidies. States generally subsidize their pools through various funding sources, including surcharges on private health insurance premiums (individual and group) and state general revenue funds. In three recent instances, risk pool applicants have had to wait for coverage to take effect because of funding limits. As of January 2002, risk pool applicants in California and Louisiana had to wait to receive benefits under the pool. In California, applicants must wait about 1 year to receive benefits. In Louisiana, applicants have been waiting since August 2001 for funding to become available. The risk pool in Illinois has had waiting lists in the past because of inadequate funding, most recently from September 2000 through the early summer of 2001. In 7 States, Applicants with Mental Disorders May Have Few or No Coverage Options Available In 7 states without a guaranteed issue requirement or a high-risk pool, applicants with mental disorders or other health conditions who are not eligible for continuation of group coverage or HIPAA portability coverage and who are denied coverage in the individual market may have very limited or no other access options. 
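The statutory premium caps described above translate directly into a maximum allowable pool premium. A minimal sketch, using a hypothetical $250 monthly standard rate (the cap percentages are the typical statutory range cited in the text):

```python
def max_pool_premium(standard_rate, cap_pct):
    # Highest premium a high-risk pool may charge when state law caps pool
    # rates at cap_pct percent of the comparable standard rate.
    return round(standard_rate * cap_pct / 100, 2)

# Hypothetical $250 monthly standard rate under the typical 125-200 percent caps:
print(max_pool_premium(250.0, 125))  # 312.5
print(max_pool_premium(250.0, 200))  # 500.0
```

Even at the lower end of the cap range, a pool enrollee pays at least a quarter more each month than a healthy applicant would for comparable commercial coverage.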
For example, in Georgia, insurance regulators said that, absent eligibility for a publicly funded program for low-income individuals such as Medicaid, individuals with mental disorders who are denied coverage by private carriers in the individual market have no other available coverage options. Concluding Observations In most states, applicants with any health problems may have difficulty finding affordable coverage in the individual insurance market, and those with mental disorders may face even greater challenges. Because of concern that individuals with mental disorders will incur more variable and less predictable health care costs than individuals with other chronic health conditions, some carriers may be more likely to deny them coverage. However, our analysis of national health care cost data did not identify such a disparity for the selected mental and other chronic disorders we reviewed. If applicants with mental disorders obtain coverage, mental health benefits are typically more restricted than other benefits in most states. Although most applicants who are denied individual market coverage for any health condition may obtain coverage in a state-sponsored high-risk pool, affordability is still an issue, with premiums typically 125 to 200 percent of standard rates in the private market. Moreover, like private coverage, high-risk pools typically restrict mental health benefits more than other benefits. In those few states with neither guaranteed coverage nor high-risk pools, most applicants with mental disorders may have few, if any, options for health insurance coverage. Comments From External Reviewers Representatives of the American Psychiatric Association (APA), the Blue Cross and Blue Shield Association (BCBSA), the Health Insurance Association of America (HIAA), and the National Alliance for the Mentally Ill (NAMI) provided comments on a draft of our report. 
The APA and NAMI representatives concurred with the report’s findings and conclusions, while BCBSA and HIAA expressed several concerns about some of our findings and conclusions. BCBSA and HIAA commented that coverage is more widely available to applicants with mental disorders in many states than we concluded. For example, HIAA indicated that it would be more appropriate to consider the 27 states with high-risk pools to have guaranteed access to health insurance. We agree that either approach—guaranteed access in the individual insurance market or high-risk pools—can provide applicants with access to health insurance coverage. However, we distinguished those states with carriers that are required or voluntarily agree to guarantee access from states with high-risk pools because there are differences in how individual insurance carriers underwrite in these states. For example, in states with guaranteed access in the individual insurance market, some or all carriers do not deny coverage to applicants with mental disorders and there are often premium restrictions that make coverage more affordable for high-risk applicants. In contrast, in states that do not guarantee access in the individual insurance market, carriers can deny coverage to applicants but the applicants can seek coverage through a high-risk pool. Like plans typically available in the individual market, high-risk pool benefits for mental disorders are often more limited than other benefits, premiums are typically 125 to 200 percent of standard individual insurance rates, and a few states have had waiting lists for eligible high-risk pool participants. As we have noted, only 7 states have neither guaranteed coverage in the individual insurance market nor a high-risk pool program. 
Further, both BCBSA and HIAA noted that at least some states with requirements that carriers guarantee access to all individuals have had negative unintended consequences, such as average premium increases, some individuals dropping coverage, and some carriers leaving the market. While it was beyond our scope to assess the experience of states that require carriers to accept all applicants and limit premium variation, we have noted that there are trade-offs between increasing access and affordability for high-risk applicants and increasing premiums for healthy applicants, and we cite other studies that have further examined these issues. BCBSA and HIAA also indicated that the report’s findings on the number and percentage of applicants who would be denied coverage are dependent on the mental and other chronic disorders selected for study. For example, BCBSA stated that if different chronic disorders had been selected, such as cancer, heart disease, chronic obstructive pulmonary disease, or human immunodeficiency virus (HIV), the difference in denial rates between applicants with mental disorders and those with other chronic disorders may have disappeared. We agree that our findings are limited to the specific conditions selected and the carriers responding to our requests for information. We did not compare mental disorders to nonmental disorders of a more serious or life-threatening nature—such as those cited by BCBSA—because we did not believe such comparisons would be valid, and previous studies have shown that insurers are likely to deny coverage for applicants with many of these life-threatening conditions. We selected the other chronic conditions based on several criteria to enhance their comparability with mental disorders, in particular that they be of a chronic and manageable nature.
We agree that within either the selected mental disorders or other chronic disorders there is a range of clinical severity, expected treatment costs, and insurer underwriting practices. Therefore, we asked the seven carriers to consider that each of the disorders was of moderate severity and that the applicant was taking prescribed drugs or received other medical treatment for the disorder within the past year. BCBSA and HIAA provided other technical comments that we incorporated as appropriate. As we agreed with your office, unless you publicly announce this report’s contents earlier, we plan no further distribution until 30 days after its date. We will then send copies to other interested congressional committees and members. We will also make copies available to others on request. Please call me at (202) 512-7118 or John Dicken, assistant director, at (202) 512-7043 if you have any questions. Other major contributors are listed in appendix II. Appendix I: Scope and Methodology To determine the extent to which states require individual market carriers to guarantee access to coverage, we reviewed summary data for all states published by the Commonwealth Fund in collaboration with Mathematica Policy Research, Inc. in August 2001, and the Institute for Health Care Research and Policy, Georgetown University, updated as of June 14, 2000. Although we did not independently verify these data, we did follow up with state insurance regulators in selected instances when we had reason to believe that the summary data were no longer current. We also contacted insurance regulators in 6 states—California, Connecticut, Georgia, Illinois, Mississippi, and Montana—to discuss the implications of state insurance regulation. We selected these states to represent a cross section of states in which carriers are not required to guarantee access to coverage in the individual market.
To identify health insurance carrier practices related to coverage and premium decisions, we contacted 25 individual market carriers nationally to request their participation in our study. We also asked the BCBSA and the HIAA to contact some of their members to request participation. Seven carriers that offer HMO, preferred provider organization, or traditional fee-for-service plans across the country agreed to participate. We interviewed or obtained data from these carriers regarding their health plans and underwriting practices. We cannot generalize the practices of these seven carriers to all individual market carriers; however, the seven carriers collectively insure more than 10 percent of all individual market enrollees and sell coverage in most of the states in which carriers are permitted to medically underwrite. We compared the underwriting practices of the seven carriers for selected mental disorders and other chronic health conditions. We selected six mental disorders, each of which affects over 1 million Americans. We selected the other chronic health conditions based on certain clinical characteristics they share in common with mental disorders. Among other criteria, the health conditions selected are generally of a chronic and manageable nature, may require prescription drug therapy, may require care throughout the patient’s life, and may be of intermittent severity. We asked the seven carriers to consider that each of the disorders was of moderate severity and that the applicant was taking prescribed drugs or received other medical treatment for the disorder within the past year. We discussed our approach of comparing mental disorders and other chronic health conditions with mental health experts and an insurer risk management consultant.
To ensure that individuals with the mental disorders and chronic health conditions we compared were likely to incur similar health care costs, we analyzed 1997 cost data from the Medical Expenditure Panel Survey, a national survey of health care cost and utilization administered by the Department of Health and Human Services. We calculated the total average annual health care costs incurred by individuals with the selected disorders. These cost data do not provide definitive estimates of the cost of treating specific disorders, however, because the data set aggregated costs for several clinically similar disorders. For example, treatment costs for obsessive-compulsive disorders are aggregated with costs for other related disorders, including hypochondria, panic disorder, and phobic disorders. We also used the data to examine the extent of variation in total health care costs incurred by individuals with the selected mental and other disorders and the extent to which individuals with the selected disorders are likely to have additional health problems. Finally, to examine additional health insurance coverage options available to high-risk individuals, we summarized state high-risk pool program information published in the literature and reviewed alternative coverage options during our interviews with insurance regulators in the 6 states. We also interviewed health insurance agents in the 6 states to discuss their experiences finding coverage for clients with mental disorders. Appendix II: GAO Contact and Staff Acknowledgments Randy DiRosa and Betty Kirksey made key contributions to this report. In addition, Kelli Jones and Kara Sokol provided statistical support. Related GAO Products Mental Health Parity Act: Despite New Federal Standards, Mental Health Benefits Remain Limited. GAO/HEHS-00-95. Washington, D.C.: May 10, 2000. Private Health Insurance: Progress and Challenges in Implementing 1996 Federal Standards. GAO/HEHS-99-100.
Washington, D.C.: May 12, 1999. Health Insurance Standards: New Federal Law Creates Challenges for Consumers, Insurers, Regulators. GAO/HEHS-98-67. Washington, D.C.: February 25, 1998. Private Health Insurance: Millions Relying on Individual Market Face Cost and Coverage Trade-Offs. GAO/HEHS-97-8. Washington, D.C.: November 25, 1996.
Five percent of adults suffer from serious mental disorders. Although health insurance carriers in a few states guarantee coverage for mental health treatment, in most states individuals with mental disorders face restrictions in purchasing private health insurance for themselves and their families. Eleven states require carriers to accept all applicants regardless of health status, but coverage options vary. Eight of these 11 states require all carriers to guarantee access to coverage sold in this market. In three states, laws apply only to some carriers, such as Blue Cross and Blue Shield, or certain periods of the year. Carriers in nine of the 11 states are also required to limit the extent to which premium rates vary between healthy and unhealthy individuals. In states without guaranteed coverage in the individual market, the seven carriers GAO reviewed would likely deny coverage more frequently for applicants with mental disorders than for applicants with other chronic health conditions. Specifically, for six mental disorders of generally moderate severity, carriers said that they would likely decline applicants 52 percent of the time. State-sponsored high-risk pools are the primary coverage option available to rejected applicants in most states. In 27 of the 34 states where carriers may deny coverage to applicants with mental disorders or other health conditions, high-risk pools offer coverage to applicants denied individual market coverage. The pools are subsidized--generally through assessments on carriers or state tax revenues--and premium rates are generally capped at 125 to 200 percent of standard rates for healthy individuals. Health benefits available under the pools are generally comparable to those available in the individual market, including similar restrictions on mental health benefits; however, benefits for mental disorders or other health conditions are not permanently excluded as they may be in the individual insurance market.
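The high-risk pool premium caps described above reduce to simple arithmetic: a pool premium is the standard individual-market rate scaled by a capped multiplier. The short sketch below illustrates that calculation; the $300 standard monthly premium and the `pool_premium` helper are hypothetical illustrations, while the 1.25 to 2.00 multiplier range reflects the 125 to 200 percent caps cited in the report.

```python
# Illustrative arithmetic for state high-risk pool premium caps.
# The standard premium value is a made-up example; the multiplier bounds
# reflect the report's "125 to 200 percent of standard rates."

def pool_premium(standard_monthly: float, multiplier: float) -> float:
    """Return the high-risk pool premium implied by a capped multiplier."""
    if not 1.25 <= multiplier <= 2.00:
        raise ValueError("pool premiums are capped at 125-200% of standard")
    return round(standard_monthly * multiplier, 2)

standard = 300.00  # hypothetical standard individual-market monthly premium
print(pool_premium(standard, 1.25))  # 375.0 (low end of the cap range)
print(pool_premium(standard, 2.00))  # 600.0 (high end of the cap range)
```

Under these hypothetical figures, a rejected applicant would pay between $375 and $600 per month for pool coverage that a healthy applicant could obtain for $300 in the individual market, which is why the report treats affordability as an ongoing issue even where pools exist.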
Background DOD plays a support role in CBRNE consequence management, including providing those capabilities needed to save lives, alleviate hardship or suffering, and minimize property damage caused by the incident. DOD generally provides defense support of civil authorities only when (1) state, local, and other federal resources are overwhelmed or unique military capabilities are required; (2) assistance is requested by the primary federal agency; or (3) U.S. Northern Command (NORTHCOM) is directed to do so by the President or the Secretary of Defense. DOD has designated NORTHCOM to lead the federal military portion of such a support operation in direct support of another federal agency—most often the Federal Emergency Management Agency (FEMA). DOD would be the lead federal agency for CBRNE consequence management or any other civil support mission only if so designated by the President. To be effective, DOD’s efforts must be coordinated with a wide range of federal departments and agencies—including FEMA and the Departments of Health and Human Services and Justice—in order to support 50 states, the District of Columbia, six territories, and hundreds of city and county governments. The National Response Framework establishes the principles that guide all response partners in preparing for and providing a unified national response to disasters. Under the Framework, disaster response is tiered; local governments and agencies typically respond immediately after an incident. When additional resources are required, states may provide assistance with their own resources or may request assistance from other states through interstate mutual aid agreements or the Emergency Management Assistance Compact. Localities and states usually respond within the first several hours of a major incident. The federal government provides assistance to states if they require additional capabilities and request assistance.
In the event of a catastrophic incident, such as one involving CBRNE, the framework also calls for federal response partners to anticipate the need for their capabilities before their assistance is requested. The framework lists 15 emergency support functions and designates federal lead agencies in areas such as search and rescue, public health and medical services, and transportation. DOD is a supporting agency for all 15 emergency support functions but is the primary agency only for search and rescue and public works and engineering. Additional tools to guide response efforts are provided by the National Preparedness Guidelines, including the National Planning Scenarios, Target Capability and Universal Target Lists, and national priorities. DOD has created significant capabilities that could be used to augment a federal CBRNE response. It also contributes to the organization, training, and equipping of several other military units focused on consequence management. These include the 22-person National Guard Weapons of Mass Destruction Civil Support Teams that are located in each state and territory; the larger National Guard CBRNE Enhanced Response Force Packages of about 200 soldiers each that are located in 17 states for more expansive response; and DOD’s CBRNE Consequence Management Response Forces (CCMRF). The Civil Support Teams and CBRNE Enhanced Response Force Packages are intended to be part of the state response to an incident and therefore remain under the control of the respective governors, unless they are mobilized into federal service. The CCMRF is intended to be a roughly brigade-sized force (approximately 4,500 troops) that provides federal military assistance when a CBRNE incident exceeds local and state capabilities—including the Civil Support Teams and CBRNE Enhanced Response Force Packages. The CCMRFs are not whole units by themselves.
They are a collection of geographically separated DOD capabilities and units across the military services and consist of such existing specialized capabilities as the U.S. Marine Corps’ Chemical Biological Incident Response Force as well as general capabilities, such as transportation units. Although the CCMRF is intended to be about 4,500 personnel in size, the size of the force that would deploy in support of an actual incident could be modified based on the size of the incident. DOD ultimately plans to have three fully functional CCMRFs. DOD would, if necessary, draw on additional general military forces over and above the CCMRF to provide assistance in the event of one or more major CBRNE incidents. DOD CBRNE Consequence Management Plans and Integration with Other Federal Plans DOD has operational plans for CBRNE consequence management. However, DOD has not integrated its plans with other federal government plans, because the concept and strategic plans associated with the Integrated Planning System mandated by Presidential directive in December 2007 have not been completed. DOD Has Developed Plans for CBRNE Consequence Management Unlike most federal agencies, DOD has had CBRNE consequence management operational plans for over 10 years. DOD, NORTHCOM, and its components have prepared individual plans that address CBRNE consequence management following DOD’s well-established joint operation planning process. This process establishes objectives, assesses threats, identifies capabilities needed to achieve the objectives in a given environment, and ensures that capabilities (and the military forces to deliver those capabilities) are distributed to ensure mission success. Joint operation planning also includes assessing and monitoring the readiness of those units providing the capabilities for the missions they are assigned. DOD and NORTHCOM routinely review and update their plans as part of DOD’s joint planning system. 
For example, the most recent NORTHCOM CBRNE consequence management plan was completed in October 2008. DOD and NORTHCOM have also developed such planning documents as execute orders that are key to linking immediate action to those plans, as well as scenario-based playbooks to guide the planning, operations, and command and control of military forces for CBRNE efforts. Governmentwide Integrated Planning System Is under Development but Not Yet Complete The Department of Homeland Security (DHS) is leading a governmentwide effort to develop an Integrated Planning System that would link the plans of all federal agencies involved in incident response, including DOD’s; however, this effort is not yet complete. While much in the way of federal guidance has been developed, to be most effective, policy documents must be operationalized by further detailing roles and responsibilities for each entity that may be involved in responding to high-risk or catastrophic incidents. In December 2007, Homeland Security Presidential Directive 8, Annex 1, mandated that the Secretary of Homeland Security, in coordination with the heads of other federal agencies with roles in homeland security, develop an Integrated Planning System to provide common processes for all of the entities developing response plans. The directive also called for the development of strategic plans, concepts of operations plans, and operations plans that would be integrated at the federal, regional, state, and local levels. DHS has grouped the 15 national planning scenarios on which preparedness plans are to be based into 8 scenario sets, of which 5 are CBRNE-related. Each of the scenarios, listed in table 1, includes a description, assumptions, and likely impacts, so that entities at all levels can use them to guide planning. The directive required that the Integrated Planning System be submitted to the President for approval within 2 months of the directive’s issuance in December 2007. 
As we have reported, the Integrated Planning System was approved in January 2009 by former President Bush, but is currently under review by the new administration, and no time frame for its publication has been announced. The approval of the CBRNE plans required under the directive (see table 2 below) would be a step toward unifying and integrating the nation’s planning efforts. For example, for each National Planning Scenario, a strategic guidance statement is intended to establish the nation’s strategic priorities and national objectives and to describe an envisioned end state. Strategic guidance statements will have corresponding strategic plans, which are intended to define roles, authorities, responsibilities, and mission-essential tasks. Under each strategic plan, a concept of operations plan will be developed, and federal agencies are further required to develop operations plans to execute their roles and responsibilities under the concept of operations plan. To date, strategic guidance statements have been approved for all 5 CBRNE-related scenario sets. Four of the 5 required strategic plans have also been completed. The remaining strategic plan (chemical attack) was begun in June 2009 upon the approval of the strategic guidance statement for that scenario. One of the 5 required overall federal concept plans—that for terrorist use of explosives attack—has been completed. As we have previously reported, apart from the sequential timelines required in HSPD 8 Annex 1, FEMA and DHS have no schedule or project plan for completing the guidance and plans. Table 2 shows the status of federal CBRNE strategy and plans called for under HSPD 8 Annex 1. DOD’s plans and those of other federal and state entities cannot be fully integrated until the supporting strategic and concept plans are completed.
Current Capability Assessments at Local, State, and Federal Levels May Provide Insufficient Data for DOD to Shape Its Response to CBRNE Incidents A number of efforts to develop capability assessments are under way at local, state, and federal levels, but these efforts may not yet be sufficiently mature to provide DOD with complete data that it can use to shape its response plans for CBRNE-related incidents. For example, FEMA has begun to catalog state capabilities in its preparedness reports and is working on a capability gap analysis. However, DHS faces challenges in developing its approach to assessing capabilities and preparedness. As noted in DHS’s January 2009 Federal Preparedness Report, several key components of the national preparedness system are still works in progress, and not all data required for the federal government to assess its preparedness are available. We have previously reported that state capability data developed by individual states cannot be used to determine capability gaps across states, because the states do not use common metrics to assess capabilities and do not always have the data available that they need to complete their reports. In addition, according to DOD and FEMA, even to the extent that these data are available, states may limit their sharing of sensitive information on capability gaps with DOD entities responsible for developing DOD’s plans and related capabilities. DOD’s Planned Response to CBRNE Incidents DOD has had plans to provide CBRNE consequence management support to civil authorities since before 9/11 and in the last few years has set higher goals in the expectation of being able to provide expanded capabilities through its 3 CCMRFs. 
However, its ability to respond effectively may be compromised because (1) its planned response times may not meet the requirements of a particular incident, (2) it may lack sufficient capacity in some key capabilities, and (3) it faces challenges in adhering to its strategy for sourcing the CCMRFs with available units. DOD’s Planned Response Times May Be Too Long In 2005, DOD established a standard for itself that called for the ability to respond to multiple, simultaneous catastrophic incidents, and it initiated efforts to create 3 CCMRFs. For the first 3 years, DOD did not regularly assign units to the CCMRF mission, and this decreased DOD’s ability to actually field any of the CCMRFs within the timelines it had established. In October 2008 DOD sourced the first CCMRF, primarily with active force units. A second CCMRF, composed primarily of reserve units, will assume the mission in October 2009 and a third in October 2010. In the absence of national guidance suggesting what level of response capability DOD should have available within a specified time frame, DOD’s plans use a phased deployment to allow the CCMRF to be able to provide consequence management support to civilian authorities within 48-96 hours of being notified of a CBRNE incident. The earlier phases of the deployment will provide the lifesaving capabilities. However, multiple DOD estimates for some of the more catastrophic scenarios, such as a nuclear detonation, have identified significant gaps between the time certain lifesaving and other capabilities would be needed and DOD’s planned response times. For example, victims of a nuclear attack would require decontamination, which medical experts have established must be provided as soon as possible after exposure. If DOD adheres to its planned response times in such a scenario, the capabilities of early responders such as local police and fire departments would likely be overwhelmed before DOD arrived at the incident site.
NORTHCOM’s assessment and other DOD estimates demonstrated that, for a number of capabilities, DOD’s response would not be timely. Table 3 shows one estimate of the potential shortfall in decontamination capabilities that could result. The NORTHCOM capability-based assessment similarly suggests that without a national, risk-based determination of DOD’s share of the federal capability requirements, DOD will be unable to determine whether its planned response times should be adjusted. DOD’s Planned Force May Lack Sufficient Capacity in Some Key Capabilities Needed for Catastrophic Incidents In addition to timeliness issues, DOD’s planned force has limited quantities of some of the needed lifesaving capabilities, such as medical and decontamination services. For example, some nuclear detonation scenarios project that hundreds of thousands of people could be killed, injured, displaced, contaminated, or in need of medical care. The CCMRF would be able to provide only a small portion of the necessary capability. Although a CCMRF is estimated, under optimal circumstances, to be capable of decontaminating several thousand people per day, some estimates project that the gap between needed decontamination capabilities and what local, state, and other entities could provide would amount to tens of thousands of people. DOD recognizes that it may need additional units to augment the CCMRF, and it has made some tentative estimates. However, DOD has not developed contingency plans designating specific units to augment the CCMRF. Unless these units are identified in advance and trained for the mission, they may be unable to deploy rapidly. Without clear plans aligning CCMRF objectives with the projected need for response capabilities and clearly delineating national expectations for timely response, neither DOD nor other entities involved in incident response can be certain that the CCMRFs will be able to respond adequately to mitigate the consequences of a catastrophic CBRNE incident.
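The decontamination gap described above is, at bottom, a throughput calculation: the number of people needing decontamination per day minus the combined daily capacity of the responders on hand. The sketch below illustrates that arithmetic with made-up placeholder figures; none of the numbers are DOD, NORTHCOM, or table 3 estimates, and the `daily_shortfall` helper is purely illustrative.

```python
# Hypothetical back-of-the-envelope estimate of a decontamination shortfall.
# All numbers below are illustrative placeholders, not DOD or GAO figures.

def daily_shortfall(people_needing_decon: int,
                    local_capacity_per_day: int,
                    ccmrf_capacity_per_day: int) -> int:
    """People per day who could not be decontaminated by combined capacity."""
    total_capacity = local_capacity_per_day + ccmrf_capacity_per_day
    return max(0, people_needing_decon - total_capacity)

# Example: suppose 50,000 people need decontamination per day, local and
# state responders can handle 10,000, and a CCMRF under optimal conditions
# adds 4,000 ("several thousand people per day" in the report's phrasing).
gap = daily_shortfall(50_000, 10_000, 4_000)
print(gap)  # 36000
```

Even with generous placeholder capacities, the shortfall runs to tens of thousands of people per day, which is consistent with the report's point that a CCMRF could supply only a small portion of the needed capability.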
DOD Faces Challenges in Adhering to Its Strategy for Sourcing the CCMRFs with Available Units In sourcing its 3 CCMRFs, DOD has encountered challenges in implementing an approach that could enhance unit availability and training and readiness oversight for forces that are not assigned to NORTHCOM. DOD originally intended the CCMRF to be composed entirely of federal active military forces, but the two follow-on CCMRFs will be sourced with large numbers of National Guard and Army Reserve units. The demands of ongoing overseas operations have led DOD to draw more and more heavily on Guard and Reserve forces to fulfill civil support functions. Because National Guard units have responsibilities in their respective states, competition for resources may arise between DOD and the states. For example, governors may need the same capabilities within their states, or to support mutual assistance agreements with other states, that would be needed to support a CCMRF, yet there is no clear understanding between the governors and DOD to ensure that these units will be available if they are needed for a federal mission. Moreover, elements from a single unit can be spread over many states, further complicating the task of coordinating between DOD and each of the states. For example, one Army National Guard aviation company belonging to the CCMRF has elements in Arkansas, Florida, and Alabama. Three different states would be required to make these elements available to form the company. The potential rapid deployment mission of the CCMRF makes it imperative that specific agreements be reached. However, the agreements that have been reached to date are general in nature and do not specify how states are to ensure that Guard units will be available for a CCMRF deployment. Similar issues arise with the Army Reserve.
The training demands of the CCMRF mission have caused DOD to authorize additional training days, but according to Army Reserve officials, reservists cannot be compelled to attend training events beyond their annual training requirement. They stated that, as a result, units must rely on the voluntary participation of their personnel for training beyond the requirement, which reduces their assurance that these personnel will be available for other necessary CCMRF training. For example, one reserve company was unable to fulfill all aspects of its mission requirements because of low participation at a training event. Unit officials stated that some of the unit’s members had school or work obligations that conflicted with this training. Moreover, reserve unit officials stated that, unlike active unit officials, they cannot restrict the personal travel of unit members to ensure that they will be available if they are needed to support an unexpected federal CBRNE incident response. These challenges to sourcing the CCMRF increase the risk that DOD’s ability to effectively respond to one or more major domestic CBRNE incidents will be compromised. That risk can be mitigated by plans that integrate the active and reserve component portions of the CCMRF and agreements between DOD and the states on the availability of National Guard units and the duty status under which they would respond to a major incident requiring federal forces. DOD’s decision to change its approach to how NORTHCOM will routinely interact with units designated for the CCMRF will present additional challenges. In 2008, DOD’s sourcing approach was to assign the first CCMRF (primarily active forces) to NORTHCOM and allocate the remaining two CCMRFs (mix of Guard and Army Reserve) to NORTHCOM. Beginning in October 2009, DOD will allocate the units from all three CCMRFs to NORTHCOM, rather than assigning them to the NORTHCOM commander outright. 
As a result, despite the fact that NORTHCOM’s commander is responsible for commanding the federal military domestic CBRNE response in the continental United States, NORTHCOM will have no CBRNE forces under its direct control. There are advantages to assigning forces directly to NORTHCOM. For example, the command would have direct authority over the units’ day-to-day activities, including training and exercise schedules, and would be better able to monitor readiness. Additionally, there would be fewer administrative steps required for the NORTHCOM commander to activate and deploy the CCMRF in the event of an incident. This would be crucial for deploying the critical initial response elements of the overall force. Under allocation, DOD’s current approach would provide NORTHCOM with authority over units while they participate in scheduled NORTHCOM training events, but NORTHCOM would have to coordinate with multiple commands to obtain participation from these units. Current guidance states that other commands should make their units available for scheduled NORTHCOM exercises “to the greatest extent possible.” However, NORTHCOM cannot always be assured that units will be available for these exercises. In addition, NORTHCOM remains uncertain about the extent to which it will have oversight of CCMRF units’ day-to-day training activities and be able to confirm that these units are ready to perform their mission even when they are under the authority of another command. DOD Actions on CCMRF Readiness and Training and the Impact of Current Deployments DOD has taken a number of actions in the past year to improve the readiness of its CCMRF units.
However, our ongoing work shows that the CCMRF may be limited in its ability to successfully conduct consequence management operations because (1) it does not conduct realistic full force field training to confirm units’ readiness to assume the mission or to deploy rapidly, and (2) conflicting priorities between the CCMRF mission and overseas deployments impact some units’ mission preparation and unit cohesion. DOD Has Taken Actions to Improve CCMRF Readiness The initial assignment of the CCMRF to NORTHCOM in October 2008 and the increased priority DOD has placed on the CBRNE mission have resulted in a number of improvements in unit preparation for the first fielded CCMRF. The Army, in coordination with NORTHCOM and its subordinate commands, has established guidance for both individual and collective training—including joint mission essential task lists—for units designated for the CCMRF. Therefore, for the first time, identified units are conducting individual and collective training focused on the CCMRF mission. For example, key leaders such as brigade task force headquarters personnel and battalion commanders are required to participate in a number of command and control training events to provide them with an understanding of how to organize and conduct operations in a complex interagency environment under catastrophic disaster conditions. Moreover, the increased priority given to the mission in the spring of 2008 has led to units receiving personnel and equipment before they assume the mission and ahead of many other units that do not participate in the CBRNE mission. 
Extent of Realistic Field Training Impacts CCMRF’s Ability to Perform Effectively Although units were certified as ready prior to assuming the mission in October 2008, it is unclear whether the CCMRF can effectively perform CBRNE consequence management operations throughout the 1-year mission period to which it is assigned, because the readiness of the entire CCMRF is not confirmed through a realistic field training exercise before the force assumes the mission, nor have its rapid deployment capabilities been fully assessed. Before designated units assume the CBRNE mission, they must be certified by the military services to be trained to perform that mission. However, there is no requirement to provide these units with a full force tactical field training exercise. While units conduct this type of training prior to an overseas deployment, and NORTHCOM and Joint Force Land Component Command (JFLCC) training officials have discussed the desirability of such an exercise, the first CCMRF units have not received this kind of training. Although some CCMRF units have participated in joint field exercises, critical units often did not participate. In addition, the exercises were conducted several months after units had been certified as trained to perform the mission. Units also must demonstrate that they will be able to meet the required response times once they assume the mission. A key aspect of the CCMRF mission is to be able to rapidly deploy each of the three force packages that make up a CCMRF within a specified response time. One of the primary challenges to a timely response is that CCMRF packages may have to deploy rapidly from their home stations. Deployment readiness exercises are important because they ascertain how quickly staff can be notified and assembled, equipment prepared and loaded, and both staff and equipment moved to the designated point of departure.
DOD has provided general guidance that supported commands, such as NORTHCOM, should verify the ability of CCMRF units to activate and deploy. However, DOD has not yet conducted deployment exercises for the entire CCMRF, and it is not clear if its plans for future CCMRFs will include such exercises. In the absence of such exercises, NORTHCOM and DOD will continue to be unable to verify the ability of CCMRF units to deploy. Units’ Preparation for the CCMRF Mission and Efforts to Achieve Unit Cohesion Are Impacted by Other Missions The demands that overseas missions are placing on the Army may also put the effectiveness of the CCMRF mission at risk. While DOD has identified CCMRF as a high priority mission, competing demands associated with follow-on missions may distract from a unit’s focus on the domestic mission. For example, Army units are frequently given the CCMRF mission when they return from an overseas deployment. Because these units are at the beginning of the “reset” phase of the Army Force Generation (ARFORGEN) cycle, they often lack personnel and equipment. Although the Army attempts to accelerate the fill of personnel and equipment to these units, some units may not have received their personnel and equipment in sufficient time to allow them to meet all of the requirements of the CBRNE mission before they assume it. These training and force rotation issues have prevented DOD from providing the kind of stability to the force that would allow units to build cohesiveness. While DOD’s goal has been to assign units for at least 12 months and to set standard start and end dates for each rotation, several critical units have been unable to complete their 1-year CCMRF rotations for fiscal year 2009. As a result, the replacement units that have finished out these rotations have missed important training.
For example, the headquarters units for the aviation and medical task forces rotated out of the mission after only 4 and 6 months, respectively, because of competing priorities. Because key leaders from units of the entire force attend a mission rehearsal exercise prior to mission assumption, the replacement of these units after only a few months negated much of the value that was gained from these three task forces working together and precluded the replacement task force leaders from having the same opportunity. CCMRF Requirements Development, Funding, and Oversight DOD is making progress in identifying and providing funding and equipment to meet CCMRF mission requirements; however, its efforts to identify total program requirements have not been completed, and its approach to providing program funding has been fragmented, because funding responsibilities for CCMRF-related costs are dispersed throughout DOD and are not subject to central oversight. CCMRF Mission Requirements Have Not Been Fully Developed The units initially designated for the CCMRF mission did not have fully developed funding and equipment requirements. In addition, the recent NORTHCOM Homeland Defense and Civil Support Capabilities-Based Assessment highlighted a number of systemic capability gaps that need to be addressed and may generate additional funding requirements. Moreover, other important requirements for this mission have not been identified and funded. The Joint Forces Land Component Commander (U.S. Army North—ARNORTH) and the Joint Task Force Civil Support are responsible for developing and approving service-specific equipment unique to the CCMRF’s Joint Mission Essential Tasks. However, to date, mission essential equipment requirements have not been fully developed. 
While some equipment requirement lists have been developed and are being reviewed by NORTHCOM, equipping officials said that lists have not been developed for non-standard equipment that units may need in order to support civil authorities in a CBRNE environment. As a result, some fiscal year 2008 units have determined requirements based on their own independent mission analyses. Unit officials stated that filling some of the needs they identified—such as the need for non-standard communications equipment that is compatible with civilian equipment—was difficult because the units lacked a documented requirement for their planned acquisition. In addition, the review process did not always include the command organizations that are responsible for the mission. Thus, decisions on what to buy and in what quantity were not consistently vetted to ensure standardization in equipping various units. ARNORTH officials stated that they were in the process of developing mission essential equipment lists and hope to have them completed in time for the next rotation, which begins in October 2009. Extent of Dedicated Funds for Some CCMRF Training Impacts Mission In the spring of 2008, sourcing priority for the CCMRF mission increased substantially within the department, and funding was provided for specific aspects of the mission. For example, funding was provided for NORTHCOM’s training program—which totals more than $21 million annually—for three major exercises associated with the CCMRFs for fiscal year 2010 and beyond, and the Army Reserve has planned funds of more than $37 million for fiscal years 2009 and 2010 to support additional full- time personnel and training days that have been authorized to support the CCMRF mission. In addition, while the military services have not planned funds for equipment specifically for the CCMRF mission, equipment has been purchased with funds left over from past Global War on Terrorism deployments. 
In other cases, purchase requests for certain equipment were denied by administrative parent commands because, unit officials believed, the equipment was considered non-critical by reviewing officials. Moreover, units must fund their CCMRF training activities from their operations and maintenance accounts, which were developed and approved months before units knew they would be assigned to the CCMRF. According to unit officials, because they do not have dedicated funds for CCMRF in their budgets, they sometimes must take money from other sources to meet what they believe are their highest priorities for the CCMRF mission. Also according to these officials, while the lack of planned funds for the CCMRF has been mitigated to some extent by the mission’s high priority level, they have found it necessary to curtail or cancel some desirable training because funding was unavailable. Army officials told us that if funding shortfalls develop because units lack sufficient funds to conduct both CCMRF and follow-on mission training, units can request additional funds from the Army. However, unless units assess their total funding requirement for the CCMRF and their other designated mission and receive funding based on both missions, CCMRF units may be at risk of not having enough funding to conduct all of their CCMRF training. This, in turn, puts units at risk of not being fully prepared if they are needed to respond to an incident. CCMRF units may face more acute funding issues as the United States begins drawing down in Iraq and as military supplemental funding, such as funding for Global War on Terrorism, is reduced. Because DOD has assigned funding responsibilities across the department and because much of the funding for the CCMRF is coming from existing operations and maintenance accounts, DOD lacks visibility across the department over the total funding requirements for this mission. 
Without an overarching approach to developing requirements and providing funding, and a centralized focal point to ensure that all requirements have been identified and fully funded, DOD’s ability to carry out this high-priority homeland security mission in an efficient and effective manner is at risk. Agency Comments We provided the Departments of Defense and of Homeland Security an extensive briefing on our preliminary findings. We also provided them a draft of this statement. Neither DOD nor DHS had formal comments, but both provided technical comments, which we incorporated into the statement, as appropriate. We plan to provide this subcommittee and our other congressional requesters with our final report on DOD’s CBRNE consequence management efforts in September 2009. We expect to make a number of recommendations for DOD action at that time. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or other Members of the Subcommittee might have. Contacts and Acknowledgements For questions about this statement, please contact me at (202) 512-5431 or daogostinod@gao.gov. Individuals who made key contributions to this testimony include Joseph Kirschbaum, Assistant Director; Rodell Anderson; Joanne Landesman; Robert Poetta; and Jason Porter. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
DOD plays a support role in managing Chemical, Biological, Radiological, Nuclear, and High-Yield Explosives (CBRNE) incidents, including providing capabilities needed to save lives, alleviate hardship or suffering, and minimize property damage. This testimony presents GAO's preliminary observations on DOD's role in CBRNE consequence management efforts and addresses the extent to which (1) DOD's plans and capabilities are integrated with other federal government plans, (2) DOD has planned for and structured its force to provide CBRNE consequence management assistance, (3) DOD's CBRNE Consequence Management Response Forces (CCMRF) are prepared to perform their mission, and (4) DOD has funding plans for the CCMRF that are linked to requirements for specialized CBRNE capabilities. GAO reviewed DOD's plans for CBRNE consequence management and documents from the Department of Homeland Security (DHS) and the Federal Emergency Management Agency. GAO also met with officials from the Office of the Undersecretary of Defense for Homeland Defense, U.S. Northern Command, U.S. Army Forces Command, U.S. Army North, the National Guard Bureau, and some CCMRF units. DOD has its own CBRNE consequence management plans but has not integrated them with other federal government plans because all elements of the Integrated Planning System mandated by Presidential directive in December 2007 have not been completed. The system is to develop and link planning documents at the federal, state, and local levels. While the system's framework is established, the CBRNE concept and strategic plans that provide further guidance are incomplete. DOD has had operational plans in place and revises these plans regularly. However, until the Integrated Planning System and its associated plans are complete, DOD's plans and those of other federal and state entities will not be integrated, and it will remain unclear whether DOD's CCMRF will address potential gaps in capabilities.
With a goal of responding to multiple, near-simultaneous, catastrophic CBRNE incidents, DOD has plans to provide the needed capabilities, but its planned response times may not meet incident requirements, it may lack sufficient capacity in some capabilities, and it faces challenges to its strategy for sourcing all three CCMRFs with available units. Without assigned units and plans that integrate the active and reserve portions of the CCMRF, and agreements between DOD and the states on the availability of National Guard units and the duty status in which they would respond to an incident requiring federal forces, DOD's ability to train and deploy forces in a timely manner to assist civil authorities to respond to multiple CBRNE incidents is at risk. DOD has taken a number of actions in the past year to improve the readiness of units assigned to the CCMRF, increasing both individual and collective training focused on the mission and identifying the mission as high priority. However, the CCMRF has not conducted realistic full force field training to confirm units' readiness to assume the mission or to deploy rapidly. Competing demands of overseas missions may distract from a unit's focus on the domestic mission, and some CCMRF units rotate more frequently than DOD's stated goals allow. These training and force rotation problems have prevented DOD from providing the kind of stability to the force that would allow units to build cohesiveness. DOD is making progress in identifying and providing funding and equipment to meet CCMRF mission requirements; however, its efforts to identify total program requirements have not been completed, and funding responsibilities have been assigned across the department and are not subject to central oversight. When the CCMRF mission priority increased in the spring of 2008, more funding was provided. However, units did not have dedicated funding and thus purchased equipment with existing funding that is also used for other missions.
DOD lacks visibility over the mission's total funding requirements. Without an overarching approach to developing requirements and providing funding and a centralized focal point to ensure that all requirements have been identified and funded, DOD's ability to ensure that its forces are prepared to carry out this high priority mission remains challenged.
Background In 1990, legislation took effect that required SNFs and other nursing facilities to assess and provide for their residents’ needs for therapy services. Subsequently, the number of rehabilitation agencies providing physical, occupational, and speech therapy increased dramatically—as did Medicare spending on therapy services delivered in nursing homes. Most nursing home residents receive their therapy services from outpatient rehabilitation therapy companies (OPT) that send their employees to the nursing home. Once approved to participate in Medicare, either a SNF or an OPT can bill Medicare. Both are regarded by Medicare as institutional providers and, as billing providers, are reimbursed on a cost basis. SNFs and OPTs must file annual reports detailing the actual costs of services that were delivered to Medicare beneficiaries and billed to the program throughout the preceding year. HCFA claims-processing contractors reconcile these actual reported costs with the interim payments made to providers throughout the year, either by making additional payments to providers or by collecting overpayments from them. In the reconciliation process, the contractor determines whether the claimed costs are “reasonable.” Medicare has a number of principles to ensure that only reasonable costs are used in determining its final payment, such as a prudent purchaser principle. Under this principle, Medicare compares claimed costs with the “going rate” for the item or service. For example, if a SNF contracts with an OPT at $500 for 1 hour of therapy services and the going rate is $100 per hour, only $100 would be approved for inclusion in the cost report during the year-end settlement; the $400 would be disallowed. Two problems occur in applying the prudent purchaser and other principles. First, to ensure that these rules are followed, Medicare’s claims-processing contractors must audit cost reports. 
But because auditing is resource intensive and funds for auditing are limited, therapy costs are rarely audited. Second, establishing a “going rate” may involve conducting a survey of current practice and pricing among comparable providers in the same geographic area, which may also be resource intensive. For these reasons, if the SNF in the example above included the full $500 per hour contract cost in its cost report, there is little assurance that this amount would be adjusted downward. Under certain billing conditions, salary guidelines have helped Medicare limit the amount it will pay for certain services. For example, reimbursements for OPT-provided physical therapy services that are billed by SNFs and subject to salary equivalency guidelines (and, hence, are referred to as capped) increased 646 percent between 1989 and 1995. Speech therapy and occupational therapy reimbursements, for which similar guidelines have not been imposed (referred to as uncapped), have grown at about two and three times this rate, respectively, as shown in figure 1. The absence of information on the amount of time spent providing a service also makes it difficult to determine whether a claim is reasonable prior to payment. Under current HCFA regulations, OPTs and SNFs are not required to specify on their claims how much therapy time they are billing for or what specific service was provided. Instead, OPTs can choose one of at least six methods to bill Medicare. Although each of these methods requires costs to be associated with a “unit,” claims generally do not specify what the unit denotes. In these cases, the Medicare claims-processing contractor does not know the amount of time involved in delivering the therapy services or what the services included, making it all the more difficult to detect inflated charges and identify unreasonable costs. 
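The prudent purchaser settlement described above is simple arithmetic. The following Python sketch illustrates it using the report's own $500-versus-$100 example; the function name and structure are hypothetical, not HCFA's actual settlement procedure.

```python
# Illustrative sketch of the prudent purchaser principle described in
# the report: claimed hourly costs are capped at the "going rate," and
# any excess is disallowed at year-end cost report settlement.
# (Hypothetical function -- not HCFA's actual settlement algorithm.)

def settle_claimed_cost(claimed_rate, going_rate, hours):
    """Return (approved, disallowed) dollars for one contracted service."""
    approved_rate = min(claimed_rate, going_rate)
    approved = approved_rate * hours
    disallowed = (claimed_rate - approved_rate) * hours
    return approved, disallowed

# The report's example: a SNF contracts at $500 per hour where the going
# rate is $100 per hour -- only $100 is approved; $400 is disallowed.
approved, disallowed = settle_claimed_cost(500, 100, hours=1)
print(approved, disallowed)  # 100 400
```

As the report notes, this check depends entirely on auditing the cost report; if no audit occurs, the full claimed rate may stand.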
Limited Progress Made to Control Overbilling for Occupational and Speech Therapy Services SNFs and OPTs are continuing to charge excessively high rates for therapy services, particularly occupational and speech therapy, when services are provided under arrangement. Charges for claims paid during January 1996 by one contractor showed extreme variations similar to those found in our March 1995 study. Because salary guidelines and units of service have not been established, Medicare has no easy way to determine whether any of these charges were excessive or whether they resulted in excessive payments, but that is most likely the case. HCFA Attempts to Implement Salary Guidelines Have Not Yet Been Successful Since 1990, HCFA has taken a number of interim and long-term actions to curb inappropriate charges for SNF therapy services. HCFA has focused its efforts on implementing salary equivalency guidelines for those providing occupational and speech therapy services under arrangement. Between 1990 and 1992, HCFA received numerous complaints about inappropriate charges for therapy services. In 1993, it established a task force to address the problem. As a result of this task force, HCFA sent a series of memorandums in 1993 and 1994 to its claims-processing contractors, advising them of the nature of existing problems—such as inflated therapy service charges—and providing guidance on how to focus review activities. HCFA outlined a number of steps that contractors could take to ensure that services were medically necessary and to help determine whether costs were reasonable and allowable. For example, HCFA suggested that contractors audit any provider with therapy costs exceeding $95 per hour. However, this measure was probably ineffective: “Per hour” rates cannot be determined without intensive auditing because units of service are not defined in units of time. 
Moreover, some providers that had been billing significantly less than $95 per hour reacted by simply raising their charges close to that level. In 1995, HCFA developed draft salary guidelines for those providing occupational and speech therapy, and revised guidelines for other therapies, reviewing for this purpose 22 separate data sources. The recommended levels were based on the 75th percentile of hospital salaries for therapists (as surveyed by BLS) plus a 5-percent differential to adjust for likely differences between hospital and SNF salaries. In August 1995, a clearance package containing a notice of proposed rulemaking was sent to the Secretary of the Department of Health and Human Services (HHS). Forwarding this package was a significant step, given the complex and lengthy process that HCFA must go through to lower payment rates. At this point, however, officials of the rehabilitation industry complained that the HCFA guidelines were inappropriate and out of date because they were based on 1991 hospital salaries for therapists. Industry representatives offered to commission their own survey of SNFs that employ therapists. HCFA agreed and put implementation of salary guidelines on hold pending completion of the survey. HCFA officials told us that, in their judgment, this would increase the prospects for developing fair and effective guidelines and reduce the chance of a time-consuming and expensive legal challenge. HCFA reviewed the design for this survey, which ultimately encompassed both hospitals and SNFs. The raw survey data and an industry analysis were delivered to HCFA in April 1996. However, the survey response rate was low (10 percent for hospitals and 30 percent for SNFs), which raises questions about how representative the data are. HCFA is conducting its own analysis of the results to determine if they are meaningful. According to HCFA, a proposed regulation should be published shortly after its analysis is complete. 
The final regulation is then likely to be issued sometime in 1997. HCFA Has Not Defined Units of Service Claims for therapy services are required to specify the number of service units provided; claims-processing contractors’ databases store this information in terms of “units” or “services.” These units, however, are not defined. In our March 1995 report, we recommended that HCFA define units on the basis of time, such as 15-minute intervals. In commenting on that report, HCFA said it did not agree with using time as the basis and believed it would be better to have therapy units defined by the procedures actually performed. HCFA has not yet defined billable therapy units for SNFs in terms of either the exact procedure furnished or the amount of time actually spent with the patient. Therefore, Medicare does not know how much service it is buying at the time it pays the claim. Neither HCFA nor the industry observes any standard usage for “unit” or “service,” as illustrated by the wide variation in “per unit” charges. While some interpret “unit” as 15 minutes, there is no consensus. We analyzed data from five HCFA claims-processing contractors for 1988 through 1993 and found extreme variations in “per unit” charges. For each therapy type, per unit charges for the highest quartile of providers were more than double those of the lowest quartile, and differences among individual providers were even more extreme. Results were similar for our more recent data set. For this follow-on, we also reviewed a range of hourly rates identified in contracts between OPTs and SNFs under the jurisdiction of one HCFA contractor. In those contracts that specified hourly rates for occupational therapy, the rates varied considerably, from $54 to $210 per hour.
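As described earlier, HCFA's draft salary equivalency levels were based on the 75th percentile of surveyed hospital therapist salaries plus a 5-percent differential for SNF settings. A minimal sketch of that arithmetic follows; the salary figures are hypothetical (not BLS survey data), and the nearest-rank percentile method is an assumption, since the report does not specify HCFA's exact statistical approach.

```python
# Illustrative sketch of the draft guideline construction described in
# the report: the 75th percentile of hospital therapist salaries plus a
# 5-percent differential. Sample data and percentile method are assumed.

def draft_guideline(hourly_salaries, percentile=0.75, differential=0.05):
    """Return a guideline rate: the given percentile plus a differential."""
    ordered = sorted(hourly_salaries)
    # Simple nearest-rank percentile (an assumption for illustration).
    idx = max(0, int(round(percentile * len(ordered))) - 1)
    return ordered[idx] * (1 + differential)

sample = [28.0, 30.0, 32.0, 40.0]  # hypothetical hospital hourly salaries
print(round(draft_guideline(sample), 2))  # 33.6 -- 32.0 * 1.05
```

The point of such a guideline is that it caps the salary component of reimbursable cost regardless of what a SNF actually contracted to pay.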
Prospects Poor for Rapid Resolution of the Problem As we reported in 1995, Medicare has paid substantially more than market rates for some services, which not only increases Medicare costs but also can encourage providers to supply excessive services. We also reported that HCFA has generally been slow in addressing overpricing problems. Delay in drafting and implementing regulatory changes such as price corrections and salary guidelines is inherent in the rulemaking process established by the Administrative Procedure Act as well as in the complexities of intra- and interagency coordination (see fig. 2). HCFA has projected that establishing salary equivalency guidelines for professionals providing therapy services in full compliance with formal rulemaking requirements could take 7 years or longer from the time it learned of the problem, and almost 3 years from the time it started assembling salary data. HCFA, therefore, has drafted proposed legislative language under which salary equivalency guidelines for occupational and speech therapists would be established directly by statute, negating the need for formal rulemaking. The proposal was included in the December 1995 summary of the President’s Medicare proposal. In our September 1995 report, we suggested a similar approach: that the Congress consider allowing the Secretary of HHS to set maximum prices on the basis of market surveys or, if the formal rulemaking process is preserved, allowing the Secretary to make an interim adjustment in fees while the studies and rulemaking take place. HCFA officials consider the establishment of salary guidelines the most urgent step in solving the problem of inappropriate therapy charges. They do not believe it is absolutely necessary to achieve a standard definition of a therapy unit, such as a 15-minute interval, especially since this process would take another 2 or 3 years.
Their reasoning is that salary guidelines would provide the necessary link (through the cost report settlement process) between hours of therapy time and costs claimed. Auditors could then confirm that Medicare payments did not exceed salary limits. However, as discussed earlier, such an approach is vulnerable to abuse because few cost reports are audited in sufficient detail to permit such judgments to be made and any audit may be delayed a year or more even when one is performed. Moreover, as long as units are not defined on a time basis as we suggested or on the basis of the exact procedures performed as HCFA believes would be better, Medicare’s claims reviewers—even after salary guidelines are implemented—will not be sure of the amount of services being provided. This in turn makes it more difficult to assess the medical necessity for therapy services. For a long-term solution to the problem of therapy overcharges, HCFA officials emphasized the importance of more systematic legislative approaches, such as requiring unified billing. We presented this option in a report released earlier this year: Unified (or consolidated) billing would require nursing facilities to bill Medicare for all services they are authorized to furnish to patients. OPTs rendering these services would be prohibited from directly billing Medicare; financial liability and medical responsibility would reside with the nursing facility. This would make it easier for Medicare to identify all the services furnished to residents, which in turn would facilitate controlling payments for these services. Conclusion Despite HCFA’s efforts to deal with the problem, SNFs and OPTs continue to bill Medicare high charges for occupational and speech therapy. To correct this problem without expending large amounts of administrative resources, HCFA needs to implement salary equivalency guidelines for occupational and speech therapists as soon as possible. 
Given HCFA’s experience with payments for physical therapy, such guidelines should help moderate payment growth rates. Legislation to limit reimbursement, as we suggested last September, is the most practical way to enable Medicare to avoid continuing excessive payments for overpriced services. Agency Comments and Our Evaluation In a letter dated June 19, 1996, HCFA generally agreed with our concerns about inappropriate billing and delivery of therapy services. It also agreed with the first recommendation in our March 1995 report that HCFA should develop salary guidelines to establish explicit cost limits on occupational and speech therapy services, though not necessarily with our assessment of the current status of these guidelines. HCFA officials did not entirely agree with our second recommendation, that bills for these services specify the time spent with patients. In its comments on a draft of this report, HCFA claimed that “the emphasis on excessive charges obscures the fact that Medicare is not actually paying the reported charges,” and asked that our report “specifically state that GAO has not identified any specific instances in which excessive charges are actually paid.” As discussed on pages 3 and 4 of this report, the amount Medicare actually pays is not known until long after the service is rendered and the claim processed. Although aggregate payments are eventually determinable, existing databases do not provide actual payment data for any individual claim—hence, our focus on charges. In any case, we found HCFA’s own estimate of a potential $1.4 billion in savings over 7 years as a result of implementing salary guidelines (and revising those already in place) to be persuasive evidence that excessive payments are being made. 
With regard to the uniform definition of time units, we concur with HCFA that “Ideally, we would prefer reporting that would identify the exact procedure furnished, not just on the basis of time units.” Either way of defining the unit of service for therapy would be better than leaving the unit undefined. More specific claims would make it easier to determine whether charges are appropriate for the actual services provided and whether the patients needed those services. We also concur with HCFA on the importance of systematic approaches, such as unified billing, to resolving concerns over payment for therapy services. As we stated in correspondence last September, unified billing would make it easier for Medicare to identify all the services furnished to a facility’s residents, which in turn would make it easier to control payments for those services. Other HCFA comments were incorporated in the report where appropriate. See the appendix for a copy of HCFA’s comments. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to the Secretary of HHS, the Administrator of HCFA, interested congressional committees, and other interested parties. We will also make copies available to others on request. Please call Barry Tice, Assistant Director, at (202) 512-7119 if you or your staff have any questions about this report. Other major contributors include Audrey Clayton, Andrea Kamargo, Steve Machlin, and Karen Sloan. Comments From the Health Care Financing Administration
Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the Health Care Financing Administration's (HCFA) progress in curbing overbilling for occupational, speech, and physical therapy services. GAO found that: (1) therapy charges billed to Medicare by skilled nursing facilities (SNF) and rehabilitation agencies have more than doubled since 1990; (2) some providers are exploiting weaknesses in Medicare's payment system, since HCFA places no absolute dollar limit on Medicare reimbursements for occupational and speech therapy; (3) HCFA is unable to determine whether a claim is reasonable prior to payment, since SNF are not required to specify how much therapy time they are billing for specific services; (4) HCFA has taken a number of interim actions to curb inappropriate charges, such as implementing salary equivalency guidelines for those SNF providing occupational and speech therapy services; and (5) HCFA officials have emphasized the importance of more systematic legislative approaches, such as unified billing, to solve the problem of therapy overcharges.
Background Within USDA, FSA has the overall administrative responsibility for implementing agricultural programs. FSA is responsible for, among other things, stabilizing farm income, helping farmers conserve environmental resources, and providing credit to new or disadvantaged farmers. FSA’s management structure is highly decentralized; the primary decision-making authority for approving loans and applications for a number of agricultural programs rests in its county and district loan offices. In county offices, for example, committees, made up of local farmers, are responsible for deciding which farmers receive funding for the ACP. Similarly, FSA officials in district loan offices decide which farmers receive direct loans. At the time of our review, the ACP provided funds for conservation projects that, among other things, controlled erosion resulting from planting and harvesting crops and alleviated water quality problems caused by farming, such as the pollution produced by animal waste. The federal government generally paid up to 75 percent of a project’s cost, up to a maximum of $3,500 annually. FSA, in conjunction with other departmental agencies, set national priorities for the program, and FSA allocated funds annually to the states on the basis of these priorities. The states in turn distributed funds to the county committees on the basis of the states’ priorities. Farmers could propose projects at any time during the fiscal year, and the county committees could approve the proposals at any time after the funds became available. Consequently, county committees often obligated their full funding allocation before receiving all proposals for the year. The district loan offices administer the direct loan program, which provides farm ownership and operating loans to individuals who cannot obtain credit elsewhere at reasonable rates and terms. Each district loan office is responsible for one or more counties. 
The district loan office’s agricultural credit manager is responsible for approving and servicing these loans. FSA accepts a farmer’s loan application documents, reviews and verifies these documents, determines the applicant’s eligibility to participate in the loan program, and evaluates the applicant’s ability to repay the loan. In servicing these loans, FSA assists in developing farm financial plans, collects loan payments, and restructures delinquent debt. For both the ACP and the direct loan program, as well as other programs, farmers may appeal disapproval decisions to USDA’s National Appeals Division (NAD). For the period of our review, about 7 percent of the direct loan appeals to the Division were from minority farmers. In April 1991, we reported that NAD had reversed loan application decisions at comparable rates for minorities and nonminorities. NAD’s database does not separately identify appeals from ACP applicants; we therefore could not obtain this type of data for the ACP. Recently, some minority farmers publicized their concerns that the Department, among other things, takes longer to process the loan applications of minority farmers than of other farmers and has denied debt relief to minority farmers. Subsequently, the Secretary of Agriculture promised to (1) create a civil rights action team to look at the Department’s treatment of minority farmers, as well as other related issues, and (2) hold national and statewide forums on the issue early in 1997. In addition, the Secretary suspended all farm foreclosures and asked the Department’s Office of Inspector General to review the Department’s system for handling discrimination complaints, including the length of time taken to investigate and resolve such complaints. Ongoing Efforts to Enhance Minority Farmers’ Participation in Farm Programs FSA’s efforts to achieve equitable treatment for minority farmers are overseen by the agency’s Civil Rights and Small Business Development Staff. 
To carry out its responsibilities, the Staff (1) investigates farmers’ complaints of discrimination in program decisions, (2) conducts management evaluations of FSA’s field offices to ensure that procedures designed to protect civil rights are being followed, and (3) provides equal employment opportunity (EEO) and civil rights training to its employees. In addition to these efforts, FSA recently increased its outreach activities to minority farmers to encourage their involvement in the Department’s programs, including their signing of 7-year production flexibility contracts. Civil Rights and Small Business Development Staff FSA’s Civil Rights and Small Business Development Staff is responsible for evaluating the agency’s compliance with civil rights requirements. While we did not evaluate the quality and thoroughness of the Staff’s activities, we noted that a number of efforts are ongoing in this area. The Staff has investigated a number of discrimination complaints filed by farmers. During fiscal years 1995 and 1996, the Staff closed 28 cases in which discrimination was alleged on the basis of race or national origin. In 26 of these cases, the Staff found no discrimination. In the other two cases, the Staff found that FSA employees had discriminated on the basis of race in one case and national origin in the other. USDA has not resolved how it will deal with the employees and compensate the affected farmers. As of January 7, 1997, the Staff had 110 cases of discrimination alleged on the basis of race or national origin under investigation. Ninety-one percent of these cases were filed since January 1, 1995. In addition to investigating individual complaints of discrimination, the Staff periodically evaluates state and county offices’ compliance with EEO and civil rights requirements as part of its routine assessments of these offices’ overall operations. During fiscal years 1995 and 1996, the Staff evaluated management activities within 13 states. 
None of the evaluations concluded that minority farmers were being treated unfairly. Beginning in 1993, the Staff began to present revised EEO and civil rights training to all FSA state and county employees. About half of the FSA employees have been trained, according to the Staff, and all are scheduled to complete this training by the end of 1997. The training covers such areas as civil rights (program delivery) and EEO counseling, mediation, and complaints. Outreach to Minority Farmers FSA has efforts under way to inform all farmers about USDA’s programs, as well as special efforts to keep minority farmers informed about and enrolled in these programs. To reach all farmers, county offices maintain updated mailing lists and, through periodic newsletters and other announcements, keep all those who own or operate farms in a county informed about new programs and program requirements. In addition to its general outreach activities, FSA has specific efforts to increase minority farmers’ participation in agricultural programs. For example, since September 1993, the Small Farmer Outreach Training and Technical Assistance Program has made grants available to at least 28 entities for outreach and assistance to minority farmers. These entities include such institutions as historically black and Native American colleges and universities. Among other things, grant recipients assist applicants in applying for loans and in developing sound farm management practices. Over 2,500 FSA borrowers have been served by these efforts. FSA has also assisted Native American farmers by establishing satellite offices on reservations. More recently, in July 1996, FSA created an outreach office to increase minority farmers’ knowledge of, and participation in, the Department’s agricultural programs. According to FSA officials, the office is currently identifying the outreach services that FSA already provides to minority farmers and is assessing the need for additional efforts. 
FSA hired the Federation of Southern Cooperatives to increase the number of minority farmers participating in the 1996 farm bill’s 7-year production flexibility contracts. It has also trained members of the Rural Coalition in the process for electing county committee members. The Rural Coalition consists of several grass-roots groups that provide outreach to minorities in order to increase the number of minorities nominated and elected to county committees. Employment of Minority Staff in County Offices and Representation of Minority Farmers on County Committees In the 101 counties with the highest numbers of minority farmers, representing 34 percent of all minority farmers in the nation, FSA employees and county committee members were often members of a minority group. About one-third of the employees were members of a minority group, and slightly more than one-third of the county committees had at least one minority farmer as a committee member. Minority Employment in County Offices Thirty-two percent of the FSA employees serving the 101 counties with the highest numbers of minority farmers were members of a minority group. Eighty-nine percent of these employees were county executive directors, who manage the operations of FSA’s programs, or program assistants, who, among other things, provide information on programs to farmers. In the offices serving 77 of these counties, at least one staff member was from a minority group, and in 29 of these offices, the executive director was a member of a minority group. In these 101 counties, minority farmers make up about 17 percent of the farmer population. At the time of our visits, 7 of the 10 county and district loan offices included in our review had at least one minority employee. 
The executive directors of two county offices, Holmes, Mississippi, and Duval, Texas, were members of a minority group, as were the managers of two district loan offices, Elmore, Alabama, and Jim Wells, Texas, and the deputy managers of three district loan offices, Holmes, Jim Wells, and Byron, Georgia. The number of minority employees could change as FSA continues its current reorganization. FSA plans to decrease its field structure staff from 14,683 in fiscal year 1993 to 11,729 in fiscal year 1997—a change of about 20 percent. We do not know how this reduction will affect the number of minority employees in county and district loan offices. Minority Representation on County Committees Until recently, FSA required that in any county in which minority owners and operators accounted for 5 percent or more of those eligible to vote in committee elections, a minority farmer must be placed on the ballot. FSA further required that if these counties did not elect a minority farmer to the county committee, the committee must appoint a minority adviser. As we reported in 1995, minority farm owners and operators, nationwide, accounted for about 5 percent of those eligible to vote for committee members, and about 2 percent of the county committee members came from a minority group. In our current review, we found that for the 101 counties with the highest numbers of minority farmers, 36 had at least one minority farmer on the county committee. In the five county offices we visited, two committees had minority members and the other three had minority advisers. In February 1996, the President issued a memorandum directing federal agencies to apply revised standards to their affirmative action programs to take into account changes that have occurred since the programs were first instituted. 
As a result, according to the Department, FSA can no longer require that minorities be placed on the county committee ballots in counties where 5 percent or more of the eligible voters are members of a minority group. However, FSA officials informed us that their policy requires state committees to ensure that county committees fairly represent all agricultural producers in their jurisdiction and that, when needed, minority advisers be used to ensure minority representation. Reasons Provided for Disapprovals of ACP and Direct Loan Applications According to FSA’s data, applications for the ACP for fiscal year 1995 and for the direct loan program from October 1994 through March 1996 were disapproved at a higher rate nationwide for minority farmers than for nonminority farmers. To develop an understanding of the reasons for disapprovals, we examined the files for applications submitted under both programs during fiscal years 1995 and 1996 in five county and five district loan offices. We chose these offices because they had higher disapproval rates for minority farmers or because they were located in areas with large concentrations of farmers from minority groups. Reasons for Disapproval of ACP Applications Nationally, during fiscal year 1995, the disapproval rates for applications for ACP funds were 33 percent for minority farmers and 27 percent for nonminority farmers. We found some differences in the disapproval rates for different minority groups. Specifically, 25 percent of the ACP applications from Native American and Asian American farmers were disapproved, while 34 percent and 36 percent of the applications from African American and Hispanic American farmers, respectively, were disapproved. To develop an understanding of the reasons why disapprovals occurred, we examined the ACP applications for fiscal years 1995 and 1996 at five county offices. 
Table 1 shows the number of ACP applications during this period from minority and nonminority farmers in each of the five counties, as well as the number and percent of applications that were disapproved. When ACP applications were received in the county offices we visited, they were reviewed first for compliance with technical requirements. These requirements included such considerations as whether the site was suitable for the proposed project or practice, whether the practice was still permitted, or whether the erosion rate at the proposed site met the program’s threshold requirements. Following this technical evaluation, if sufficient funds were available, the county committees approved all projects that met the technical evaluation criteria. This occurred for all projects in Dooly County and for a large majority of the projects in Glacier County. In Holmes County, the county committee ranked projects for funding using a computed cost-per-ton of soil saved, usually calculated by the Department’s local office of the Natural Resources Conservation Service. The county committee then funded projects in order of these savings until it had obligated all funds. In the remaining two counties, Russell and Duval, the county committees, following the technical evaluations, did not use any single criterion to decide which projects to fund. For example, according to the county executive director in Russell County, the committee chose to fund several low-cost projects submitted by both minority and nonminority farmers rather than one or two high-cost projects. It also considered, and gave higher priority to, applicants who had been denied funds for eligible projects in previous years. In contrast, the Duval county committee decided to support a variety of farm practices. Therefore, it chose to allocate about 20 percent of its funds to projects that it had ranked as having a medium priority. These projects were proposed by both minority and nonminority farmers. 
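The Holmes County procedure described above (rank eligible projects by computed cost per ton of soil saved, then fund in rank order until the allocation is obligated) amounts to a simple greedy selection. A minimal sketch follows; the project names, costs, tonnage figures, and allocation are invented for illustration and are not taken from FSA records.

```python
# Hedged sketch of the Holmes County ranking procedure described above:
# fund projects in order of lowest cost per ton of soil saved until the
# county's allocation is obligated. All project data here are hypothetical.

def fund_projects(projects, allocation):
    """projects: list of (name, cost, tons_saved); returns names funded, in rank order."""
    ranked = sorted(projects, key=lambda p: p[1] / p[2])  # cost per ton, ascending
    funded = []
    remaining = allocation
    for name, cost, _tons in ranked:
        if cost > remaining:
            break  # allocation exhausted for the next-ranked project
        funded.append(name)
        remaining -= cost
    return funded

# Illustrative projects: (name, cost in dollars, tons of soil saved)
projects = [("terrace", 3000, 600), ("waterway", 2000, 250), ("pond", 3500, 400)]
print(fund_projects(projects, 6500))  # terrace ($5/ton) and waterway ($8/ton) fit
```

The other counties visited, as noted above, did not use a single ranking criterion, so this sketch applies only to the Holmes County approach.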
In the aggregate, 98 of 271 applications from minority farmers were disapproved in the five county offices we visited. Thirty-three were disapproved for technical reasons and 62 for lack of funds. FSA could not find the files for the remaining three minority applicants. We found that the applications of nonminority farmers were disapproved for similar reasons. Of the 305 applications from nonminority farmers we reviewed, 106 were disapproved. Fifty-three were disapproved for technical reasons and 52 for lack of funds. FSA could not find the file for the remaining applicant. Approval and disapproval decisions were supported by material in the application files, and the assessment criteria used in each location were applied consistently to applications from minority and nonminority farmers. Reasons for Disapproval of Direct Loan Applications Nationally, the vast majority of all applicants for direct loans have their applications approved. However, the disapproval rate for minority farmers is higher than for nonminority farmers. From October 1994 through March 1996, the disapproval rate was 16 percent for minority farmers and 10 percent for nonminority farmers. We found some differences in the disapproval rates for different minority groups. Specifically, 20 percent of the loan applications from African American farmers, 16 percent from Hispanic American farmers, 11 percent from Native American farmers, and 7 percent from Asian American farmers were disapproved. To assess the differences in disapproval rates, we examined the direct loan applications for fiscal years 1995 and 1996 at five district loan offices. Table 2 shows the number of applications for direct loans during this period for minority and nonminority farmers in each of the five districts, as well as the number and percent of applications disapproved.
Our review of the direct loan program files in these locations showed that FSA’s decisions to approve and disapprove applications appeared to follow USDA’s established criteria. These criteria were applied to the applications of minority and nonminority farmers in a similar fashion and were supported by materials in the files. The process for deciding on loan applications is more uniform for the direct loan program than for the ACP. The district loan office first reviews a direct loan application to determine whether the applicant meets the eligibility criteria, such as being a farmer in the district, having a good credit rating, and demonstrating managerial ability. Farmers who do not demonstrate this ability may take a course, at their own expense, to meet this standard. If the applicant meets these criteria, the loan officer determines whether the farmer meets the requirements for collateral and has sufficient cash flow to repay the loan. These decisions are based on the Farm and Home Plan—the business operations plan for the farmer—prepared by the loan officer with information provided by the farmer. If the collateral requirements and the cash flow are sufficient, the farmer generally receives the loan. In the five district loan offices we visited, 22 of the 115 applications from minority farmers were disapproved. Twenty were disapproved because the applicants had poor credit ratings or inadequate cash flow. One was disapproved because the applicant was overqualified and was referred to a commercial lender. In the last case, the district loan office was unable to locate the loan file because it was apparently misplaced in the departmental reorganization. However, correspondence dealing with this applicant’s appeal to NAD indicates that the application was disapproved because the applicant did not meet the eligibility criterion for recent farming experience. NAD upheld the district loan office’s decision. 
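The two-stage review sequence described above — eligibility first, then collateral and cash flow based on the Farm and Home Plan — can be sketched as follows. The field names are illustrative only and do not reflect FSA's actual forms or data model.

```python
# Hedged sketch of the direct loan review sequence described above.
# Stage 1 checks eligibility; stage 2 checks collateral and cash flow
# from the Farm and Home Plan. Field names are hypothetical.

def review_application(app):
    # Stage 1: eligibility (farming in the district, credit rating, managerial ability)
    if not (app["farms_in_district"] and app["good_credit"] and app["managerial_ability"]):
        return "disapproved: eligibility"
    # Stage 2: collateral and cash flow, drawn from the Farm and Home Plan
    if not (app["collateral_sufficient"] and app["cash_flow_sufficient"]):
        return "disapproved: collateral or cash flow"
    # If both stages pass, the farmer generally receives the loan
    return "approved"

app = {"farms_in_district": True, "good_credit": True, "managerial_ability": True,
       "collateral_sufficient": True, "cash_flow_sufficient": False}
print(review_application(app))  # disapproved: collateral or cash flow
```

In the files we reviewed, most disapprovals corresponded to the second stage (poor credit ratings or inadequate cash flow), consistent with the counts reported above.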
The Department allows all farmers to appeal adverse program decisions made at the local level through NAD. The division conducts administrative hearings on program decisions made by officers, employees, or committees of FSA and other USDA agencies. The applications of nonminority farmers that we reviewed were disapproved for similar reasons. Of the 144 applications from nonminority farmers we reviewed, 15 were disapproved. Nine were disapproved because of poor credit ratings or inadequate cash flow; five were disapproved because the applicants did not meet eligibility criteria; and one was disapproved because of insufficient collateral. Additionally, in reviewing the 129 approved applications of nonminority farmers, we did not find any that were approved with evidence of poor credit ratings or insufficient cash flow. For the period of our review, we also wanted to obtain information on whether FSA was more likely to foreclose on loans to minority farmers while restructuring or writing down loans to nonminority farmers. We found only one foreclosed loan—for a nonminority farmer—in the five district loan offices we reviewed. We also found 62 cases in which FSA restructured delinquent loans. Twenty-two of these were for minority farmers. Finally, the amount of time FSA takes to process applications from minority and nonminority farmers is about the same. Nationwide, from October 1994 through March 1996, FSA took an average of 86 days to process the applications of nonminority farmers and an average of 88 days to process those of minority farmers. More specifically, for African Americans, FSA took 82 days; for Hispanic Americans and Native Americans, 94 days; and for Asian Americans, 97 days. Agency Comments We provided a draft of this report to USDA for its review and comment. 
Subsequently, we met with departmental officials—FSA’s Administrator of Farm Services; the Director, Civil Rights and Small Business Development Staff; and the Special Assistant to the Director of NAD—to discuss the information in this report. These officials generally agreed with the information discussed. They provided some clarifying comments that we have incorporated into the report where appropriate. Scope and Methodology To identify FSA’s efforts to achieve equitable treatment of minority farmers in the delivery of program services, we interviewed FSA officials and examined documents concerning the efforts of the Civil Rights and Small Business Development Staff. We did not, however, examine the processes used to investigate complaints of discrimination or the time this office takes to investigate and resolve farmers’ complaints. The Secretary of Agriculture has asked the Department’s Office of Inspector General to examine these issues. We also identified minority staffing and minority representation on county committees in the 101 counties that had the highest numbers of minority farmers. The minority farmer population in these counties represents 34 percent of all minority farmers in the nation, according to the 1992 Census of Agriculture. Finally, we visited five county offices and five district loan offices to examine in detail the treatment of minority farmers at the local level. See appendix I for a detailed discussion of our methodology. We conducted our work from April 1996 through January 10, 1997, in accordance with generally accepted government auditing standards. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days after the date of this letter. At that time, we will send a copy of this report to the Secretary of Agriculture. We will also make copies available to others on request. 
Please call me at (202) 512-5138 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix II. Detailed Methodology From the Civil Rights and Small Business Development Staff, we obtained information on the cases of alleged discrimination closed in fiscal years 1995 and 1996 and obtained copies of and analyzed the Staff's management evaluations for the same period. Finally, we obtained information on the equal employment opportunity and civil rights training the Staff has provided to the Farm Service Agency's (FSA) employees. However, we did not evaluate the adequacy of this training. To examine minority staffing in county offices and minority representation on county committees, we first used the 1992 Census of Agriculture to identify the 101 counties with the highest numbers of minority farmers. We then used the Department's databases to obtain information on the number of minority staff serving the 101 counties and the number of minority farmers on the county committees in each of these counties. We also used the Department's database to obtain information on the number of minority county executive directors serving the 101 counties. To examine the treatment of minority farmers at the local level, we visited county offices and district loan offices that either had higher disapproval rates for minority farmers than for nonminority farmers (for the Agricultural Conservation Program (ACP) in fiscal year 1995 and for the direct loan program from October 1994 to March 1996) or had high numbers of minority farmers. Within the five county and five district loan offices, we reviewed the applications for the ACP and the direct loan program for fiscal years 1995 and 1996. For the Glacier, Montana, and Jim Wells, Texas, offices, we reviewed all applications through September 30, 1996. For the other offices, we limited our review to the applications that had been completely processed at the time of our visit.
These visits took place between June and October 1996. The offices we visited were located in Alabama, Georgia, Mississippi, Montana, and Texas. At the county and district loan offices we visited, we reviewed 576 ACP and 259 direct loan files for information on the reasons for disapproval and approval in fiscal years 1995 and 1996. We determined whether farmers’ applications were acted upon in accordance with established criteria and whether information to support decisions was contained in the files. We examined nonminority farmers’ applications to determine whether they were approved when similar applications from minority farmers were disapproved. Our review focused exclusively on the documentation contained in the files of the county and district loan offices we visited. We did not directly contact any farmers to discuss any of the information contained in these files or independently verify the information contained in these files. To determine the extent to which direct loan disapprovals are appealed to the National Appeals Division (NAD), we obtained appeals statistics from NAD. NAD’s database does not separately identify the appeals from ACP applicants; we therefore could not obtain this type of data for the ACP. To determine whether minority farmers received equitable treatment nationally, we would have had to visit at least 30 county offices and 30 district loan offices to have statistically valid results. To provide valid estimates, we would have had to select the offices on a statistical basis. This effort would have significantly increased the resources and time necessary to conduct the review. Additionally, whenever a sample from a universe is selected, the estimates made are always subject to error introduced by the sampling procedure. When samples are small, as they would have been in this case, the estimation error tends to be quite large. The only way to decrease the estimation error is to increase the sample size. 
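The point above about estimation error shrinking only with larger samples can be illustrated with the standard error of a sampled proportion, sqrt(p(1-p)/n). The disapproval rate and office counts below are hypothetical and are used only to show the shape of the relationship, not to reproduce our analysis.

```python
# Illustrative only: how the approximate 95% margin of error for an
# estimated proportion shrinks as the number of sampled offices grows.
# The rate p and the sample sizes are hypothetical.
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated from n offices."""
    return z * math.sqrt(p * (1 - p) / n)

p = 0.30  # hypothetical disapproval rate
for n in (10, 30, 120):
    print(f"n={n}: +/- {margin_of_error(p, n):.3f}")
```

The margin roughly halves each time the sample size quadruples, which is why shrinking the estimation error materially would have required far more office visits than our resources allowed.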
Therefore, we determined that judgmentally selecting a limited number of offices for case studies represented a more efficient use of our resources. Major Contributors to This Report Robert C. Summers, Assistant Director Fredrick C. Light, Evaluator-in-Charge Jerry Hall Natalie Herzog Paul Pansini Stuart Ryba Carol Herrnstadt Shulman
Pursuant to a congressional request, GAO reviewed the Farm Service Agency's (FSA) efforts to conduct farm programs in an equitable manner, focusing on: (1) FSA efforts to treat minority farmers in the same way as nonminority farmers in delivering program services; (2) minority representation in county office staffing and on county committees in the counties with the highest numbers of minority farmers; and (3) the disposition of minority and nonminority farmers' applications for participation in the Agricultural Conservation Program (ACP) and the direct loan program at the national level and in five county and five district loan offices. GAO found that: (1) FSA's Civil Rights and Small Business Development staff oversees the agency's efforts to achieve equitable treatment for minority farmers; (2) in fiscal years 1995 and 1996, the staff closed 28 complaints of discrimination against farmers on the basis of race or national origin, and found discriminatory practices in 2 of the 28 cases; (3) in addition, as part of its routine assessments of FSA's overall operations in 13 states, the staff assessed the performance of the agency's employees in treating all farmers equitably, but none of the evaluations found that minority farmers were being treated unfairly; (4) the staff has also trained about one-half of FSA's employees in equal employment opportunity and civil rights matters and expects to finish training all of the employees by the end of 1997; (5) in July 1996, FSA created an outreach office to increase minority farmers' participation in, and knowledge of, the Department's agricultural programs; (6) at the time of GAO's review, 32 percent of FSA's employees serving the 101 counties with the highest numbers of minority farmers were members of a minority group; (7) about 90 percent of these employees were county executive directors or program assistants involved in conducting and managing FSA programs; (8) minority farmers make up about 17 percent of the farmer
population in these counties; (9) at the national level, FSA data show that applications for the ACP in fiscal year 1995 and for the direct loan program from October 1994 through March 1996 were disapproved at a higher rate for minority farmers than for nonminority farmers; (10) three of the five county offices GAO visited had higher disapproval rates for minority farmers than for nonminority farmers applying to the ACP, and three of the five district loan offices GAO visited had higher disapproval rates for minority farmers than for other farmers applying for the direct loan program; and (11) GAO's review of the information in the application files in these offices showed that decisions to approve or disapprove applications were supported by information in the files and that decision-making criteria appeared to be applied to minority and nonminority applicants in a similar fashion.
Background The purpose of the Endangered Species Act is to conserve threatened and endangered species and the ecosystems upon which they depend. Currently, there are about 1,300 threatened and endangered species protected under the act and approximately 280 candidate species that may eventually warrant future protection under the act. The Endangered Species Act generally requires that the Secretary of the Interior (or the Secretary of Commerce for species under its jurisdiction) designate critical habitat for protected species—that is, habitat essential to a species’ conservation—and develop recovery plans that include actions necessary to bring species to the point that they no longer need the act’s protection. The act requires all federal agencies to utilize their authorities, in consultation with the Secretaries of the Interior or Commerce, to carry out programs for the conservation of threatened and endangered species. In addition, where a federal agency action may affect a listed species or its critical habitat, the act requires the agency to consult with the relevant secretary to ensure that the action is not likely to jeopardize the continued existence of any protected species or adversely modify critical habitat. Federal agencies assess the potential effects proposed projects may have on protected species and may modify projects to avoid harmful effects. We have previously reported that these consultations often take longer than the allotted timeframes and frustrate federal agency officials and private parties involved in this process. Protecting habitat is an important component of recovering many threatened and endangered species, as habitat loss is a leading cause of species decline. Habitat destruction and degradation are caused by many factors, including land conversion (e.g., for home and road building or commercial development) and logging activities, such as the construction of logging roads and other forest management practices.
In some situations, agricultural activities, such as diverting water for irrigation purposes, livestock grazing, and applying pesticides and fertilizers, can contribute to habitat destruction or degradation. However, the extent to which such activities affect species and their habitats is a function of many factors, including the nature of the agricultural activity and its proximity to the species. Despite agriculture’s impact on habitat, agricultural land is widely recognized as vital to the protection of the nation’s environment and natural resources. As such, USDA operates approximately 20 conservation programs designed to address a range of environmental concerns—such as soil erosion, surface and ground water quality, loss of wildlife habitat and native species, air quality, and urban sprawl—by compensating landowners for taking certain lands out of agricultural production or employing conservation practices on land in production. USDA has established regulations governing these programs, including eligibility requirements pursuant to authorizing statutes. Depending on the program, decisions about the projects to fund occur at the national, state, or local levels. Table 1 summarizes the six USDA programs included in our review. While the authorizing statutes for each of these programs include measures designed to benefit wildlife and wildlife habitat, WHIP is the only program whose authorizing legislation specifically mentions the development of habitat for threatened and endangered species. However, USDA includes protecting habitat for threatened, endangered, and other at-risk species in the national priorities it developed for EQIP and WHIP in 2006. While billions of dollars have been invested in conservation practices through these USDA programs over the years, including actions to benefit wildlife, clear data on the effects of these programs have been relatively limited and many questions remain regarding the conservation impacts of these practices.
As a result, USDA is currently engaged in an effort to quantify the environmental benefits of its conservation program practices. This effort, known as the Conservation Effects Assessment Project, began in 2003 and has three primary components: (1) an assessment of national summary estimates of conservation practice benefits and the potential for USDA conservation programs to meet the nation’s environmental and conservation goals; (2) watershed assessments, involving basic research on conservation practices in selected watersheds, to provide a framework for evaluating and improving the performance of national assessment models; and (3) the development of bibliographies and literature reviews on conservation programs to document what is and is not known about the environmental benefits of conservation practices and programs for cropland and fish and wildlife. Incentives and Disincentives to Participating in USDA Conservation Programs to Benefit Threatened and Endangered Species, and Suggestions for Addressing Disincentives Survey respondents identified various incentives and disincentives to participating in the six conservation programs we reviewed for the benefit of threatened and endangered species, as well as suggestions to address the disincentives. The most frequently identified incentives were financial benefits, program evaluation criteria that give projects directly addressing threatened and endangered species greater chances of being funded, and landowners’ personal interest in conservation. Financial issues were also identified as disincentives to participating in these programs; the limited funding available to the programs overall, and to individuals specifically, was the disincentive most frequently identified by survey respondents.
The other most frequently identified factors limiting participation were fears about federal government regulations, administrative and paperwork requirements, participation and eligibility requirements, and potential limits on current and future uses of the enrolled land. The most frequently identified suggestions for encouraging greater participation were increasing funding, improving education and outreach to landowners, streamlining paperwork requirements, and allowing greater flexibility in program participation and eligibility requirements. Respondents noted that while some of these suggestions may serve to increase participation in the programs, they may not necessarily benefit threatened and endangered species. Incentives for Participating in USDA Conservation Programs to Benefit Threatened and Endangered Species As might be expected, respondents most frequently identified financial benefits as the primary incentive to participating in the six USDA conservation programs we reviewed for the benefit of threatened and endangered species or their habitat. Program evaluation criteria that give projects directly addressing threatened, endangered, or other at-risk species greater chances of being accepted and landowners’ personal interest in conservation were the next most frequently identified incentives. Financial Benefits Survey respondents most frequently identified financial benefits as a primary incentive for a landowner to participate in the conservation programs we reviewed. Several types of financial benefits were identified as encouraging participation, including annual rental payments, cost-share assistance, enhancement and incentive payments, and conservation easement payments. Annual rental payments. Annual rental payments are available to producers enrolled in two of the six USDA programs we reviewed—CRP and GRP. 
Annual rental payments provide landowners with a guaranteed source of income for their land in exchange for agreeing to participate in multi-year contracts in order to provide sustained conservation benefits. For example, under CRP, FSA provides annual rental payments for 10 to 15 years to participants who convert land in agricultural production to less intensive uses, such as establishing grasses and other vegetative covers to, among other things, control soil erosion and enhance wildlife habitat. Cost-share payments. Cost-share assistance is available through each of the six programs we reviewed. In this report we use “cost-share assistance” to mean a payment by USDA for a certain percentage of the cost of implementing an approved conservation practice, where the participant and—depending on the program—public agencies, nonprofit organizations, or others contribute the remaining amount. For instance, under EQIP, NRCS may pay up to 75 percent of the costs of implementing conservation practices, such as manure management facilities, that are important to improving and maintaining the health of the environment and natural resources. While EQIP may provide cost-share percentages of as much as 75 percent, each NRCS state office may determine its own percentage per conservation practice, within statutory limits. For example, an agency official from Hawaii explained that EQIP participants may receive the 75 percent maximum cost share allowed in the program for 12 of 51 accepted conservation practices that have been determined to provide the greatest environmental benefits; these 12 practices include some that benefit threatened and endangered species, such as fencing out feral animals and planting native trees. The remaining 39 practices are eligible for a 50 percent cost share.
WHIP also provides cost-share payments, offering a higher level of cost-share assistance to participants who enter into 15-year agreements and undertake projects in areas that NRCS has identified as essential habitat for certain species. A respondent from Ohio explained that sharing the cost of implementing conservation practices through WHIP has allowed producers to convert land that was unsuitable for farming to woodlands, which has helped wildlife by reducing land fragmentation in the state. Enhancement and incentive payments. Enhancement and incentive payments are additional types of financial benefits available in CRP, CSP, and EQIP. In general, enhancement and incentive payments provide a participant additional funding—beyond the annual or cost-share payments available in these programs—for implementing practices that can improve a resource condition beyond what is required for program eligibility. Enhancement payments in some states focus on benefiting targeted species, as determined by USDA state officials or local stakeholders. For example, an NRCS local office in New Mexico—with support from a local EQIP working group and approval by the NRCS state conservationist—offers an annual incentive payment for landowners to defer grazing on enrolled lands that benefit the lesser prairie-chicken, a candidate species for listing under the Endangered Species Act. Similarly, according to an official in Colorado, enhancement payments are geared toward landowners whose projects benefit state-selected species of concern. Easement payments. Landowners can also receive payments by entering into easement agreements with USDA; easement payments can be made to participants in GRP and WRP. Under these programs, the landowner essentially agrees to restrictions on how the enrolled land will be managed for the length of the agreement in return for an easement payment.
Compared to the temporary duration of the other financial incentives offered by USDA programs, what is most distinctive about easements is the long-term or permanent character of the restriction on future development of enrolled land. Two easement options are available under GRP and WRP—30 years or permanent. According to one respondent, the incentive for pursuing an easement is the long-term certainty that landowners will be adequately compensated for making habitat improvements. Under WRP, a participant agreeing to a permanent easement may also receive a higher cost-share percentage. Specifically, these participants may receive up to 100 percent of the cost needed to implement projects to enhance or restore wetlands. For these landowners, the combined financial incentive available under WRP—the permanent easement payment and higher-than-typical cost-share payments—can help provide a return on land that is marginally productive. For example, according to an agency official, participating in WRP in Washington allows landowners to be compensated for creating wetlands to benefit salmon species, including some that are threatened and endangered, on agricultural lands where production is limited by high water tables and flooding. Program Criteria That Give Greater Consideration to Projects that Directly Address Threatened and Endangered Species Another frequently identified incentive for landowner participation for the benefit of threatened and endangered species or their habitat—in all but one (CSP) of the six USDA conservation programs we reviewed—was program evaluation criteria that give projects directly addressing threatened, endangered, or other at-risk species greater chances of being approved. These criteria are one of several factors used to evaluate and rank applications for program participation and funding.
Respondents explained that there is an incentive to include activities that directly address threatened, endangered, or other at-risk species in applicants’ projects if these activities receive extra ranking points, thereby increasing their likelihood of being accepted and funded by a USDA conservation program. Including criteria for threatened, endangered, and other at-risk species in the ranking process is done primarily by giving more points to projects that address specific species, geographic areas, or habitat types. For example, according to an Oklahoma agency official, the state-level WHIP application ranking process in Oklahoma includes criteria that give more points to projects that develop or restore habitat for the threatened Arkansas River shiner and the lesser prairie-chicken (a candidate species). In Colorado, between 5 and 25 percent of EQIP funds, per a specific watershed area, are spent on projects that address wildlife or enhance riparian and wetland habitat. Such funding has been used to target a state species of concern, the sage grouse, and federally listed threatened and endangered species such as the Preble’s meadow jumping mouse. In Montana, in addition to providing greater ranking points to WHIP projects that directly benefit threatened and endangered species, NRCS offers EQIP special initiatives that are designed to address natural resource concerns that may not be addressed through traditional EQIP practices or that are determined to be such a critical need that a separate funding opportunity is warranted. Approximately 20 percent of Montana’s EQIP funding is directed toward these special initiatives, some of which directly target creating benefits for threatened, endangered, and other at-risk species, such as the gray wolf and grizzly bear. Eligible applicants who reside in areas that are the focus of the special initiatives, and who are willing to implement specific practices, are likely to receive funding.
Landowners’ Personal Interest in Conservation A landowner’s personal interest in conservation was also among the most frequently identified incentives to participate in USDA conservation programs for each of the six programs we reviewed. Many respondents explained that landowners were interested in providing habitat that could support wildlife, both for their own personal enjoyment and for the general welfare of species, while others articulated a desire to provide safe habitat for threatened and endangered species specifically. This incentive was frequently identified for programs that are specifically geared toward benefiting wildlife, such as WRP and WHIP. Many respondents explained that, for people who are concerned about wildlife, the goals of these two programs were themselves an incentive to participate. Respondents explained that individuals have their own personal or ethical motivations to establish habitat; according to one respondent, some landowners would do so regardless of program funding. However, as noted by another respondent, with the financial support offered by these programs, the landowner has more resources with which to better establish such habitat and benefit species. Many respondents also identified benefiting wildlife as an important incentive for participating in CRP. For example, one respondent from Georgia explained that while receiving financial assistance was the most important incentive for participating in CRP, the indirect benefit of helping to re-establish an ecosystem that provides a safe environment for certain species was also an incentive. Disincentives to Participating in USDA Conservation Programs to Benefit Threatened and Endangered Species Survey respondents most frequently identified limited funding as a primary disincentive to participating for the benefit of threatened and endangered species or their habitat in the six USDA conservation programs we reviewed.
Fears about federal government regulations, administrative and paperwork requirements, participation and eligibility requirements, and the potential for current or future agricultural uses to be harmed or restricted were the other most frequently identified factors limiting participation. Limited Funding for Programs and Participants Survey respondents most frequently identified limited funding and funding uncertainty, both for the programs in general and for the individual payments offered to program participants, as disincentives for participating in four of the six programs reviewed—CRP, EQIP, GRP, and WHIP. Respondents frequently stated that there was not enough funding available for the programs to accept all eligible applications. Several respondents explained that a lack of program funding can deter applicants, particularly when those with credible, highly ranked applications do not receive funding. According to one respondent, continuous rejection may result in some landowners choosing to sell their property; selling portions of a property can make retaining the rest of the land economically feasible, an alternative to repeatedly applying for conservation program funds. Uncertainty about program funding levels can also discourage participation. For example, a respondent from Florida said that it is hard for landowners to plan for conservation if program funding levels are not known from year to year, or if there is uncertainty about whether the program and its objectives will change. In addition to limited funding in general, many respondents identified limited or insufficient financial payments to program participants as a disincentive. According to many respondents, landowners may be hesitant to participate in a conservation program because the cost share provided by the programs is insufficient.
For example, one respondent said that funding amounts available for certain conservation practices do not cover the costs of implementing them, particularly for EQIP and WHIP. Respondents also reported that the financial benefits of implementing conservation practices were often not competitive with the financial gain a landowner could realize by, for example, planting a commodity crop or selling the land to a developer. One respondent from Washington said that the profit margins for farmers are so low that having to cover a 50-percent share of a project’s costs is too much, especially if there are no other economic benefits from implementing the conservation practice. Others stated that even a 75-percent cost share may not be enough for some landowners. Fears About Government Regulations Fears about government regulations were among the most frequently cited factors limiting participation in USDA conservation programs for all six of the programs we reviewed. Respondents indicated that landowners fear that participating in a conservation program would expose their operations to greater scrutiny, including potential restrictions under the Endangered Species Act, should they adopt conservation measures that result in creating habitat for a threatened or endangered species on their land. For example, a respondent from Florida noted that landowners considering enrolling in a program may be deterred by the prospect of surveys and assessments for threatened and endangered species on their land. Similarly, out of concern about potential regulatory impacts under the Endangered Species Act, landowners are hesitant to take actions that would help the threatened Chiricahua leopard frog, which has adopted livestock watering tanks as a safe habitat because of the loss of its native habitat.
According to one respondent in Minnesota, some farmers in the state do not take conservation actions under USDA programs that may benefit the prairie fringed orchid—a threatened species—fearing that enrolled lands supporting the orchid may cause the species to grow in adjacent, non-enrolled lands. Respondents also explained that some landowners are generally averse to any government intervention and seek to avoid governmental monitoring, even if they could receive financial or technical assistance in return. Administrative and Paperwork Requirements Burdensome administrative and paperwork requirements were also among the most frequently mentioned factors limiting participation in all six of the programs we reviewed. According to several respondents, the entire process of receiving funds from these conservation programs is lengthy, which acts as a disincentive to participating. This process generally includes applying to the program, adopting a conservation practice, and receiving payment. For example, one respondent from Ohio said that it can take almost a year from submitting an application to starting work on the ground. Respondents explained that the timing of the application process is also a concern for landowners. For example, a respondent from Arkansas noted that the EQIP application process starts in the spring when farmers are often busy, typically preparing their lands for planting. If the process started in the winter, it would allow farmers more time to devote to the application process. Respondents also indicated that the sheer volume of paperwork, as well as the degree of personal information required to participate, can overwhelm people and discourage them from applying for the programs. Several respondents indicated that when landowners examine a conservation program’s lengthy contract and its stipulations, they find the process intimidating and do not apply.
In addition, some respondents said they feel that the relatively small amount of money available in the programs is not enough to justify the large amount of paperwork required to apply. One respondent said that filling out all of the forms is particularly burdensome for landowners with smaller farms, and that such landowners cannot afford to spend time tracking down the information for the forms when they instead need to be working on their land. Furthermore, CSP encourages participants to perform self-certification and develop conservation plans. These additional recordkeeping responsibilities can deter potential participants. Some respondents stated that landowners may not have adequate records to prove that they meet the extensive eligibility requirements for a program. Moreover, some respondents told us that some potential applicants avoid participating because of application requirements to divulge personal information, such as their adjusted gross income, work history, and background. Finally, according to some survey respondents, obtaining necessary permits to implement conservation practices can slow down an already long process. For instance, one respondent from Washington told us that the permitting process for implementing in-stream projects for threatened and endangered fish is lengthy and inefficient, and may require the involvement of multiple stakeholders, including USDA, FWS, the National Marine Fisheries Service, state departments of fish and wildlife and ecology, as well as county and local permitting agencies. While the issuance and approval of the permits are not the responsibility of USDA, from the applicant’s perspective, these permits add to the burdensome nature of applying for USDA funds. Participation and Eligibility Requirements Also among the most frequently cited disincentives to participating in all of the six programs was that some of the programs’ participation requirements were too restrictive and inflexible.
A number of respondents told us that program requirements about what can and cannot be performed in a conservation project are too rigid, and often do not include the very components that are necessary for achieving the intended conservation benefit. For example, limitations on grazing under CRP and GRP were cited by numerous respondents as inflexible. While grazing restrictions were established, in part, to improve ground cover for ground-nesting birds such as the lesser prairie-chicken, some respondents contend that the restrictions may actually provide less benefit to some species. An agency official from Oregon explained that the inability to disturb grass stands under 10-year CRP contracts could be counterproductive, because while the undisturbed grass is viable and beneficial for wildlife in the first 5 to 6 years, it will then begin to die out, and could present a fire hazard for the landowner; it is possible that a fire could also result in the destruction of important habitat. This respondent further explained that while ground-nesting species may use the undisturbed grass for protection, allowing grass to grow too tall deters insects and ungulates from using the area and breaking up the sod, which is critical to maintaining healthy grasses. Respondents also told us that landowner eligibility requirements can serve to restrict participation by landowners interested in benefiting threatened and endangered species. For instance, the adjusted gross income requirement for participation renders a number of landowners ineligible, and according to some respondents, these ineligible landowners might have applied if permitted. Respondents noted that the income restriction was a particular problem in areas such as Hawaii, where property income is relatively high, but where many threatened and endangered species could benefit from conservation actions.
Several respondents from Hawaii explained that the income requirement excludes potential participants who own a majority of the threatened and endangered species habitat found on private property in Hawaii. One respondent told us that he was willing to consider establishing conservation practices that would help protect an endangered plant and other species, but he is ineligible to receive financial assistance to do so because of the adjusted gross income limit. Similarly, respondents expressed concern about CSP’s eligibility requirements that limit participation to selected watersheds. According to one respondent, the number of new watersheds expected to be funded through CSP for fiscal year 2006 was 110, but the number actually funded was 60. This reduction was a result of a lack of available funding. Therefore, some landowners who might be interested in implementing CSP conservation practices may not reside in a watershed eligible for funding. A respondent from Washington said that even landowners in an eligible watershed may still not be eligible to receive funds because the program uses an inappropriate soil conditioning index criterion to select projects. The criterion is based on Midwest soil types rather than desert soils such as those found in Washington and other states in the West. A respondent in Illinois noted that CSP also prevents farmers who rent lands for production for short periods of time from participating, because the program requires farmers to control enrolled land for the life of the contract. Potential for Participation to Hinder Current or Future Agricultural Production The potential for participation in USDA programs to limit current or future agricultural production was among the most frequently cited disincentives for three of the six programs we reviewed—CRP, EQIP, and WRP. For example, some respondents said that promoting wildlife may result in crop damage, as some animals such as deer or geese may eat crops.
Because of this crop damage, some landowners may view such wildlife as pests. Furthermore, a respondent from Pennsylvania described how taking lands out of production can result in noxious weeds invading the area. These weeds are difficult to eradicate and can also spread to and infest other productive lands. Suggestions for Addressing Disincentives to Participating in Programs to Benefit Threatened and Endangered Species Survey respondents most frequently suggested increasing funding, improving education and outreach to landowners, streamlining paperwork requirements, and allowing greater flexibility in program participation and eligibility requirements to address disincentives and encourage greater participation in the six USDA conservation programs we reviewed for the benefit of threatened and endangered species and their habitats. Respondents, however, also noted that while some of these suggestions might increase participation in the programs, they would not necessarily benefit threatened and endangered species. Increasing Funding for Programs and Landowners Increasing funding—for both programs in general and the amounts paid to individual landowners specifically—was the most frequently mentioned suggestion for encouraging participation in USDA’s conservation programs for four of the programs we reviewed—CRP, EQIP, GRP, and WRP; it was the second most frequently identified suggestion for CSP and WHIP. A majority of respondents agreed that increasing the overall investment in the programs could greatly or very greatly help threatened and endangered species. For example, increasing GRP’s budget was mentioned by some respondents as a way to include more applicants in the program, thereby increasing the number of acres enrolled and thus increasing benefits to species that depend on grassland ecosystems. One USDA official explained that if he could pick one program to put additional money into, it would be GRP, in part because of its untapped potential.
Similarly, a USDA official in Iowa suggested the need to increase CSP’s overall budget because the program generally only has enough money to fund the highest-ranking applicants and, in Iowa, these tend not to be those landowners who include practices to benefit threatened and endangered species in their applications. According to this official, most of the highest-ranking applications are for projects proposed on cropped farmlands, where there is less opportunity to benefit threatened or endangered species. Likewise, respondents suggested increasing WHIP’s budget to allow more high-quality applications to receive funding, particularly given that the program’s primary purpose is to benefit wildlife. Respondents also frequently recommended increasing the amount of payments offered to individual program participants. For CRP, respondents specifically suggested increasing the rates of annual rental payments associated with the program since this, in part, would help make setting land aside competitive with other agricultural uses of the land. Further, one USDA official in Massachusetts suggested tailoring the amount of rental payments to specific areas within states and counties in order to better match the payments with local land values. Under EQIP, respondents frequently suggested increasing the cost-share percentage available for projects. Respondents explained that raising the cost-share amount borne by the federal government could help encourage landowners to implement projects that benefit threatened and endangered species since those typically do not provide long-term financial returns. Some respondents recommended putting additional funding into practices that provide direct benefits to threatened and endangered species, such as providing a greater cost-share percentage under EQIP for certain species-friendly practices—as is done, for example, in Hawaii—or raising the rental rate for CRP for those acres that will directly benefit imperiled species.
A similar suggestion, made by a respondent in Minnesota, was to provide more funding under GRP to those landowners whose land includes habitat that is essential for threatened and endangered species. Some of the FWS officials we interviewed suggested that USDA could target its funding allocations within programs based on geographic areas determined to be of high priority for threatened, endangered, and other at-risk species. As one soil and water conservation district official in Iowa explained, people would look into helping threatened and endangered species more if they knew they could get money for doing so.

Improving Education and Outreach to Landowners

Respondents identified improving education and outreach to landowners as a way to encourage greater participation for the benefit of threatened and endangered species most frequently for CSP and WHIP; it was the second most frequently mentioned solution for the other four programs we reviewed. Respondents recommended actions including building trust and developing personal relationships between landowners and agency staff, doing more to advertise the programs, and focusing education on the benefits of helping threatened and endangered species and other wildlife and the specifics on how to accomplish this. One soil and water conservation district official suggested targeting outreach efforts to younger farmers. Some USDA officials we interviewed in Texas noted that, in some areas, agricultural land is starting to change hands to younger farmers and, in particular, to owners who do not depend on agricultural production for income. These officials said that some of these new landowners are more oriented to using their land for recreational purposes and are more amenable to taking steps to help threatened, endangered, and other at-risk species. Respondents indicated that improving education and conducting more outreach to landowners could address a number of different disincentives.
First, educating landowners about the regulatory consequences of providing habitat for threatened and endangered species is one way to assuage fears about regulation under the Endangered Species Act. One soil and water conservation district official in Colorado said he reassures people that providing habitat “is a good thing” and that they will not be punished for it; a USDA official in Ohio said the majority of landowners with fears about the act are reassured after learning more about how the law is implemented. A USDA official in Oklahoma explained that NRCS needs to educate landowners so they see at-risk species, like black-tailed prairie dogs, not just as pests, but instead as opportunities for them to benefit from participating in WHIP. Second, one respondent explained that educating people during the application process as to their chances of receiving funding for a competitive program like EQIP can help adjust their expectations and reduce the frustration of not receiving funding. Third, taking the time to educate people about the necessities of some of the paperwork requirements may help them better understand, even though they may still dislike, the bureaucratic process, according to some respondents. For example, a soil and water conservation district official in Oregon suggested the need to explain that paperwork requirements related to threatened and endangered species are often part of a system of checks and balances that are in place for a reason. Finally, one USDA official explained that telling people the reasons why certain conservation practices were developed under WHIP may help overcome some landowners’ perception that the strict requirements regarding how practices are to be installed are a disincentive to participating.

Streamlining Paperwork Requirements

Streamlining the amount of paperwork associated with the programs was one of the most frequently suggested ways of encouraging greater landowner participation in CSP, EQIP, and WRP.
Respondents’ suggestions focused on the need to simplify the application and permitting processes. Respondents suggested simplifying the application process by reducing both the volume of paperwork and the processing time for each application. Specifically, a landowner in Missouri suggested creating only one set of paperwork to apply for multiple programs, while a soil and water conservation district official in Washington proposed linking forms so information needs to be entered only once and can be carried forward automatically where needed. Respondents also suggested making the permitting process less time-consuming by, for example, allowing Endangered Species Act consultations and other environmental assessments to be performed jointly for more than one project, eliminating the need to do separate assessments for each individual project. Reducing the programs’ paperwork requirements, according to a USDA official in California, would allow NRCS staff to spend more time in the field with landowners instead of processing paperwork in the office.

Allowing Greater Flexibility in Participation and Eligibility Requirements

More flexibility in participation and eligibility requirements was also among the most frequently mentioned suggestions for encouraging participation in USDA conservation programs under CRP, EQIP, and WRP. For CRP and WRP specifically, respondents frequently mentioned making the programs’ rules governing participation less prescriptive or strict. Respondents indicated that these programs contain restrictions on the amount of agricultural production that can take place on enrolled lands, and that allowing more production could entice landowners to participate, while not significantly detracting from the conservation purposes of the programs. For example, a USDA official in Montana suggested that allowing for some limited grazing in CRP might help persuade landowners who otherwise were turned off by the 10-year minimum length of the required contract.
In addition, respondents suggested allowing variable widths for buffers along streams under CRP rather than setting a standard width, and allowing a producer to implement additional management practices beyond what is allowed in their program contract. For example, according to one USDA official, the enhancement program under CRP in Pennsylvania allows mowing to control weeds only during the first three years of a 10-year contract; allowing additional mowing each year before or after the mating season for ground-nesting birds would better help these species. For EQIP, respondents frequently suggested allowing greater flexibility in eligibility requirements for potential participants. Respondents recommended allowing landowners who are not agricultural producers—such as hobby farmers or people living on large parcels of land—to qualify for participation in the program; such landowners can receive funds under WHIP. As one soil and water conservation district official explained, it should not matter who owns the land if the goal is to install projects that benefit threatened and endangered species. Other suggestions included allowing multiple landowners to apply together on one EQIP application, thereby ensuring coordinated management of adjacent lands—an action that would ultimately protect the threatened and endangered species in the area—and creating an exemption to the adjusted gross income requirement for landowners in Hawaii. This potential exemption was suggested because there are so many lands in the state with valuable habitat that are part of large ranches that do not meet the income eligibility requirement. According to one respondent in Hawaii, allowing the large landowners on Maui to participate in USDA conservation programs, for example, would greatly benefit threatened and endangered species.
He said that the two largest private landowners alone could help protect several thousand acres of habitat for these species as their land is adjacent to already-protected habitat, including Haleakala National Park.

Implementing Suggestions Has Potential Limitations for Threatened and Endangered Species

Some respondents noted that while implementing the suggestions might entice more people to participate in the programs and address disincentives that were identified, doing so would not necessarily benefit threatened and endangered species in all cases. For example, according to some respondents, allowing for more management or variable buffer widths under CRP may increase participation in that program because it would address landowner resistance to the current rules; however, according to other respondents, such an action may ultimately be to the detriment of any threatened, endangered, or other at-risk species that depend on certain conditions in these areas. Similarly, a few respondents noted that reducing the paperwork requirements for CSP may result in the loss of exactly the kind of information NRCS needs to document good conservation—including benefits to threatened and endangered species—for participation in the program. While only 5 of the 18 FWS officials we interviewed felt that USDA programs in their current forms provide great to very great benefits to threatened and endangered species, many stated that the programs have a lot of potential to benefit these species. FWS officials offered some specific suggestions to orient USDA’s programs more toward protecting threatened and endangered species. Some FWS officials suggested committing a certain percentage of programs’ budgets to projects benefiting these species, while others recommended targeting USDA spending to specific geographic areas that have high-priority species and habitat needs.
Agency Coordination to Benefit Threatened and Endangered Species Occurs Primarily at State and Local Levels and Agency Officials Cited Staff Motivation as Key to Successful Coordination

USDA and FWS officials stated that coordination of their conservation efforts to benefit threatened and endangered species most often occurs at their field offices at the state and local level and cited personal motivation as a key factor in successful collaborative efforts. However, agency officials acknowledged that the quality of working relationships and the frequency of coordination between USDA and FWS staff varies by location. To improve working relationships and coordination, USDA initiated work on a memorandum of understanding that, among other things, establishes a formal framework for coordination. Although the draft memorandum is a positive step in improving coordination, it currently lacks mechanisms to monitor and report on implementation efforts to help ensure that coordination occurs and is sustained. It also does not include FSA, even though that agency administers the USDA conservation program that can affect the most agricultural land—the Conservation Reserve Program.

Agency Survey Respondents and Other USDA and FWS Officials Stated That Coordination to Benefit Threatened and Endangered Species Occurs Primarily at Their Field Offices at the State and Local Level

USDA and FWS officials told us that while coordination between agencies occurs at all levels—headquarters, regional, state, and local—the majority of the work takes place at their field offices at the state and local level in the day-to-day implementation of their programs.
Coordination generally involves FWS field office officials providing USDA staff in state and local offices with information about species and habitat needs relevant to conservation program decisions, while NRCS officials, who are often soil scientists and civil engineers, provide surveying and engineering expertise to FWS staff on the design and construction of specific conservation projects. Some NRCS officials told us that they routinely include FWS biologists in the onsite evaluations they conduct of WRP applications. For example, in Oklahoma, an FWS biologist serves on NRCS’s wetland review team with NRCS and state agency officials, making site visits and ranking applications. FWS biologists assist USDA staff with ranking the biological value of WRP applications and, for those applications that are approved, commenting on the types of vegetation and level of restoration that should be implemented to benefit at-risk species. In some cases, USDA and FWS may also jointly fund projects, although there are some restrictions on how funds from different federal programs may be combined. Officials told us that working together to secure funds from multiple programs across agencies can be particularly helpful to landowners who otherwise would not have been able to undertake a conservation project if they received funds from just one program. For example, NRCS and FWS jointly funded a riparian restoration project to improve habitat for the endangered shiner minnow in Calhoun County, Iowa. NRCS provided funds through WHIP for excavation work along the stream bank, as well as the purchase of stone for stream bank stabilization. FWS funds covered all structural costs associated with the project, including the installation of stone barriers within the stream. The joint financial contributions by both agencies helped to significantly lower the total project cost to the landowner.
The agencies have also worked together to help streamline the consultation requirements of the Endangered Species Act. Under the act and its implementing regulations, NRCS must consult with FWS on each conservation project it funds that may affect a threatened or endangered species to ensure the projects are not likely to jeopardize the continued existence of the species or adversely modify designated critical habitat. We have previously reported that agency officials and private entities that must go through this process complain that it is time-consuming and frustrating; some agency officials reiterated those concerns during this review. To address such concerns, FWS works with agencies to develop programmatic consultations that set forth parameters or guidelines for how specific actions might be conducted in order to avoid adverse effects to species and their habitats. If such guidance is followed, the subsequent consultation should presumably go more quickly. In Florida, for example, the FWS field office developed a programmatic consultation for conservation actions that NRCS commonly uses, such as controlled burning and mowing, activities that might harm the threatened eastern indigo snake. In developing the programmatic consultation, FWS and NRCS reached agreement on the best management practices to be used when implementing the conservation actions in order to avoid harming the snake or its habitat. According to NRCS and FWS officials, programmatic consultations can dramatically reduce the amount of time spent consulting with FWS on projects. USDA and FWS also collaborate on broader conservation projects involving other government agencies and nongovernmental organizations. These collaborations include:

State and local agency initiatives. USDA and FWS work together with state and local agencies on conservation initiatives.
For example, in an effort to address the loss of wetlands, officials in Kane County, Illinois, requested assistance from NRCS and FWS. Based on maps of groundwater recharge areas and extensive soil and topographic surveys from NRCS, together with information about the plant and animal communities relying on the wetlands in the county from FWS, the agencies assisted county officials in identifying wetlands that were in most need of protection. Their actions, according to an NRCS official, also contributed to improving water quality, educating the local public on the importance of protecting wetlands, and helping the county’s forestry division identify potential lands for public ownership.

NRCS State Technical Committees. NRCS established these committees in every state to assist in making technical recommendations on issues relating to the implementation of natural resource conservation activities and programs. Committee members include representatives from NRCS, FSA, FWS, and other federal agencies; state agriculture and wildlife agencies; nongovernmental organizations; and private landowners. Recommendations are made by the committee for consideration by the implementing USDA program agency. Survey respondents and other officials told us that committee work and discussions among members can identify opportunities to coordinate on specific projects to benefit threatened and endangered species. For example, discussions among committee members in Ohio led to FWS working on a CRP project—and making recommendations to modify the implementation of the project—that improved the possibility of providing habitat for the threatened copperbelly water snake. FWS and FSA officials worked together with the landowners to incorporate the modifications into the project.

Habitat Joint Ventures. Habitat joint ventures were established in the late 1980s to help implement the North American Waterfowl Management Plan.
Their purpose is to restore, protect, and enhance waterfowl habitat on a regional scale throughout North America; there are 11 habitat joint ventures in the United States. Each joint venture is composed of numerous public and private entities. A key aspect of these joint ventures is to identify funding sources for needed conservation and to prioritize projects to receive that funding. USDA and FWS are members on these joint ventures and provide technical and financial assistance to implement projects to restore and enhance habitat and protect waterfowl. While the primary purpose of the joint ventures is waterfowl conservation, habitat important for waterfowl is also often important for threatened and endangered species.

At the national level, USDA and FWS coordinate on developing program regulations, policy, and training. For example, the agencies have recently begun joint training sessions on the consultation process required by the Endangered Species Act. The training is ultimately expected to be offered to local USDA staff in an effort to help them better understand and navigate the consultation process. Officials noted that such sessions also help FWS staff to better understand USDA’s programs and become more familiar with USDA staff. Additionally, the agencies have worked together at the national level to develop the criteria used in evaluating and ranking proposed CRP projects. These projects are assessed, among other things, on their expected environmental benefits to soil resources, water quality, and wildlife habitat. Officials in headquarters offices have also worked together in developing conservation practices and standards for USDA and FWS conservation programs.
While survey respondents provided many examples of successful coordination between USDA and FWS for the benefit of threatened, endangered, and other at-risk species, they also indicated that the level of coordination that occurs at the local office level varies considerably—ranging from extremely good to not good at all. We also found this to be the case during interviews with agency officials. For example, several USDA officials stated that they work closely with FWS in implementing conservation programs, such as WRP and CRP, and often share information concerning threatened and endangered species. However, other officials we interviewed said that coordination between USDA and FWS was limited or generally poor, occurring only in certain situations, such as when construction is involved on a project. Similarly, several USDA officials stated that they coordinate with FWS principally on state conservation plans or through e-mail when necessary. Still, some agency officials we interviewed noted that despite past problems between USDA and FWS, coordination is improving.

Survey Respondents and Other Agency Officials Cited Staff Motivation as a Leading Factor in Successful Coordination

USDA survey respondents and FWS officials we interviewed most often stated that the personal motivation of staff was a leading factor in successful collaboration between USDA and FWS. Specifically, officials noted that individuals who possessed a strong commitment to coordinate, had good interpersonal skills, and demonstrated a willingness to work with others were often the driving force behind successful collaborative efforts. For example, one USDA survey respondent reported that it was the personal attitude of the FWS official working with USDA that made the difference in helping to establish habitat for the threatened copperbelly water snake in Ohio.
His positive attitude in working with USDA staff, commitment in attending meetings, and willingness to actively participate all contributed significantly to the success of their collaboration. Similarly, an FWS respondent noted that the people skills and collaborative attitude of NRCS and FWS staff were linchpins in completing a watershed project on the upper Little Red River in Arkansas, a project that improved habitat for a listed species of mussel and a candidate species of fish. Commonly shared goals and management support and direction for collaboration were other important factors contributing to successful collaboration highlighted in our survey and in interviews with agency officials. For example, FWS officials reported that successful coordination in Montana has resulted largely from direction provided by the NRCS state conservationist, who put an emphasis on threatened, endangered, and other at-risk species for EQIP and WHIP and makes funding decisions for these programs at the state level (as opposed to the county level as done in other states). Trust was another important factor cited. Unfortunately, trust between agencies is not something that can be dictated by management; it takes time to develop. Learning about other agencies’ programs and becoming familiar with counterparts at other agencies are important components of this process. In some cases, this process has been expedited by having staff from one agency collocated at another agency’s offices. For example, in Colorado, two FWS officials are located at NRCS offices in the state to help address threatened and endangered species and other wildlife issues. Similarly, in Texas, an official from the Texas Parks and Wildlife Department is collocated with the NRCS state office. According to Texas officials, this close contact has been very beneficial to promoting a better understanding of each agency’s respective programs and how they can work together.
USDA and FWS Are Working to Improve Coordination Efforts through a Memorandum of Understanding for At-Risk Species; however, the Memorandum Lacks Key Elements

NRCS has drafted a memorandum of understanding with FWS and AFWA to establish and maintain a framework of cooperation to proactively conserve at-risk plant and animal species and their habitats. Initial efforts on the memorandum began in January 2005, under the direction of the Chief of NRCS, with the aim of developing a mechanism that would allow the agency to better utilize its programs to address the needs of declining species. Currently, the draft memorandum states that its purpose is to strengthen cooperation among NRCS, FWS, and AFWA to proactively conserve at-risk plant and animal species and their habitats. The memorandum also states that it is the intent of NRCS, FWS, and AFWA to identify and create more opportunities to work together to preempt the need to list additional species under the Endangered Species Act, foster the recovery of species already listed, and address similar needs for species that are of conservation concern to states. Under the draft memorandum, NRCS, FWS, and AFWA would be responsible for taking individual and joint actions to more effectively meet their obligations and priorities for conserving at-risk species and their habitats. The draft memorandum stresses the importance of federal and state fish and wildlife agencies participating on USDA’s state technical committees. Additionally, the draft memorandum directs NRCS to provide information to FWS and state fish and wildlife agencies about NRCS-administered programs that could assist them in meeting species’ needs. These actions and others in the draft memorandum focus on sharing information about species and habitat needs and where conservation program funds might be available to address these needs.
Moreover, the draft memorandum addresses actions between NRCS and FWS to streamline regulatory processes, such as the Endangered Species Act consultation process. To help evaluate the effectiveness of the memorandum of understanding, the draft document states that NRCS, FWS, and AFWA will develop protocols for gathering data for reporting and assessing the effectiveness of conservation efforts for at-risk species and their habitats; however, the memorandum does not include any specific monitoring or reporting responsibilities. In addition, the draft memorandum does not include FSA even though CRP enrolls nearly 36 million acres of land each year. NRCS officials told us that FSA was not included in the drafting of the memorandum because adding another entity would have slowed down the development and review process. NRCS and FSA officials said they saw no reason why FSA could not be added to the agreement in the future. While intrinsically valuable, interagency coordination is not always easy. Each agency has its own unique mission and program priorities, regulations, and organizational culture. Sometimes coordinating within an individual agency can be challenging as well. Based on literature reviews, expert interviews, and reviews of numerous coordination efforts among agencies, in an October 2005 report, we identified eight practices that help enhance and sustain collaboration. Among the practices highlighted in the report were the need to define and articulate a common outcome; identify and address needs by leveraging resources; agree on roles and responsibilities; and develop mechanisms to monitor, evaluate, and report on the results of collaborative efforts. In the report, we pointed out that federal agencies engaging in collaborative efforts need to create the means to monitor and evaluate their efforts to enable them to identify areas for improvement. 
We found that reporting on these activities can provide key decision makers within the agencies, as well as clients and stakeholders, important feedback that they can use to improve both policy and operational effectiveness. We recognize that the memorandum of understanding is still in draft form and believe that once finalized, it could contribute to better coordination for threatened, endangered, and other at-risk species. In fact, the draft memorandum embraces many of the actions that survey respondents highlighted as examples of successful coordination, such as using state technical committees to better implement on-the-ground conservation, sharing information, and leveraging resources. The draft memorandum also contains some of the elements that we have previously identified as being important to successful collaborative efforts. For example, the draft memorandum articulates a common outcome, defines roles and responsibilities, and discusses the need to share information in order to leverage resources as well as develop protocols to produce comparable data for reporting on and assessing their efforts. However, the draft document does not have monitoring and reporting mechanisms for ensuring that coordination takes place, including who will be responsible for monitoring and reporting, and the time frames for doing so. Without such elements, NRCS, FWS, and AFWA cannot be assured that a goal of the draft memorandum—improved coordination for the benefit of threatened, endangered, and other at-risk species—will be achieved. In particular, given that we found that successful coordination between USDA and FWS is largely driven by staff motivation, without follow-up to monitor and report on implementation status, efforts pursuant to the draft memorandum may simply maintain the status quo—those who want to coordinate will coordinate, and others will not. Furthermore, FSA is not a partner to the draft memorandum.
With nearly $1.9 billion in conservation investments and about 36 million enrolled acres, CRP—under FSA’s administration—has the potential to provide significant benefits to imperiled species.

Conclusions

The extent to which viable habitat for threatened, endangered, and other at-risk species can be established on private lands is certain to be the subject of ongoing debate within the environmental and agricultural communities and in the Congress. Because the majority of land in the United States is privately owned, programs that encourage private landowners to implement conservation actions on their lands are critical to protecting imperiled species. USDA’s conservation programs provide billions of dollars annually to agricultural producers and others for taking steps to address a myriad of environmental and natural resource concerns, including restoring wildlife habitat. As Congress and federal agencies consider legislative and programmatic alternatives to better address at-risk species, it is essential that we understand the factors that might motivate a private landowner to choose to participate in conservation programs to benefit imperiled species. While financial incentives weigh heavily in a landowner’s decision, other factors such as fears about regulatory and paperwork burdens also play a role. Taking steps to increase landowner participation in USDA programs, however, must be complemented by efforts to ensure that the intended benefits to species are meaningful. Moreover, improving coordination between USDA and FWS—the nation’s experts on conserving natural resources and threatened and endangered species—should help ensure that conservation program investment decisions provide as much benefit as possible to threatened, endangered, and other at-risk species and their habitats.
While the draft memorandum of understanding between the two agencies is an important step toward improving coordination, without monitoring and reporting mechanisms, NRCS and FWS lack important tools for ensuring the effectiveness and sustainability of their collaborative efforts. Furthermore, the draft memorandum omits FSA, a key agency that administers CRP, the largest conservation program in the United States—and thus fails to capitalize on an opportunity to coordinate investments from this $2 billion program to better address at-risk species and their habitats.

Recommendations for Executive Action

To enhance and sustain coordination at USDA’s and FWS’s field offices at the state and local level for the benefit of threatened, endangered, and other at-risk species, we recommend that the Secretaries of Agriculture and of the Interior:

direct the Chief of NRCS and the Director of FWS to work with AFWA to incorporate monitoring and reporting mechanisms in their memorandum of understanding prior to finalizing it for implementation; and

direct the Chief of NRCS, the Administrator of FSA, and the Director of FWS, in cooperation with AFWA, to include FSA as an additional partner to the memorandum or develop a separate memorandum of understanding to address coordination.

Agency Comments and Our Evaluation

We provided a draft of this report to the Departments of the Interior and Agriculture for review and comment. Interior provided written comments (see app. II) and USDA provided oral comments. The departments generally agreed with our findings and recommendations. However, the Department of the Interior suggested that we direct our recommendations to NRCS instead of NRCS and FWS together, because our report specifically addresses USDA conservation programs and because NRCS is the lead agency in the memorandum of understanding.
While we understand Interior’s position, the existing program management arrangement set forth in the draft memorandum of understanding makes it necessary to address our recommendations to both agencies. Specifically, although NRCS initiated development of the draft memorandum, the document does not specify that NRCS is the lead agency for preparing and implementing it. Rather, USDA, FWS, and AFWA appear as co-equal parties to the memorandum. The Department of the Interior also suggested that both recommendations should recognize AFWA as a partner to the memorandum of understanding. We agree and have modified the recommendations to direct the federal agencies to work with AFWA to implement our recommendations. With respect to our second recommendation, Interior suggested allowing the agencies the option of developing a separate memorandum for addressing coordination with FSA. We have modified our recommendation to reflect this suggestion. The departments also provided technical comments that we have incorporated into the report, as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretaries of Agriculture and the Interior and other interested parties. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please call me at (202) 512-3841 or nazzaror@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributors to this report are listed in appendix IX. 
Appendix I: Objectives, Scope, and Methodology The objectives of our study were to identify (1) stakeholder views on the incentives and disincentives for landowners to benefit threatened and endangered species and their habitats through participation in U.S. Department of Agriculture (USDA) conservation programs as well as suggestions for addressing disincentives to program participation, and (2) how USDA and the U.S. Fish and Wildlife Service (FWS) are coordinating their programs for the benefit of threatened and endangered species and their habitats and the factors that agency officials believe have contributed to successful coordination. Incentives, Disincentives, and Suggestions To identify incentives, disincentives, and suggestions to address the disincentives for participating in USDA conservation programs, we reviewed the statutes, regulations, and policies for the programs as well as other independent reviews of them. We also interviewed USDA headquarters officials to obtain information on how these programs were implemented at the national, state, and local levels. In addition, we conducted site visits in California, including Yolo and Merced counties, and Texas, including San Saba and Travis counties, to discuss state and local level implementation of the programs and to observe on-the-ground implementation of select conservation projects. We also conducted telephone surveys with USDA and soil and water conservation district officials, and private landowners. Telephone Surveys We conducted telephone surveys with a nonprobability sample of 157 USDA officials, soil and water conservation district officials, and landowners from 19 states (Arkansas, California, Colorado, Florida, Georgia, Hawaii, Illinois, Iowa, Massachusetts, Minnesota, Missouri, Montana, Nebraska, New Mexico, Ohio, Oklahoma, Oregon, Pennsylvania, and Washington). 
We selected these states based on three criteria: (1) high levels of USDA conservation program allocations for the programs we reviewed, (2) high or moderate numbers of threatened and endangered species relative to other states, and (3) diversity of geographic location. Within these states, we selected at least two counties—in some cases as many as four—that had high levels of USDA conservation program obligations and had significant threatened and endangered species occurrences and diversity in comparison with other counties in the state. We surveyed officials in 49 counties across the 19 states. In the different states, we surveyed (1) the state biologist or the state conservationist in USDA’s Natural Resources Conservation Service (NRCS), who are responsible for helping to implement or administer many of the department’s conservation programs and (2) the executive director or another state-level official in USDA’s Farm Service Agency (FSA), which administers USDA’s largest conservation program. In the different counties we selected, we surveyed (1) the NRCS district conservationist, the lead official for administering the agency’s programs at the county level; (2) soil and water conservation district officials, who work with USDA to increase voluntary conservation practices among landowners; and (3) private landowners. The NRCS district conservationists identified an initial list of landowners. We selected a nonprobability sample of landowners from this list using criteria to include landowners who participate in the USDA conservation programs as well as those who were eligible to participate but chose not to do so, and to reflect geographic diversity across the 19 states. In total, we interviewed 71 NRCS officials, 18 FSA officials, 44 soil and water conservation district officials, and 24 landowners. In some cases, soil and water conservation district officials were also landowners, and they responded to our questions from both perspectives. 
We conducted seven pretests with officials in headquarters and the field and with one landowner. After each pretest, we conducted an interview to determine whether (1) the survey questions were clear, (2) the terms used were precise, (3) the questionnaire placed an undue burden on the respondents, and (4) the questions were unbiased. On the basis of the pretests, we made appropriate revisions to the survey. Through our telephone survey, we gathered participants’ opinions about the primary incentives, disincentives, and suggestions to address the disincentives for landowners to participate in seven USDA conservation programs for the benefit of threatened and endangered species. We asked interviewees to identify the USDA conservation programs they had knowledge of, and only asked them questions relevant to those programs. The survey also included questions specifically for landowners regarding their participation in the conservation programs. The survey asked a combination of questions that allowed for open-ended and close-ended responses. To analyze the open-ended material, we developed clear protocols for coding the content into categories. The material was independently coded by one individual and then verified by another individual. We initially selected seven conservation programs to include in our review, based on the amount of dollars obligated to these programs and the extent to which they might offer benefits to threatened and endangered species. These were the Conservation Reserve Program, Conservation Security Program, Environmental Quality Incentives Program, Farm and Ranch Lands Protection Program, Grassland Reserve Program, Wildlife Habitat Incentives Program, and Wetlands Reserve Program. USDA confirmed that these programs were appropriate given our objectives. We dropped the responses we collected with respect to the Farm and Ranch Lands Protection Program from our analysis because most respondents were unfamiliar with the program. 
Coordination To determine how USDA and FWS are coordinating for the benefit of threatened and endangered species and their habitats, and the factors that contributed to successful examples of such efforts, we included survey questions on coordination between the two agencies, which we posed to USDA officials as well as to 18 FWS officials in state and regional offices in our 19-state nonprobability sample. We asked the USDA and FWS officials to comment on the quality of coordination between the agencies at varying levels of government; to provide examples of good coordination for the benefit of threatened and endangered species in their area; and to identify the factors they believed contributed to successful coordination. In addition, we interviewed FWS and USDA officials at each agency’s headquarters in Washington, D.C., about formal coordination efforts between the agencies to benefit threatened and endangered species. We also used our site visits in California and Texas to discuss these issues with USDA and FWS officials as well as to meet with officials from state fish and wildlife agencies. We performed our work between November 2005 and October 2006 in accordance with generally accepted government auditing standards. Appendix II: Comments from the Department of the Interior Appendix III: Conservation Reserve Program The Conservation Reserve Program (CRP) is one of USDA’s largest and most ambitious conservation efforts, with approximately 36 million acres enrolled and annual payments totaling nearly $1.8 billion through June of 2006. Administered by USDA’s Farm Service Agency (FSA), CRP was established by the Food Security Act of 1985 and currently operates in all 50 states. The purpose of CRP is to provide financial incentives to landowners to conserve and improve soil, water, air, and wildlife resources by converting land in agricultural production to less intensive uses. 
Program participants agree to adopt a variety of approved conservation practices such as installing structures, planting vegetation, or implementing management techniques. The Conservation Reserve Enhancement Program (CREP) is a subprogram of CRP that is implemented on a state-by-state basis. Governors request that CREP be implemented in their state to address specific state and nationally significant agriculture-related environmental problems, and commit to providing a portion of the funds necessary to do so. Of foremost concern to CREP are issues relating to water supplies and areas around wells, wildlife species endangered by the loss of essential habitat, soil erosion, and reduced habitat for fish such as salmon. Eligibility In order to be eligible for CRP and CREP, a producer must have owned and operated the eligible land for at least 12 months prior to close of the CRP sign-up period; however, this requirement can be waived under certain conditions. In addition, the land must meet one of several criteria in order to achieve overall program goals, such as having a weighted average erosion index of eight or higher, or being located in a national or state CRP conservation priority area. Eligible land includes: cropland that is planted or considered planted to an agricultural commodity for four of the previous six crop years from 1996 to 2001, and is physically and legally capable of being planted in a normal manner to an agricultural commodity; certain marginal pastureland that is enrolled in the Water Bank Program or suitable for use as a riparian buffer or for similar water quality purposes; or currently enrolled CRP land nearing expiration of its contract. Application Process Farm owners and operators can apply and eventually enroll their land in CRP in two ways, through general or continuous sign-up. General sign-up generally occurs for a few weeks each year. 
For both general and continuous sign-up, applicants must appear at one of FSA’s 2,351 offices and formally enter into a CRP contract. The contract contains information on the participant (e.g., name, address, Social Security number, and phone number) and information on the conservation practices agreed to, the acreage enrolled, and the acreage committed to each practice. Continuous CRP sign-up, in contrast to general sign-up, is available at any time of year for owners who agree to adopt certain high-priority conservation practices. These practices include installation of filter strips, riparian buffers, grass waterways, shelterbelts, field windbreaks, living snow fences, salinity reducing vegetation, shallow water areas for wildlife, and wetland restoration. Continuous sign-up participants, like general sign-up participants, sign contracts and agree to certain stipulations in return for payments. Enrollment in CREP occurs on a continuous basis, permitting farmers and ranchers to join the program at any time rather than waiting for specific sign-up periods. Enrollment in each state is limited to specific geographic areas and practices. A CREP project begins when a state, Indian tribe, local government, or local nongovernmental entity identifies an agriculture-related environmental issue of state or national significance. These parties and FSA then develop a project proposal to address particular environmental issues and goals. CREP, therefore, is a partnership program among federal and state governments and other program participants, and USDA expects non-federal partners to provide commitments toward the overall cost of the program. 
Selection Process After applications are screened against program eligibility criteria, FSA program staff evaluates them using an environmental benefits index that weighs six factors: (1) wildlife habitat benefits; (2) water quality benefits from reduced erosion, runoff, and leaching; (3) on-farm benefits of reduced soil erosion; (4) enduring environmental benefits; (5) air-quality benefits from reduced wind erosion; and (6) cost. FSA officials at the national level identify an environmental benefit index score cutoff value to determine which applications to accept after analyzing and ranking all eligible offers. FSA strives to ensure that, by using the index, only the most environmentally sensitive lands are selected and that all offers are considered fairly and equitably. CRP is a competitive program; therefore, producers who may have met previous sign-up index cutoffs are not guaranteed a contract under future sign-ups. As previously noted, under continuous sign-up, all applicants that meet eligibility requirements are accepted, provided acreage limits are not exceeded. CREP applications are selected based on the extent to which they improve water quality, erosion control, and wildlife habitat related to agricultural use in specific geographic areas, where specific environmental concerns are of a high priority. CREP applications are submitted to USDA by the governor of a state that is involved in the application, after which USDA will convene an interagency panel to review the proposal. The comments of the panel are forwarded to the state for consideration in the development of a final proposal that is set forth in a memorandum of agreement between the governor and the Secretary of Agriculture. As of June 2006, there were 37 CREP agreements in effect in 29 states. Payments and Conditions CRP contracts generally require a 10- to 15-year commitment. 
By signing a contract, participants agree to apply specific conservation practices on their land, to file forms needed to determine limits on payments, and to perform certain management work. USDA and the participant agree on a conservation plan that describes the vegetative or water cover to be established, completion dates, and estimated environmental benefits. Agency officials primarily rely on data provided by participants to determine compliance with the agreement, but will also make occasional spot checks of the land. In return for implementing conservation practices, general CRP participants receive annual rental payments that average about $48 an acre (payments vary with prevailing local rental rates, not exceeding local dryland or non-irrigated rates). In addition, participants receive cost-share payments for up to one-half the cost of implementing approved conservation practices. Furthermore, maintenance incentive payments are available where an additional amount up to $5 per acre may be included with the annual rental payment to perform certain maintenance obligations. Additional incentives of up to 20 percent of the annual payment are available for certain continuous sign-up practices (defined below). Participants may also receive technical assistance from several entities, including USDA’s Natural Resources Conservation Service (NRCS), which provides technical land-eligibility determinations and advice on conservation planning and implementation techniques. Under continuous CRP, FSA will offer annual rental payments as well as financial incentives of up to 20 percent of the soil rental rate for specific conservation practices, and an additional 10 percent can be added for land located within EPA-designated wellhead protection areas. Continuous sign-up enrollees may also receive added up-front and annual financial incentives for participation. 
Incentive payments to encourage practices supported by continuous sign-up can include $100 to $150 an acre for selected practices (depending on contract length) and single payments of up to 40 percent of the cost of installing the practice (known as a practice incentive payment). Like CRP contracts, CREP contracts require a 10- to 15-year commitment to keep lands out of agricultural production. FSA uses CRP funding to pay a percentage of the program’s cost, while state or tribal governments or other non-federal sources provide the balance of the funds. States and private groups involved in the effort may also provide technical support and other in-kind services. A federal annual rental rate, including an FSA state committee-determined maintenance incentive payment, is offered, plus a cost-share of up to 50 percent of the eligible costs to install the practice. Participants may also obtain 20 percent annual bonus payments, above the rental payment, for installing certain high priority practices such as certain types of filter strips or riparian buffers. Furthermore, the program generally offers a sign-up incentive for participants to install specific practices. Summary of Selected Survey Responses The following responses for incentives, disincentives, and suggestions for addressing disincentives to participating in USDA conservation programs for the benefit of threatened and endangered species and their habitats are those that were most frequently identified for CRP by the officials and landowners we surveyed. These responses may differ slightly from those identified in the body of this report because, in the report, we only include the responses that were identified most frequently across the majority of the six programs we reviewed. 
The most frequently identified incentives for participation in CRP included: (1) financial benefits; (2) a personal interest in conservation; and (3) program criteria that give greater consideration to projects that directly address threatened and endangered, and other at-risk species. The most frequently identified disincentives for participation in CRP included: (1) limited funding for both the program and participants, (2) restrictive eligibility and participation requirements, and (3) fears about government regulations. Suggestions most frequently identified to address disincentives for CRP participation included: (1) increasing funding, (2) providing greater education and outreach, and (3) increasing flexibility in program eligibility and participation. Appendix IV: Conservation Security Program Introduction The Conservation Security Program (CSP) was first authorized in the Farm Security and Rural Investment Act of 2002 and is administered by USDA’s Natural Resources Conservation Service (NRCS). CSP is generally regarded as the most comprehensive green payments program developed in the United States, primarily because it promotes integrated, whole-farm planning for conservation. Similar to other USDA conservation programs, CSP provides financial and technical assistance to producers to promote conservation and the improvement of soil, water, air, energy, and plant and animal life on private and tribal agricultural lands. In contrast to the other programs, CSP provides assistance to farmers and ranchers who already meet specified standards of conservation and environmental management in their operations. CSP rewards three levels, or tiers, of conservation treatment for qualified producers who enter into CSP contracts with NRCS, and provides higher payments as landowners increase the level of conservation implemented on their lands. 
Although CSP is available only in selected watersheds in all 50 states, the intent is to implement the program in all watersheds by 2011. NRCS held the first CSP sign-up in fiscal year 2004, which led to contracts covering nearly 1.9 million acres in 18 watersheds across 22 states, and about $34.6 million in payments to landowners. In fiscal year 2005, over 9 million acres in 220 watersheds across all 50 states and Puerto Rico were covered, with payments totaling about $171.4 million (including payments for contracts approved in 2004). Eligibility CSP is available to farmers and ranchers who already meet specified standards of conservation and environmental management in their operations. To be eligible, landowners must meet several criteria including: (1) land must be private agricultural land, forested land that is an incidental part of an agricultural operation, or tribal land, with the majority of the agricultural operation located within a selected priority watershed; (2) the applicant must be in compliance with highly erodible land and wetlands provisions of the Food Security Act of 1985 and generally must have control of the land for the life of the contract; and (3) the applicant must share in the risk of producing any crop or livestock and be entitled to a share in the crop or livestock available for marketing from the operation. Lands that are enrolled in the Conservation Reserve Program, the Wetlands Reserve Program, or the Grasslands Reserve Program are not eligible for CSP. Application Process NRCS offers periodic sign-ups in specific, priority watersheds. The agency requires producers to complete a self-assessment, which includes a description of the conservation activities on their operations, to determine their eligibility for the program. Once NRCS determines eligibility, landowners meet with local NRCS staff to discuss their application. 
In addition to the self-assessment, applicants must submit completed program applications and two years of written documentation on their implementation of certain conservation actions, including fertilizer, nutrient, and pesticide application schedules, tillage, and grazing schedules, as applicable. Selection Process Soil quality practices include crop rotations, cover crops, tillage practices, prescribed grazing, and providing adequate wind barriers. Water quality practices include conservation tillage, filter strips, terraces, grassed waterways, managed access to water courses, nutrient and pesticide management, prescribed grazing, and irrigation water management. Tier II participants must address soil and water quality to a minimum level on the participant’s entire operation and must generally treat an additional resource concern by the end of the contract period. Tier III participants must have addressed all other applicable resource concerns, including wildlife habitat, to a minimum level on their entire agricultural operation prior to acceptance. Some state NRCS offices used targeted species assessment criteria, while others used general wildlife assessment criteria. According to an NRCS official, because habitat needs differ across the nation, it is not possible to develop one set of criteria that would work for the whole country and apply to all situations in determining which producers would qualify for a given tier level. Because of these differences, national guidance instructs each state to define its own minimum criteria for each of the listed wildlife resource components in the national guidance based upon the state’s own set of conditions. For example, for cropland, the national guidance identifies the amount of noncrop vegetative cover such as woodlots, wetlands, or riparian areas managed for wildlife as a component that must be addressed and instructs NRCS state offices to define the minimum percentage of noncrop vegetative cover. In addition to these tiers, NRCS establishes enrollment categories and subcategories. 
For the fiscal year 2005 sign-up, five enrollment categories were used for cropland, pasture, and rangeland. For example, for cropland, the enrollment categories were defined by various levels of soil conditioning index scores and the number of stewardship practices and activities in place on the farm for at least 2 years. If an enrollment category could not be fully funded, subcategories were used to determine application funding order within a category. For the fiscal year 2005 sign-up, 12 subcategories were used, including the factor of whether the agricultural operation is in a designated area for threatened and endangered species habitat. Payments and Conditions Each of the three CSP tiers has a specified annual payment limit and contract period. Tier I contracts are for 5 years and provide annual payments of up to $20,000. Tier II contracts are for 5 to 10 years and provide annual payments of up to $35,000. Tier III contracts are also for 5 to 10 years, but can provide annual payments of up to $45,000. These payments may consist of four components: (1) an annual stewardship component for the base level of conservation treatment required for program eligibility (a payment that is calculated separately for each land use based on eligible acres, the stewardship payment rate, and other factors), (2) an annual existing practice component for the maintenance of existing conservation practices (these are calculated as a flat rate of 25 percent of the stewardship payment), (3) a one-time new practice component for additional approved practices, and (4) an annual enhancement component for additional activities that provide increased resource benefits beyond the base level of conservation treatment that is required for program eligibility. 
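The way the four payment components and the tier caps described above combine can be sketched as a simple calculation. This is an illustrative sketch only, not NRCS’s actual payment formula: the component dollar amounts are hypothetical, and it assumes only that the existing-practice component is a flat 25 percent of the stewardship payment and that the total is limited by the tier’s annual payment cap.

```python
# Illustrative sketch of a CSP annual payment, based on the report's
# description of the four components and the tier payment limits.
# Component amounts passed in are hypothetical examples.

TIER_ANNUAL_LIMITS = {1: 20_000, 2: 35_000, 3: 45_000}  # annual caps by tier

def csp_annual_payment(tier, stewardship, new_practice=0.0, enhancement=0.0):
    """Sum the four payment components, capped at the tier's annual limit."""
    # Existing-practice component: flat rate of 25% of the stewardship payment.
    existing_practice = 0.25 * stewardship
    total = stewardship + existing_practice + new_practice + enhancement
    return min(total, TIER_ANNUAL_LIMITS[tier])

# Example: a Tier I participant with a $10,000 stewardship payment and
# $9,000 in enhancement payments would total $21,500 before the cap,
# and so would receive the $20,000 Tier I maximum.
```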
Currently under CSP, annual enhancement payments may be made for five types of activities: (1) the improvement of a significant resource concern to a condition that exceeds the requirement for the participant’s tier of participation and contract requirements; (2) an improvement in a priority local resource condition, as determined by NRCS, such as water quality or wildlife; (3) participation in an on-farm conservation research, demonstration, or pilot project; (4) cooperation with other producers to implement watershed or regional resource conservation plans that involve at least 75 percent of the producers in the targeted area; and (5) implementation of assessment and evaluation activities relating to practices included in the conservation security plan, such as gathering plant samples for specific analysis. Summary of Selected Survey Responses The following responses for incentives, disincentives, and suggestions for addressing disincentives to participating in USDA conservation programs for the benefit of threatened and endangered species and their habitats are those that were most frequently identified for CSP by the officials and landowners we surveyed. These responses may differ slightly from those identified in the body of this report because, in the report, we only include the responses that were identified most frequently across the majority of the six programs we reviewed. The most frequently identified incentives for participation in CSP included: (1) financial benefits, (2) recognition for good stewardship, and (3) a personal interest in conservation. The most frequently identified disincentives for participation in CSP included: (1) burdensome paperwork requirements, (2) restrictive eligibility and implementation requirements, (3) fears about government regulations, and (4) limited funding for both programs and participants. 
Suggestions most frequently identified to address disincentives for CSP participation included: (1) greater education and outreach, (2) increasing funding, and (3) streamlining processes. Appendix V: Environmental Quality Incentives Program Introduction The Environmental Quality Incentives Program (EQIP) is administered by USDA’s NRCS and provides technical and financial assistance to farmers and ranchers to address soil, water, air, and related natural resources concerns, and encourages enhancements on lands to be made in an environmentally beneficial and cost-effective manner. NRCS provides assistance to agricultural producers in a manner that promotes agricultural production and environmental quality as compatible goals, and assists participants in complying with federal and state environmental laws. The Federal Agriculture Improvement and Reform Act of 1996 first authorized EQIP, which was reauthorized and amended by the Farm Security and Rural Investment Act of 2002. EQIP generally focuses on five national priorities: promoting at-risk species habitat conservation; reducing non-point source pollution; conserving ground and surface water resources; reducing air emissions, such as particulate matter and nitrogen oxides; and reducing soil erosion and sedimentation. A locally-led process adapts the national priorities to address local resource concerns and identifies which conservation practices will be eligible for financial assistance in each state. NRCS state conservationists can delegate the authority to administer parts of the program to the local level—because of this, EQIP implementation can differ between states and even between counties. Participants receive cost-share and incentive payments under contracts that last at least one year after the practices have been implemented and at most 10 years. In fiscal year 2005, NRCS obligated more than $794 million in financial assistance to enter into more than 49,000 EQIP contracts. 
Despite the sizeable allocation, an additional 33,000 applications went unfunded that year. In fiscal year 2006, NRCS obligated an estimated $1 billion for EQIP. Eligibility EQIP is available in all 50 states. To be eligible, applicants must be engaged in livestock or agricultural production. State and local governments are not eligible for EQIP payments. Applicants must be in compliance with the highly erodible land and wetland conservation provisions of the Food Security Act of 1985, which aim to discourage farmers from producing crops on wetlands or highly erodible land without erosion protection, and their average adjusted gross income for the preceding three years must not exceed $2.5 million, in accordance with the Farm Security and Rural Investment Act of 2002. Lands that are eligible include those where agricultural commodities or livestock are produced, including cropland; rangeland; grassland; pastureland; private, non-industrial forestland; and other land determined to pose a serious threat to soil, air, water, or related resources. Lands that are already under a Conservation Reserve Program contract are not eligible for EQIP. Application Process Applicants may apply for EQIP through a continuous sign-up process by submitting applications to local USDA offices. The NRCS state conservationist or designee then works with the applicant to develop an EQIP plan of operations. Applications are evaluated periodically. Selection Process NRCS allocates funds from the national level to NRCS state offices based on national priorities. NRCS’s state and local offices then identify their own priority resource concerns and determine the funding allocation to be made from the state offices to local offices in each state. State and local NRCS offices select eligible conservation practices and create lists of their costs to address priority resource concerns, and then develop a ranking process to guide the selection and prioritization of applications. 
This locally-led process is guided by advice from the NRCS state technical committee and associated local working groups in each state. The NRCS state conservationist, or designated local conservationist, ranks each application using the locally-developed ranking process. When funds are allocated, the state conservationist or designated conservationist makes offers to those landowners whose applications ranked the highest. Payments and Conditions NRCS offers cost-share and incentive payments to participants in EQIP. Conservation practices that are eligible for cost-sharing are determined by NRCS with advice from state technical committees and local work groups, and may include installing filter strips, manure management facilities, caps on abandoned wells, and other activities. NRCS may provide up to 75 percent of the cost of implementing practices to program participants, and up to 90 percent for limited-resource and beginning farmers and ranchers. The specific cost-share rate for each practice is determined by NRCS with advice from state technical committees and local work groups. Incentive payments may be made to encourage a participant to perform certain land management practices that they might not otherwise implement, such as wildlife habitat or irrigation water management. Incentive payment rates and amounts are set by NRCS with advice from state technical committees and local work groups and may be provided for up to three years. Summary of Selected Survey Responses The following responses for incentives, disincentives, and suggestions for addressing disincentives to participating in USDA conservation programs for the benefit of threatened and endangered species and their habitats are those that were most frequently identified for EQIP by the officials and landowners we surveyed. 
These responses may differ slightly from those identified in the body of this report because, in the report, we only include the responses that were identified most frequently across the majority of the six programs we reviewed. The most frequently identified incentives for participation in EQIP included: (1) financial benefits; (2) program criteria that give greater consideration to projects that directly address threatened, endangered, and other at-risk species; (3) a landowner's personal interest in conservation; and (4) receiving technical assistance. The most frequently identified disincentives for participation in EQIP included: (1) limited funding for both the program and participants, (2) burdensome paperwork requirements, (3) fears about government regulations, (4) restrictive eligibility and participation requirements, and (5) that program implementation can hinder current or future agricultural production. Suggestions most frequently identified to address disincentives for EQIP participation included: (1) increasing funding, (2) providing greater education and outreach, (3) streamlining paperwork requirements, and (4) increasing flexibility in program eligibility and participation.

Appendix VI: Grassland Reserve Program

The Grassland Reserve Program (GRP) helps landowners and operators restore and protect grassland, including rangeland, pastureland, shrub land, and certain other lands, while maintaining some grazing uses by using a combination of easement, rental, and restoration agreements. GRP emphasizes support for working grazing operations; enhancing plant and animal biodiversity; and protecting grassland and land containing shrubs and forbs under threat of conversion to cropping, urban development, and other activities. GRP is administered by USDA's NRCS and FSA, in cooperation with the USDA's Forest Service.
GRP was first authorized by the Farm Security and Rural Investment Act of 2002 for up to $254 million through fiscal year 2007, and enrollment is capped at 2 million acres.

Eligibility

To be eligible for easement agreements under GRP, landowners must show clear title to the land, while both titled landowners and other operators, such as those who rent land for agricultural production, are eligible for rental and restoration agreements. However, other operators must provide evidence that they will have control of the property for the length of a contracted agreement and have landowner concurrence. Individuals or entities that have an average adjusted gross income exceeding $2.5 million for the three tax years immediately preceding the year the contract is approved are not eligible to receive program benefits or payments, except when 75 percent of the adjusted gross income is derived from farming, ranching, or forestry operations. To be eligible for a restoration agreement, the proposed land must be determined by NRCS, in consultation with the program participant, to need restoration actions and to meet program requirements. GRP is available only for privately owned or tribal lands, and participants generally must enroll at least 40 contiguous acres under an agreement. The types of land that are eligible for enrollment include grasslands; land that contains forbs (including improved rangeland and pastureland or shrub land); or land that is located in an area that historically has been dominated by grassland, forbs, or shrubs and that has the potential to serve as wildlife habitat of significant ecological value.

Application Process

Eligible landowners and operators may provide applications to either NRCS or FSA on a continuous sign-up basis. GRP offers several enrollment options: 30-year and permanent easements; 10-, 15-, 20-, or 30-year rental agreements; and cost-share restoration agreements, which may be used in conjunction with an easement or rental agreement.
Selection Process

Each state establishes ranking criteria to prioritize the enrollment of working grasslands. The ranking criteria consider threats of conversion, including cropping, invasive species, urban development, and other activities that threaten plant and animal diversity on grazing land.

Payments and Conditions

Under GRP contracts, participants voluntarily limit future use of enrolled land while retaining the right to conduct common grazing practices. Participants can produce hay, mow, or harvest for seed production (subject to certain restrictions during the nesting season of bird species that are in significant decline or those that are protected under federal or state law); conduct fire rehabilitation; and construct firebreaks and fences. GRP contracts and easements prohibit the production of crops (other than hay), fruit trees, and vineyards that require breaking the soil surface and any other activity that would disturb the surface of the land, except for appropriate land management activities included in a conservation plan. There are several types of payment arrangements under the program.

Permanent Easement. This easement applies to the enrolled land in perpetuity. Easement payments for this option equal the fair market value, less the grassland value of the land encumbered by the easement. These values are determined using an appraisal.

Thirty-year Easement. USDA provides an easement payment equal to 30 percent of the fair market value of the land, less the grassland value of the land encumbered by the easement.

Rental Agreement. Participants may choose a 10-, 15-, 20-, or 30-year contract. USDA provides annual payments in an amount that is not more than 75 percent of the grazing value of the land covered by the agreement for the life of the agreement.

Restoration Agreement. Restoration agreements are only authorized to be used under GRP in conjunction with easements and rental agreements provided under the program.
Participants are paid upon certification of the completion of the approved practice. The combined total cost-share provided by federal or state governments may not exceed 100 percent of the total actual cost of the restoration project.

Summary of Selected Survey Responses

The following responses for incentives, disincentives, and suggestions for addressing disincentives to participating in USDA conservation programs for the benefit of threatened and endangered species and their habitats are those that were most frequently identified for GRP by the officials and landowners we surveyed. These responses may differ slightly from those identified in the body of this report because, in the report, we only include the responses that were identified most frequently across the majority of the six programs we reviewed. The most frequently cited incentives for participation in GRP included: (1) financial benefits; (2) program criteria that give greater consideration to projects that directly address threatened and endangered, and other at-risk species; and (3) a personal interest in conservation. The most frequently cited disincentives for participation in GRP included: (1) limited funding for both the program and participants, (2) fears about government regulations, (3) restrictive eligibility and participation requirements, and (4) burdensome paperwork requirements. Suggestions most frequently identified to address disincentives for GRP participation included: (1) increasing funding and (2) providing greater education and outreach.

Appendix VII: Wetlands Reserve Program

Introduction

The Wetlands Reserve Program (WRP) is administered by USDA's NRCS and authorizes the agency to provide technical and financial assistance to eligible landowners to restore, enhance, and protect wetlands. WRP was first authorized under the Food, Agriculture, Conservation and Trade Act of 1990, and was later reauthorized and amended in the Farm Security and Rural Investment Act of 2002.
The program has an acreage enrollment limit rather than a funding limit. The 2002 act authorized up to 2,275,000 acres to be covered under WRP and, as of September 2004, over 7,800 projects on nearly 1.5 million acres were enrolled in the program. WRP is available in all 50 states and the District of Columbia.

Eligibility

To be eligible for WRP, land must be capable of restoring wetland functioning and be able to provide wildlife benefits. Eligible types of lands include farmed wetlands, riparian areas, lands adjacent to protected wetlands that contribute significantly to wetland functions and values, and previously restored wetlands that need long-term protection. Lands that are expressly ineligible for funding under WRP include lands converted to wetlands after December 23, 1985; lands with timber stands established under a Conservation Reserve Program contract; federal lands; and lands where conditions make restoration impossible. In general, to be eligible for funding under WRP, landowners must have owned the land for at least 12 months prior to enrolling it in the program (unless the land was inherited), exercised the landowner's right of redemption after foreclosure, or, if the land was purchased within 12 months of a WRP application, must have proven that the land was not obtained for the purpose of enrolling it in the program. Individuals or entities that have an average adjusted gross income exceeding $2.5 million for the three tax years immediately preceding the year a WRP contract is approved are not eligible to receive program benefits or payments under the program unless at least 75 percent of the adjusted gross income is derived from farming, ranching, or forestry operations.

Application Process

Landowners may file an application for a conservation easement or a cost-share restoration agreement with USDA under WRP at any time.
Applications can be filed in person at a USDA office or electronically, and applicants must have a copy of the easement deed and other forms necessary for the transfer of land rights. USDA carries out activities associated with recording the easement in the local land records office, including paying recording fees, charges for abstracts, survey and appraisal fees, and title insurance.

Selection Process

NRCS evaluates each application and makes site visits to assess a proposed project's technical and biological merits. The applications are ranked according to criteria based on broad national guidelines. NRCS state offices make decisions about which applications to accept. NRCS state conservationists have the authority to accept projects outside of this ranking process if they occur in "special project" areas, such as specific geographic areas that the state conservationist has identified. This enables NRCS to fund wetlands projects in areas that have been determined important for wetland restoration activities, regardless of individual application ranking scores.

Payment and Conditions

Under WRP contracts, participants voluntarily limit future use of enrolled land while retaining ownership. There are several types of payment arrangements under the program.

Permanent Easement. This is a conservation easement in perpetuity. Payments for permanent easements are made annually and are equal to whichever is lowest—the agricultural value of the land, an established payment cap, or an amount offered by the landowner. In addition to paying for the easement, USDA pays 100 percent of the costs of restoring wetland functioning.

30-Year Easement. Easement payments through this option are up to 75 percent of what would be paid for a permanent easement, including up to 75 percent of restoration costs.

Restoration Cost-Share Agreement.
Under this type of agreement, landowners commit to restoring degraded or lost wetland habitat, generally for a minimum of 10 years, without signing an easement agreement. USDA pays up to 75 percent of the cost of the restoration activity.

Summary of Selected Survey Responses

The following responses for incentives, disincentives, and suggestions for addressing disincentives to participating in USDA conservation programs for the benefit of threatened and endangered species and their habitats are those that were most frequently identified for WRP by the officials and landowners we surveyed. These responses may differ slightly from those identified in the body of this report because, in the report, we only include the responses that were identified most frequently across the majority of the six programs we reviewed. The most frequently cited incentives for participation in WRP included: (1) financial benefits; (2) a personal interest in conservation; and (3) program criteria that give greater consideration to projects that directly address threatened and endangered, and other at-risk species. The most frequently cited disincentives for participation in WRP included: (1) burdensome paperwork requirements, (2) fears about government regulations, (3) limited funding for both the program and participants, (4) restrictive eligibility and implementation requirements, (5) potential for participation in the program to hinder current and/or future agricultural production, and (6) length of the required contract. Suggestions most frequently identified to address disincentives for WRP participation included: (1) increasing funding, (2) providing greater education and outreach, and (3) increasing flexibility in program eligibility and participation.
Appendix VIII: Wildlife Habitat Incentives Program

The Federal Agricultural Improvement and Reform Act of 1996 authorized USDA's NRCS to work with landowners to develop wildlife habitat on their property through the Wildlife Habitat Incentives Program (WHIP). Through WHIP contracts, NRCS provides technical advice and financial assistance—through cost sharing on conservation projects—to landowners and others to develop upland, wetland, riparian, and aquatic habitat areas on their property. Although the primary purpose of WHIP is wildlife habitat development and enhancement, practices installed as a result of WHIP funding are often beneficial to farming and ranching, such as actions to control invasive species, stabilize streambanks, and re-establish native vegetation. In fiscal year 2005, USDA provided more than $34.3 million in financial assistance, and enrolled approximately 458,000 acres in over 3,300 WHIP agreements. WHIP participants may also receive financial and other assistance from other entities such as state and local government agencies, conservation districts, and private organizations. In fiscal year 2005, partners contributed almost $10 million to help WHIP participants establish wildlife practices on enrolled lands.

Eligibility

WHIP is available in all 50 states. To be eligible, an entity must own or have control of the land that is to be enrolled in the program for the duration of the contract. Lands may be privately owned; federally owned, if the primary benefit of the proposed project will be to private or tribal land; tribal land; or, in some cases, state and locally owned land. Lands that are already enrolled in some of the other USDA conservation programs are generally not eligible for WHIP.

Application Process

Applicants may apply for WHIP at any time, through a continuous sign-up process.
Selection Process

NRCS selects applications based on criteria that are developed pursuant to each state's WHIP implementation plan, which identifies wildlife habitat needs, and national priorities. NRCS state offices develop these plans with assistance from their respective state technical committees. Ranking criteria give priority to projects that will protect habitat or species of national or regional significance, or address needs in a state's WHIP plan. If land is determined to be eligible, NRCS also places an emphasis on enrolling land in habitat areas where wildlife species are experiencing declines or have significantly reduced populations, and where state and local partners and Indian Tribes have identified important wildlife and fishery needs. NRCS also emphasizes projects that include practices that are beneficial to fish and wildlife but may not otherwise be funded.

Payments and Conditions

NRCS provides cost-share payments to landowners under agreements that are generally between 5 and 10 years in length, depending on the practices installed. NRCS provides these payments to landowners who agree to adopt certain conservation practices, including land management practices (e.g., timber stand improvement to improve forest health); vegetation practices (e.g., planting native grasses to provide wildlife habitat); and structural practices (e.g., fencing to keep livestock out of streams). NRCS may provide up to 75 percent of the cost of installing practices. NRCS will provide greater cost-share payments for landowners who sign 15-year contracts and undertake habitat development practices on essential plant and animal habitat. Partners, including public agencies, nonprofit organizations, and others, may also assist by providing cost-share dollars, supplying equipment, or installing practices for the participants.
Summary of Selected Survey Responses

The following responses for incentives, disincentives, and suggestions for addressing disincentives to participating in USDA conservation programs for the benefit of threatened and endangered species and their habitats are those that were most frequently identified for WHIP by the officials and landowners we surveyed. These responses may differ slightly from those identified in the body of this report because, in the report, we only include the responses that were identified most frequently across the majority of the six programs we reviewed. The most frequently identified incentives for participation in WHIP included: (1) financial benefits, (2) a personal interest in conservation, (3) program criteria that give greater consideration to projects that directly address threatened and endangered species and other at-risk species, and (4) the ability to receive technical assistance. The most frequently identified disincentives for participation in WHIP included: (1) limited funding for both the program and participants, (2) fears about government regulations, (3) burdensome paperwork requirements, and (4) restrictive eligibility and implementation requirements. Suggestions most frequently identified to address disincentives for WHIP participation included: (1) increasing funding and (2) providing greater education and outreach.

Appendix IX: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the individual named above, Ulana Bihun, John Delicath, John Johnson, Richard Johnson, Jean McSween, Leslie Pollock, and Aaron Shiffrin made key contributions to this report.
Authorization for several conservation programs administered by the U.S. Department of Agriculture (USDA) expires in 2007, raising questions about how these programs may be modified, including how they can better support conservation of threatened and endangered species. Private landowners receive funding under these programs to implement conservation projects directed at several resource concerns, including threatened and endangered species. In this report, GAO discusses (1) stakeholder views on the incentives and disincentives to participating in USDA programs for the benefit of threatened and endangered species and their suggestions for addressing identified disincentives and (2) coordination efforts by USDA and the U.S. Fish and Wildlife Service (FWS) to benefit threatened and endangered species. In performing this work, GAO conducted telephone surveys with a nonprobability sample of over 150 federal and nonfederal officials and landowners. As might be expected, survey respondents most frequently identified receiving payments as the primary incentive for landowners to participate in USDA conservation programs for the benefit of threatened and endangered species or their habitats. The other most frequently identified incentives were program evaluation criteria that give projects directly addressing threatened or endangered species greater chances of being funded by USDA and landowners' personal interest in conservation. Relatedly, limited funding for programs overall and for the amount available to individual landowners was the most frequently identified disincentive to participation in USDA's programs. Fears about federal government regulations, paperwork requirements, participation and eligibility requirements, and the potential for participation to hinder current or future agricultural production were the next most frequently identified factors limiting participation. 
Survey respondents most frequently suggested increasing funding, improving education and outreach, streamlining paperwork requirements, and allowing more flexibility in program participation and eligibility requirements as ways to address program disincentives to participating in USDA's programs for the benefit of threatened and endangered species. Respondents indicated that educating and reaching out to more landowners may address a number of identified disincentives, including the fear of government regulations. For some disincentives, however, respondents noted that, while addressing them might entice more people to participate in the programs, it would not necessarily benefit threatened and endangered species. For example, some respondents suggested loosening requirements on the size of buffer strips in riparian areas, but others noted that doing so might harm certain species that are dependent on riparian areas for habitat. Much of the coordination between USDA and FWS for the benefit of threatened and endangered species occurs at their state and local offices, and is largely driven by the personal motivation of the staff involved. The types of coordination efforts that occur include sharing technical and financial assistance for implementing conservation projects, simplifying regulatory compliance procedures, assisting with special conservation projects, and participating on agency advisory groups. Agency officials noted that successful coordination is largely driven by individuals who have a strong commitment to coordinate, good interpersonal skills, and a willingness to work with others. Officials also recognized, however, that the quality of working relationships and the frequency of coordination between USDA and FWS staff varies considerably by location. 
To help improve working relationships and coordination, USDA and FWS have developed a draft memorandum of understanding that includes actions such as sharing information on imperiled species and streamlining regulatory processes. While the draft memorandum is a positive step toward strengthening coordination, it does not clearly articulate how these efforts are to be monitored and reported on to ensure that the intended goals are achieved and that coordination is sustained.
Background

Warranty and Guaranty Mechanisms in Navy and Coast Guard Shipbuilding

Acquiring a ship is a complex task, and it is expected that, to a certain degree, parts will break or welds may fail after a ship is delivered. Defects related to the welding, fabrication, electrical, piping, and propulsion systems on the ship that are the result of construction issues are typically the shipbuilder's responsibility to fix, as opposed to problems with systems, such as weapon or complex information technology systems, which are purchased separately and, often, are the government's responsibility to correct. Warranties and guarantees are both mechanisms to address the correction of shipbuilder-responsible defects, but they differ in key ways. Warranty provisions are outlined in the FAR, while the Navy sets forth its own guaranty provisions. In the case of the Coast Guard's NSC, program officials told us they adopted the Navy's guaranty provisions; therefore, we refer to the guaranty in this report as "the Navy's guaranty" even though it also pertains to the NSC. Shipbuilding contracts can use a warranty or a guaranty, or can have no mechanism to address defects after delivery. Table 2 shows key elements of warranties and guarantees. In Navy and Coast Guard shipbuilding, the warranty or guaranty period does not overlap with ship deployments but typically occurs while the crew is conducting tests and trials, and also performing any additional construction (such as modifications), among other activities. Generally, the warranty or guaranty concludes with a ship's final contract trials. Figure 1 highlights where the warranty or guaranty period resides in the typical Navy and Coast Guard shipbuilding process. While there are some nuances, the basic process of adjudicating a warranty or guaranty claim is the same for Navy and Coast Guard programs.
Figure 2 illustrates this process from the beginning—when a sailor identifies a problem—through the end, when the government utilizes the warranty or guaranty to fix the defect. While the focus of this report is on warranties and guarantees, these are not the only mechanisms intended to address shipbuilder-responsible defects after delivery. In some cases, the Navy and Coast Guard insert a latent defect contract clause that provides for the correction of deficiencies that could not be discovered through reasonable inspection when the ship was delivered. At final acceptance, prior to delivery to the fleet, the Navy and Coast Guard acknowledge that the ship conforms to all quality requirements, with the exception of latent defects, fraud, or gross mistakes amounting to fraud. Further, there is also a separate process for deficiencies pertaining to fraudulent work or parts, such as counterfeit parts.

Contract Types Used to Purchase Navy and Coast Guard Ships

The Navy and Coast Guard use three primary contract types when purchasing ships—fixed-price type, incentive type, and cost-reimbursement type contracts. The first ships of a class, called lead ships, are often purchased with cost-reimbursement type contracts under which the government generally bears the risk of cost, schedule, or ship performance problems. After the first few ships in a class, the Navy and Coast Guard generally use firm fixed-price or fixed-price incentive contracts because, as more ships are built, there is greater certainty about costs and performance. The following is a brief description of each contract type used to construct and repair the Navy and Coast Guard ships we reviewed:

Firm Fixed-Price Contract—The government agrees to purchase a ship for a set price. The shipbuilder bears the full responsibility for increases in the cost of construction and earns a larger profit if actual costs are below the contract price.
When using this contract type, the cost of the warranty or guaranty can be included in the construction price of the vessel or priced separately for an agreed-upon amount. In the government, this contract type is usually used for mature shipbuilding efforts because it works best when the ship buyer and shipbuilder are confident in the cost of ship construction.

Fixed-Price with Economic Price Adjustment (EPA) Contract—Similar to a firm fixed-price contract, the government and the shipbuilder agree to a set price for the ship, but in a fixed-price EPA contract the government and shipbuilder agree to adjust, upward or downward, the stated contract price upon occurrence of specified contingencies, such as changes in costs of labor or material that the shipbuilder experiences during contract performance. For example, changes to the price of steel could be accounted for using this contract type.

Fixed-Price Incentive Contract—Under fixed-price incentive contracts, the government and the shipbuilder agree upon a target cost, target profit, ceiling price, and a profit adjustment formula referred to as a share line. The government and the shipbuilder share, in accordance with the share line, responsibility for cost increases or decreases compared to the target cost, up to the ceiling price, at which point the shipbuilder is responsible for all remaining costs. Generally, the share line functions to decrease the shipbuilder's profit as its actual costs exceed the target cost. Likewise, the shipbuilder's profit increases when actual costs are less than the target cost for the ship. In an illustrative example, typical of ships in our review, if a contract has a 70/30 share line and the shipbuilder's actual costs were over the target cost by $1 million, the government would pay 70 percent ($700,000) of the additional costs needed to complete the ship and the shipbuilder would absorb 30 percent of the cost overrun ($300,000) as a reduction to profit.
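The 70/30 share line arithmetic in the example above can be expressed as a short calculation. This is an illustrative sketch only; the function name and the integer-dollar convention are ours, not part of any contract document:

```python
def split_cost_overrun(overrun_dollars, government_share_pct=70):
    """Split a cost overrun between the government and the shipbuilder
    according to a fixed-price incentive share line (e.g., 70/30).

    Integer dollars keep the arithmetic exact."""
    government_pays = overrun_dollars * government_share_pct // 100
    # The shipbuilder's portion is absorbed as a reduction to its profit.
    shipbuilder_absorbs = overrun_dollars - government_pays
    return government_pays, shipbuilder_absorbs

# The report's example: a 70/30 share line and actual costs $1 million over target.
gov, builder = split_cost_overrun(1_000_000, government_share_pct=70)
print(gov, builder)  # 700000 300000
```

Note that this split applies only up to the ceiling price; beyond that point, as the report states, the shipbuilder bears all remaining costs.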
Cost-Reimbursement Contract—Cost-reimbursement type contracts provide for the government to pay the shipbuilder's costs to the extent specified in the contract and may include an additional fee (profit). As opposed to a fixed-price type contract that requires the shipbuilder to deliver a ship for the price specified in the contract, under a cost-reimbursement type contract the shipbuilder agrees to use its best efforts to perform the work within the estimated costs. But the government must reimburse the builder for its allowable costs regardless of whether the work is complete. These contracts can also include a guaranty clause. In the case of the Coast Guard's NSC, the first three ships were built using cost-reimbursement type contracts while the remaining five are being built with fixed-price incentive contracts. After ship delivery, the Navy may award separate cost-reimbursement type contracts to complete post-delivery activities, such as installing weapon systems and repairing defects. Table 3 shows the contract and vessel type for each Navy and Coast Guard ship we reviewed. As discussed earlier, we selected ships that were constructed after at least one other ship in the class.

Navy's Guaranty Mechanism Generally Has No Effect on Improving Cost and Quality Outcomes, in Contrast to FRC and Commercial Warranties

The Navy and Coast Guard paid shipbuilders to repair shipbuilder-responsible deficiencies after delivery for most of the ships that we reviewed. In the four case study ships that used a fixed-price incentive contract with a guaranty, the Navy and Coast Guard paid the shipbuilder to build the ship as part of the construction process, and then paid the same shipbuilder a second time to repair the ship when defects were discovered after delivery. Navy contracting officials stated that the Navy accepts the costs of fixing deficiencies to lower the overall purchase price of its ships.
However, this contracting approach results in the shipbuilder profiting from fixing deficiencies on a ship that it was initially responsible for delivering to the government in a satisfactory condition. In contrast, commercial ship buyers and the Coast Guard—in the case of the FRC—used warranties combined with firm fixed-price or fixed-price with EPA contracts, respectively, to lower the ultimate cost of the ship while also improving the ship's quality. The Coast Guard's experience with the FRC is akin to outcomes on large commercial ships. Buyers of these ships told us that they include a warranty that holds the shipbuilder financially responsible for correcting deficiencies following delivery and improves ship quality through a variety of strategies.

Contract Type and Terms Dictate the Degree to Which the Government Pays for Defects after Delivery

For the six government ships we reviewed, the type of contract and terms used to purchase the ship determined the degree to which the government or the shipbuilder paid for shipbuilder-responsible deficiencies after delivery. Figure 3 shows the percentage of shipbuilder-responsible defects that the government paid to correct, compared to those absorbed by the shipbuilder. As shown in figure 3, the government paid 86 percent of the costs associated with fixing defects that were attributed to the shipbuilder for all of the ships in our review, which we calculated as $6.4 million based on available information. However, the full extent of what the government paid to correct shipbuilder-responsible defects across all of our case study ships is not known because the data did not identify whether the shipbuilder or the government was responsible for each defect. These data issues are addressed below, and appendix I contains more detail. Also, as discussed below, the breakdown of who pays for shipbuilder-responsible defects differs depending on the contract type and terms for the ships.
Under Fixed-Price Incentive Contracts, Government Pays Additional Costs and Shipbuilders Earn Profit to Correct Defects

For the four fixed-price incentive contracts in our review, the government paid for almost all shipbuilder-responsible deficiencies found after delivery. For example, on LPD 25, the ship's exterior hull paint began to peel shortly after delivery. The Navy determined that the shipbuilder did not adequately prepare the surface of the ship prior to applying a second coat of paint and submitted the issue as a guaranty claim. The Navy docked the ship and the shipbuilder re-painted the vessel. The shipbuilder submitted invoices for the work completed and the Navy paid the shipbuilder $315,000—even though the shipbuilder was responsible for the failure. This example illustrates how a guaranty functions with a fixed-price incentive contract type, which results in the government paying the costs to correct problems. As shown in figure 4, the government paid for 89 percent of the costs to correct shipbuilder-responsible defects for the ships in our review that had fixed-price incentive contracts, which we calculated as $4.9 million based on available information. As noted above, however, this figure does not include all shipbuilder-responsible defects for these four ships due to the data issues we discuss in the report. For the fixed-price incentive contracts in our review, the government's share of the costs to correct shipbuilder-responsible defects is determined by

1. the share line up to the contractor's limitation of liability; or
2. in the case of NSC 4, a separate cost-reimbursement type line item in the construction contract not subject to the share line calculation; and, in many cases,
3. follow-on contracts, or modification to existing contracts, to pay for remaining shipbuilder-responsible deficiencies once the limitation of liability was reached.
We elaborate on these three scenarios and how they impact the guaranty calculations for each ship below. The share line determines the government's costs up to the contractor's limitation of liability: For two of the three Navy ships in our review with fixed-price incentive contracts, the Navy paid for the guaranty work up to the contractor's limitation of liability for correction of defects—which was initially $1 million or less for all of the Navy ships with fixed-price incentive contracts in our review. A key point is that, for these ships, the government's share of the payments for the correction of shipbuilder-responsible defects depends upon the overall cost performance of the shipbuilder. While the specific share line calculations differ by ship, for Navy ships with fixed-price incentive contracts, guaranty costs, up to the limitation of liability, are included in the overall target cost of the ship. As such, when actual construction costs are above the target costs, the actual price of the ship to the government increases and the shipbuilder's profit decreases according to an agreed-upon percentage determined by the share line. This means that the government pays additional costs for every dollar over the target cost and, likewise, the shipbuilder absorbs its share of the cost overrun through a reduction in its target profit. In the case of LPD 25 for example, the contract initially included $1 million of guaranty work (the limitation of liability). Actual costs for LPD 25 construction exceeded the target cost by 32 percent, which, according to the share line, reduced the shipbuilder's profit by 12 percent for the whole ship (including the guaranty work). This means that for the guaranty work up to the initial $1 million limitation of liability, the shipbuilder lost $120,000 in profit and the government paid $880,000 of the first $1 million of shipbuilder-responsible defects discovered after delivery.
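The LPD 25 arithmetic can be sketched in a few lines of Python. This is an illustrative simplification of how a share line splits guaranty costs, not the Navy's actual contract formula; the 12 percent shipbuilder share is the profit reduction cited in this report.

```python
# Illustrative sketch of how a share line splits guaranty costs that are
# included in the target cost of a fixed-price incentive contract.
# Simplified from the LPD 25 figures in this report; not the actual formula.

def shareline_split(guaranty_cost, shipbuilder_share):
    """Return (government_paid, shipbuilder_absorbed) for guaranty work
    subject to the share line."""
    shipbuilder_absorbed = guaranty_cost * shipbuilder_share
    return guaranty_cost - shipbuilder_absorbed, shipbuilder_absorbed

# LPD 25: cost overruns reduced the shipbuilder's profit by 12 percent,
# so the builder effectively absorbed 12 cents of each guaranty dollar
# up to the initial $1 million limitation of liability.
gov_paid, builder_absorbed = shareline_split(1_000_000, 0.12)
print(f"Government paid:      ${gov_paid:,.0f}")          # $880,000
print(f"Shipbuilder absorbed: ${builder_absorbed:,.0f}")  # $120,000
```

Note that the sign of the split flips with cost performance: had the ship come in under its target cost, the same mechanism would have paid the builder additional profit on the guaranty work.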
If actual costs for the ship were lower than the target cost, the opposite scenario would occur—meaning the shipbuilder would earn additional profit for correcting errors determined to be its responsibility. A separate cost-reimbursement type line item in the construction contract not subject to the share line calculation determines the government's costs: In the case of NSC 4, a separate cost-reimbursement type line item in the construction contract determined the Coast Guard's share of the guaranty costs. Unlike the Navy's approach, the Coast Guard's guaranty on NSC 4 was purchased as a separate cost-reimbursement type line item in the construction contract and thus was not subject to the share line calculation. Using this method, the Coast Guard paid the shipbuilder $588,000 to fix claims under the guaranty, which included a fixed-fee (profit) amount. However, contracting for deficiency correction on a separate line item allowed the Coast Guard to track deficiency claims and payments by specifically denoting them as guaranty work, a practice that could increase the data available to improve agency assessments about shipbuilder-responsible defects. According to Navy contracting officials, the Navy has not considered the pros and cons of reflecting guaranty work as a separate line item in shipbuilding contracts—potentially missing an opportunity to improve incentives and increase transparency. Follow-on contracts, or modification to existing contracts, to pay for remaining shipbuilder-responsible deficiencies once the limitation of liability was reached: For the three Navy ships with fixed-price incentive contracts in our review, costs associated with shipbuilder-responsible deficiencies exceeded the initial limitation of contractor's liability.
Navy officials told us, in most cases, they do not increase the limitation of contractor's liability under construction contracts to cover shipbuilder-responsible defects because it is generally more expensive—due to the share line arrangement—than paying the shipbuilder to correct deficiencies under a separate contract. In general, to pay for the remaining shipbuilder-responsible deficiencies beyond the contractor's limitation of liability, the Navy awards follow-on contracts to the shipbuilders. Navy officials stated that these follow-on contracts could be used exclusively to pay for correcting the deficiencies, but—with respect to LCS 3 and 4—these contracts were also used for other purposes. In the case of NSC 4, the government did not reach the $1 million shipbuilder's limitation of liability primarily because the Coast Guard chose to use other contracting mechanisms to address a majority of shipbuilder-responsible defects. For each case study ship with a fixed-price incentive contract, we determined the total amount the government had to pay to correct shipbuilder-responsible defects: LPD 25—The shipbuilder exceeded the ceiling price of the vessel, making it responsible for all additional costs to complete the ship after that point. Under the contract, the contractor's limitation of liability for correction of defects was initially $1 million, but the government found $4.8 million in deficiencies. The Navy modified the contract to increase the contractor's limitation of liability to $4.8 million to cover the costs for correcting all shipbuilder-responsible deficiencies. In increasing the contractor's limitation of liability, the Navy also increased the contract's target cost and ceiling by $3.8 million each, since $1 million was included in the initial target cost. In addition, the Navy increased the contractor's target profit.
These changes resulted in a slight improvement in the shipbuilder's cost performance and reduced the amount of profit that the shipbuilder lost. While the shipbuilder earned less profit than it would have earned had the ship been delivered at its target cost, it still earned some profit. Based on our analysis of the final cost of the ship and how the share line applied to the guaranty work, we found that the shipbuilder was responsible for $578,000 of the $4.8 million in guaranty claims under the construction contract. LCS 3 and 4—According to Navy contracting officers, the Navy negotiated lower construction costs for these vessels by reducing the limitation of contractor liability associated with the guaranty from $1 million to $100,000 and $0, respectively. Given the low or non-existent limitation of contractor's liability for guaranty work, the Navy paid for almost all of the shipbuilder-responsible deficiencies discovered after delivery using cost-reimbursable orders under basic ordering agreements. However, these agreements were used for many purposes, including correcting defects on the installation of government-purchased systems and completion of construction work. The Navy does not separately track guaranty work when using these basic ordering agreements. As a result, we could not determine the total amount of claims that were the result of shipbuilder-responsible defects. For LCS 3 and LCS 4, the Navy spent $46 million and $77 million, respectively, under these post-delivery agreements to correct defects, complete ship construction, and assist with tests and trials, among other tasks. NSC 4—Because the guaranty on NSC 4 is a separately priced cost-reimbursement type contract line item from the construction line, it was not subject to the construction share line.
The guaranty line item provides for the Coast Guard to pay for guaranty claims on a cost-reimbursable basis, up to the contractor's limitation of liability for correction of defects, which was $1 million. In addition to the $588,000 paid by the Coast Guard under the guaranty, Coast Guard program officials also told us that a majority of the shipbuilder-responsible defects were corrected through other contracting mechanisms or Coast Guard units, particularly if these solutions were less expensive. However, the Coast Guard does not track the costs associated with the shipbuilder-responsible defects paid through these other means. Although the Navy uses profit or fee as an incentive to encourage cost efficiency, the Navy's guaranty, when used with a fixed-price incentive construction contract, results in the shipbuilder earning profit or fee from correcting its own mistakes. For the ships with fixed-price incentive contracts, we found that the shipbuilders earned between 1 and 10 percent profit or fee on the costs the government paid to correct defects determined to be the shipbuilders' responsibility, both for guaranty claims under the construction contract and for work under any follow-on arrangement to correct remaining deficiencies. According to the FAR, incentives should motivate contractor efforts towards efficiency and improve contractor cost performance. Navy contracting officials stated that shifting additional cost burden or reducing the shipbuilder's profit for correcting defects will result in more expensive ships, as the shipbuilder will shift this additional risk into higher target costs for the ships. However, the Navy has not assessed different methods to change the terms of the guaranty in the contract with regard to paying for deficiencies.
For example, it has not assessed the pros and cons of structuring contracts to prevent shipbuilders from earning profit from guaranty work; breaking the guaranty out as a separate line item not subject to the share line; or revisiting the limitation of liability amounts. Without reassessing its practice of allowing the shipbuilder to earn profit for correcting shipbuilder-responsible defects, the Navy may be missing opportunities to improve incentives and lower costs to the taxpayer.

Under Other Fixed-Price Contracts, Shipbuilder Pays to Correct Defects, but Value of Guaranty or Warranty Depends on Contract Terms

For the two ships we reviewed that used a firm fixed-price and a fixed-price with EPA contract, the shipbuilder, rather than the government, paid for shipbuilder-responsible deficiencies after delivery, but the value of the warranty or guaranty was different for each ship. The Coast Guard paid $629,315 at contract award for the warranty on the Coast Guard's sixth FRC. Under the terms of the warranty, the Coast Guard could require the shipbuilder to repair an unlimited amount of shipbuilder-responsible defects within the first year after delivery, and the shipbuilder was responsible for all costs to do so. To date, the Coast Guard has claimed over $1.5 million worth of repairs under the warranty for the ship. Thus, the warranty provided a 145 percent return on investment. In the case of the DDG 112, the Navy included a guaranty with a $5 million limitation of liability. The DDG 112 contract was originally fixed-price incentive but was later modified to a firm fixed-price contract. In making this modification, the Navy did not change the guaranty terms or limitation of liability amount, but the change in contract type altered responsibility for paying for guaranty work.
Instead of the Navy paying the shipbuilder its costs (which could include profit) subject to the share line—as would have been done under the fixed-price incentive contract type—the shipbuilder now had to absorb the costs for the correction of up to $5 million in deficiencies. By the time the guaranty period expired, however, the Navy had made a total of $459,000 in guaranty claims but had paid the shipbuilder approximately $902,000—principally for the services of a shipbuilder guaranty engineer who rode with the ship's crew for the 12-month guaranty period—to administer the guaranty. According to Navy program officials, the guaranty engineer, who works for the shipbuilder, provides value by scheduling repair, maintenance, and upgrade work and lowering guaranty claims since he or she works with the ship's crew to correct problems as they occur without filing a formal claim. Nevertheless, taking into account the cost to administer the guaranty, it cost the government $443,000 ($902,000 paid for the guaranty engineer minus the $459,000 in claims). As a result, DDG 112's guaranty resulted in a lower value than FRC 6's warranty, particularly since the shipbuilder's personnel to administer the warranty on the FRC are provided at no additional cost to the government. Figure 5 illustrates the costs borne by the government and shipbuilder to correct shipbuilder-responsible defects on the FRC 6 and the DDG 112 after delivery.

Unlike Navy Guarantees, Warranties Used in Commercial Shipbuilding and FRC Program Improve Ship Quality Outcomes

Commercial ship buyers and the Coast Guard, in the case of the FRC, demonstrate that warranties can be valuable in shipbuilding because they save the ship buyer money and improve the quality of the vessel. Commercial ship buyers and Coast Guard officials stated that warranties foster quality performance because the shipbuilder's profit erodes as it spends money to correct deficiencies after delivery, during the warranty period.
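The contrast between the two mechanisms can be made concrete with a short Python sketch using the approximate dollar figures cited in this report; since the FRC 6 claims total is "over $1.5 million," its computed net value is a floor, not an exact amount.

```python
# Net value to the government of each mechanism, using approximate figures
# cited in this report. Positive means the government came out ahead.

def net_value(claims_recovered, cost_to_government):
    return claims_recovered - cost_to_government

# FRC 6 warranty: $629,315 paid at contract award; over $1.5 million in
# shipbuilder-paid repairs claimed to date.
frc6 = net_value(1_500_000, 629_315)

# DDG 112 guaranty: $459,000 in claims; ~$902,000 paid to the shipbuilder
# to administer the guaranty (the guaranty engineer).
ddg112 = net_value(459_000, 902_000)

print(f"FRC 6 warranty net value:   ${frc6:,}")    # at least $870,685
print(f"DDG 112 guaranty net value: ${ddg112:,}")  # -$443,000
```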
In assessing commercial and Coast Guard warranties in contrast with the Navy's guaranty terms for the ships we reviewed, we found that for ships with warranties: (1) the shipbuilder corrected more deficiencies, (2) the value of the warranty was increased through extensions, (3) the government incentivized better terms from shipbuilder suppliers and class-wide quality improvements, and (4) a warranty engineer who worked for the ship buyer helped improve quality outcomes.

Robust Defect Correction

In general, commercial buyers, as well as the Coast Guard for the FRC, submitted more claims for correcting defects than the Navy did for its ships with guarantees. On average, commercial ship buyers told us that the number of warranty claims totals 1 to 2 percent of the construction value of the ship, but can range as high as 3 to 4 percent if key systems experience failures during the warranty period. For example, one commercial ship buyer had to take a ship out of operation for 2 months due to engine failures, which were repaired at the shipbuilder's expense. Commercial ship buyers told us that even well-built and mature ships generally have claims totaling at least 1 percent of the cost of construction. For FRC 6, the Coast Guard's program office estimates that the shipbuilder will pay about 4 percent of the construction cost of the ship to fix deficiencies. In comparison, the Navy has typically claimed significantly fewer construction defects on its ships following delivery. Based on our prior work, this situation is not because the Navy's ships had fewer defects, as Navy ships are delivered to the government with numerous defects that must be corrected later. For the two Navy ships we reviewed that tracked guaranty claims (LPD 25 and DDG 112), the Navy, on average, made guaranty claims totaling less than one-tenth of a percent of the construction cost of the ship.
For example, on DDG 112, the Navy submitted claims equal to 0.08 percent of the ship's $602 million construction cost. Navy officials attribute the DDG 112's low number of claims to its well-understood design and construction because the shipbuilder has delivered 34 ships. In November 2013, we found that commercial ship buyers have more effective inspection practices than the Navy to discover and resolve quality issues prior to delivery. In addition, commercial buyers told us that even the most mature and well-built ships typically experience shipbuilder-responsible deficiencies after delivery that total about one percent of the vessel's cost—more than 10 times the rate of deficiencies discovered and attributed to the shipbuilder on the DDG 112. The Navy had a similarly low level of claims on LPD 25—equaling about one-third of one percent of the ship's construction cost—a ship that had fewer predecessor ships and a history of quality problems, and that exceeded its ceiling cost, which we discussed in our November 2013 report. Since the Navy pays the shipbuilder to correct deficiencies regardless of responsibility, there is less incentive than in commercial shipbuilding or the FRC program to discover every deficiency during the guaranty period. Further, the Navy may not file a guaranty claim for every shipbuilder-responsible issue if it decides that another contract mechanism is better suited to address the problem. According to several Navy and Coast Guard officials, the government does not always submit a guaranty claim to correct shipbuilder-responsible deficiencies, particularly in cases where a correction is not needed immediately or can be accomplished using a less expensive contracting arrangement. For example, according to these officials, if the fleet that maintains in-service ships has a contract in place that can purchase needed parts and services more cheaply, then the Navy or Coast Guard will likely use that contract instead of the guaranty.
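As a rough check on the claim-rate comparison above, a few lines of Python (using only figures cited in this report) show how the commercial norm of about 1 percent compares with DDG 112's 0.08 percent:

```python
# Guaranty/warranty claims expressed in dollars at a given claim rate,
# using only figures cited in this report.

def claims_at_rate(rate_pct, construction_cost):
    return rate_pct / 100 * construction_cost

DDG112_COST = 602_000_000  # DDG 112 construction cost

navy_rate = 0.08        # percent of cost claimed on DDG 112
commercial_rate = 1.0   # percent typical even for mature commercial ships

print(f"DDG 112 claims (0.08%):    ${claims_at_rate(navy_rate, DDG112_COST):,.0f}")
print(f"At the commercial 1% norm: ${claims_at_rate(commercial_rate, DDG112_COST):,.0f}")
print(f"Ratio: {commercial_rate / navy_rate:.1f}x")  # consistent with 'more than 10 times'
```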
In these cases, the deficiency would not be documented as a guaranty or shipbuilder-responsible defect. We plan to examine the Navy's defect correction process as a part of future work focused on the Navy's shipbuilding practices after ship delivery.

Extended Warranty Period for Repaired Defects

Another critical aspect of commercial ship and Coast Guard warranties is that they contain provisions to extend warranties on parts replaced during the warranty period. On the FRC 6, the shipbuilder replaced a water heater 9 months into the 12-month warranty period, and the warranty period for the water heater was subsequently extended for an additional 12 months. Commercial companies extend the warranty on parts that break during the warranty period as well, in some cases capping the extensions at either 24 or 36 months. For most Navy ships, extending the guaranty would not provide additional coverage since the Navy, under a fixed-price incentive arrangement, would also pay for any defects found during the extension period. This is because the Navy's practice for its fixed-price incentive contracts is to include the guaranty as part of the overall negotiated target cost for construction of the ship subject to the share line. Thus, an extended guaranty period would not decrease the Navy's financial liability.

Better Terms with Suppliers and Class-wide Quality Improvements

In the case of the Coast Guard's FRC warranty, shipbuilder representatives told us the terms force them to ensure that they receive the same warranty terms from their suppliers. This arrangement, in turn, can result in a better value for the government. Because of the length of time required to build a ship, parts are often purchased and installed a year or more before the ship is delivered to the government, by which point these supplies may no longer be covered under their original warranties.
To address this issue, shipbuilder representatives stated that they actively seek supplier warranties that mirror the Coast Guard's terms and conditions to share the potential costs associated with the warranty. For FRC 6, the subcontractors are to conduct repairs accounting for about 40 percent of the warranty work, while the shipbuilder is responsible for the remaining warranty work. As an example of how this can benefit the government and the shipbuilder, on two earlier ships of the class—FRC 2 and 3—the Coast Guard submitted over $10 million worth of warranty claims. According to FRC program officials, these claims were primarily engine problems that were paid for by the engine supplier. As it turned out, these engine issues comprised more than 90 percent of the total amount of warranty claims on these ships. Thus, a significant portion of the claims has not fallen solely on the shipbuilder to fix, demonstrating how a warranty can provide value without unduly harming the shipbuilder's profits. In effect, the shipbuilder is incentivized to negotiate extended warranties with its suppliers that match the duration of the warranty on the whole ship. Likewise, a commercial ship buyer told us that their company prefers supplier warranties on key parts that extend beyond the shipbuilder's warranty on the full vessel and works to negotiate these terms directly with the original equipment manufacturers. However, for the Navy ships we reviewed, the shipbuilders do not have this same incentive since the Navy absorbs the risk of paying for defects. In addition, commercial ship buyers and the Coast Guard have received improvements and corrections to ships during construction as the result of defects found on previous ships in the class—at the shipbuilder's expense. For example, representatives from a commercial company told us about a defect in the anchor of a ship that required a design change.
The shipbuilder made the changes to the ship, at its own expense, during the warranty period and also absorbed the costs associated with changing the design and fixing other ships already in construction. The Coast Guard's FRC warranty fosters quality performance by encouraging the shipbuilder to address class-wide issues efficiently through the construction process. For example, according to FRC program officials, when a quality problem is identified on a particular ship that impacts the entire class of FRC ships, the shipbuilder corrects the problem and works with the Coast Guard to back fit and forward fit other ships in the class at the builder's expense to avoid a potential warranty claim regarding the problem in the future. Using its guaranty with a fixed-price incentive contract, the Navy would likely pay for all costs associated with such a change.

Warranty Engineers Advocate for Ship Buyers, not Shipbuilders

A final difference between commercial and Coast Guard ship buyers and the Navy is the use of a warranty engineer who represents the interests of the ship buyer. According to several commercial ship buyers we spoke with, the ship buyer's warranty engineer identifies defects by reviewing failures that occurred on the ship, identifying them as a warranty issue, and submitting a warranty claim to the shipbuilder. The warranty engineer also ensures that the work is completed to the ship buyer's satisfaction. One commercial ship buyer stated that, in theory, the most experienced and "crankiest" senior engineer in the fleet best suits the purpose of the warranty engineer. The Coast Guard's warranty engineer performs similar functions on the FRC. In contrast, the Navy generally pays the shipbuilder to supply a guaranty engineer whose roles and responsibilities vary among the ships we reviewed.
Navy program and contracting officials, for the ships we reviewed, stated that the shipbuilder's guaranty engineer provides valuable services ranging from mentoring new ship crews to scheduling the remaining work that needs to be completed on the vessel. However, because the guaranty engineer works for the shipbuilder, he or she does not identify defects. Rather, the guaranty engineer works with the ship's crew primarily to support the Navy in troubleshooting issues and scheduling necessary corrective actions. These comparisons highlight differences between the Coast Guard (in the case of the FRC) and commercial buyers' warranty practices versus the Navy's guaranty practices.

Objective of the Guaranty Is Unclear and Navy Has Not Fully Considered the Costs and Benefits of Using a Warranty per DOD Guidance

The Navy's guaranty lacks a clear objective, with insufficient guidance on when or how to use a guaranty. Without a clear objective, Navy contracting officials cannot properly implement the guaranty or assess its effectiveness. While use of a warranty per the FAR is not mandatory, the Navy has not examined whether a warranty is appropriate for the ships it purchases—a practice recommended in DOD guidance. Further, we found that the Navy's data regarding ship deficiencies are not sufficient to estimate the total expected shipbuilder-responsible defects on its ships—critical for understanding what the benefits of a warranty could be. In contrast, the Coast Guard's FRC program warranty is based on FAR principles. The FAR provides that (1) the agency assesses whether a warranty is worth the cost, (2) the government may direct the contractor to repair or replace defective items at the contractor's expense, and (3) a principal purpose of a warranty in a government contract is to foster quality performance.
Unclear Objective and Sparse Guidance Make It Difficult to Assess Whether Low Guaranty Liability Limits Are Effective

The Navy does not have a clear objective for using its guaranty, which makes it difficult for Navy contracting and program officials to implement the guaranty effectively. According to Standards for Internal Control in the Federal Government, government programs require objectives and guidance to ensure that a program is achieving the desired outcomes in a manner that effectively uses funding. Navy program officials provided various responses to us regarding the objective of the guaranty. For example, officials from several Navy program offices acknowledged that the guaranty clause is not a tool to improve ship quality. Rather, they noted that the guaranty clause helped expedite the procurement of needed parts after delivery by establishing a contractual relationship with the shipbuilder for all services and parts associated with the construction of the ship, as opposed to purchasing these items individually through other means. Because the guaranty is included in the construction contract, the guaranty clause allows the government to purchase parts and labor in the same manner and at the same pricing as for ship construction. Other Navy program officials stated that the guaranty is useful because it involves purchasing the services of an experienced guaranty engineer, who helps schedule work that needs to be done and mentors the ship's crew—usually composed of junior officers who are new to the ship at the time the ship is delivered. While these services can provide meaningful benefits, it is unclear whether the guaranty is the appropriate and most cost-effective mechanism to accomplish these tasks, and the Navy has not assessed whether this is the case. Further, the Navy lacks sufficient instruction and data to guide contracting officers on how to correctly implement the guaranty to achieve an intended result.
A senior Navy contracting official provided us with the available guidance for guarantees, which primarily consists of standard contract language used to establish a guaranty in ship contracts. The Naval Sea Systems Command Handbook also provides some instruction on how to execute the guaranty after a ship is delivered, but there is little guidance that focuses on specific contract terms to help contracting officers develop an effective guaranty. Navy contracting officials confirmed that they have historically used the existing guaranty on all ship purchases dating back 50 years without assessing the use of a warranty or whether the guaranty is effective. A senior Navy contracting official provided us with an assessment from 1980, but this assessment did not include any analysis about how the Navy's guaranty provision compared to alternatives. In addition, without sufficient guidance, Navy contract officials have at times confused the guaranty with a FAR-based warranty. For example, in a recent contract memo, Navy contracting officials documented that a DDG 51 class vessel had a warranty based upon the FAR. Instead, this ship actually had a typical Navy guaranty, which is not based on the FAR, exemplifying that these officials did not have a clear understanding of the objective and process associated with the Navy's guaranty provision. Moreover, according to internal control standards, government programs are required to provide guidance that instructs program managers on how best to accomplish the program's objective and that incorporates lessons learned from experience over time. Further, the Navy cannot measure the effectiveness of the guaranty mechanism because it lacks a clear objective.
According to Standards for Internal Control in the Federal Government, measuring the effectiveness of a program is required to provide reasonable assurance that the program will achieve its designated objective and that these objectives align with the organization’s overall goals. For example, it is difficult to determine if the Navy’s practice of establishing low limitations of contractor’s liability for the correction of defects has a negative effect because the Navy’s objective for the guaranty is not clear. The Navy’s standard clause on limitation of liability states that the shipbuilder’s limitation of liability should be set at two percent of the target construction cost of the ship or an amount to be determined by the program. However, for the programs we reviewed, the Navy sets the limitation of liability significantly below the level suggested in its guidance. As noted, the limitation of liability for LPD 25 was initially set at $1 million—equal to 0.08 percent of the target price of the ship. Navy contracting officials told us that they keep the limitation of liability low because it allows the Navy to pursue other, less expensive contracting arrangements to fix the deficiencies, while at the same time extending the contractual relationship with the shipbuilder after delivery to provide for an expedited method for fixing any issues that arise shortly after delivery. If, on the other hand, as Navy contracting officials stated, the guaranty is meant to address shipbuilder-responsible deficiencies after delivery—while the guaranty period is still in effect—then a low limitation of liability is counter to this objective. Without a clear objective backed by instructive guidance, the Navy is not well-positioned to measure the effectiveness of the guaranty and then make improvements to it if necessary. 
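The gap between the Navy's standard clause and its practice on LPD 25 can be illustrated with a back-of-envelope calculation. The $1.25 billion target price below is not stated in this report; it is implied by the $1 million limitation equaling 0.08 percent of the target price, and the standard clause's two-percent benchmark is applied to that inferred figure as an assumption.

```python
# Back-of-envelope comparison of LPD 25's actual limitation of liability
# with the 2 percent suggested by the Navy's standard clause.
# The target price is inferred from figures in this report, not stated directly.

actual_limitation = 1_000_000
limitation_share = 0.08 / 100  # $1 million was 0.08% of the target price

implied_target_price = actual_limitation / limitation_share
suggested_limitation = 0.02 * implied_target_price  # standard clause: 2%

print(f"Implied target price:    ${implied_target_price / 1e9:.2f} billion")
print(f"Suggested 2% limitation: ${suggested_limitation / 1e6:.0f} million")
print(f"Actual limitation:       ${actual_limitation / 1e6:.0f} million")
```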
FRC Warranty Adheres to Federal Acquisition Regulation While Navy Has Not Considered Using a Warranty

The FRC's warranty is based on principles set forth in the FAR that discuss the objective and effective implementation of a warranty. While the FAR asks agencies to consider the use of a warranty, it notes that the use of a warranty is not mandatory. The FRC's warranty also adheres to Department of Homeland Security guidance regarding warranties, which implements the FAR provision on warranties. If an agency chooses to use a warranty, the FAR discusses principles for effectively using them to better ensure that the government derives benefits. In contrast, there is no FAR provision that pertains to the Navy's guaranty, as it is an agency-specific contracting mechanism. Table 4 highlights the FAR principles compared with the Coast Guard's warranty on the FRC and the degree to which the Navy's guaranty mechanism demonstrated similar principles. The Department of Defense created a warranty guide in September 2009 that expands upon the principles in the FAR. For example, the guide requires documentation, in a warranty plan, of why a warranty is, or is not, appropriate for an acquisition. While it is not required to use a warranty, the Navy neither considers using a warranty for large ship programs nor documents how it came to the decision not to use one. According to Navy contracting officials, the Navy does not know how a shipbuilder would price a warranty and, ultimately, what the Navy would have to pay in comparison to its current guaranty structure. Moreover, the Navy cannot assess the potential benefits since it does not have a full understanding of the total amount of shipbuilder-responsible deficiencies on its ships after delivery.
For example, we reviewed deficiency data documented by the ships’ crews after delivery and found that these data rarely differentiate between shipbuilder- and government-responsible defects and often do not include the estimated cost to correct defects. By contrast, the Coast Guard documents each deficiency, the date the shipbuilder fixed the issue, the estimated cost of the issue, and any warranty extensions that apply. Navy contracting officials stated that shipbuilding is radically different from most Department of Defense acquisitions, in part because of the limited shipbuilding industrial base, which they believe makes a warranty not cost-effective for the Navy’s ships. While using the warranty mechanism in a non-competitive ship purchase will likely require some customization and analysis, without assessing the value of using a warranty, the Navy cannot make an informed decision about which mechanism best protects the government from paying for shipbuilder-responsible defects after ships are delivered. Finally, Navy contracting officials did not think a warranty would work effectively with the fixed-price incentive contracts typically used with the Navy’s large, complex, and often sole-source shipbuilding programs. However, the Coast Guard is planning to use a warranty with a fixed-price incentive contract on its forthcoming Offshore Patrol Cutter—a ship of similar size and complexity as the Navy’s LCS. In doing so, the Coast Guard plans to purchase the warranty as a separate firm-fixed-price item in the construction contract, which should prevent the shipbuilder from earning profit for these corrections in accordance with the share line. According to Coast Guard program officials, the Offshore Patrol Cutter’s planned warranty will function similarly to the FRC’s warranty.
According to Coast Guard officials, using this warranty will ensure that the shipbuilder pays for shipbuilder-responsible defects even though the planned contract is a fixed-price incentive type contract. Further, structuring the warranty as a separate line item in the contract ensures that any profit earned by the contractor to construct the ship due to the share line will be completely separate from the warranty work. Conclusions Shipbuilding is a complex endeavor, and a certain number of shipbuilder-responsible deficiencies can be expected after delivery. This is the case for the Navy and Coast Guard ships in our review and is also the case in the commercial shipbuilding industry. However, key differences exist in how these deficiencies are dealt with—specifically, whether the builder or the buyer pays for corrections. The value of a warranty or guaranty depends primarily on the government’s diligence in discovering defects after delivery (and ensuring the contractor corrects the defects), and the type and terms of the contract. Fixed-price incentive contract types used commonly in Navy shipbuilding, coupled with certain terms within the contract, result in the government paying to correct shipbuilder-responsible defects. Further, these contracting arrangements allowed the shipbuilder to earn profit from fixing deficiencies discovered after delivery. Partly because the guaranty is included in the target cost of the ship and is part of the construction line item of the contract subject to the share line, the shipbuilder earns the same level of profit for correcting defects as it does for building the ship. In addition, this contracting strategy obscures the Navy’s ability to track payments and defects associated with the guaranty because guaranty claims are not differentiated from other costs.
Further, the award of follow-on, cost-reimbursement arrangements to correct remaining defects—under which the contractor also earns fee (profit)—creates an apparent disincentive for quality ship construction. The Navy has no guidance that clearly explains the guaranty’s objective and, further, little instruction to contracting officers on how to implement the guaranty. Without an objective and associated guidance, the Navy is not well-positioned to ensure the effectiveness of the guaranty and associated policies. For example, it cannot know whether its strategy of including a low limitation of liability is effective. Although the FAR provides for consideration of certain factors in determining whether a warranty is appropriate for an acquisition and DOD requires documentation of why a warranty is or is not appropriate, the Navy has not considered or documented whether or not a warranty would be appropriate for its ships. Navy officials have set forth reasons why their historical approach lowers the overall construction cost of the vessel and results in a better deal than pricing a warranty. Their reasons could have merit. However, the Navy has no data on the full costs of correcting shipbuilder-responsible defects after delivery. Therefore, it is not positioned to know whether or not warranties would increase the price of its ships. Because the Navy has not historically used or considered warranties for its ships, it may be prematurely discounting their use as a mechanism to improve ship quality and cost. In this regard, there may be benefit in examining the practices of others, such as the Coast Guard, which has adopted a warranty approach for the FRC and plans to do so for its upcoming Offshore Patrol Cutter. Recommendations for Executive Action To improve the use of warranties and guarantees in Navy shipbuilding, we recommend that the Secretary of Defense direct the Secretary of the Navy to take the following three actions: 1.
In arrangements where the shipbuilder is paid to correct defects, structure contract terms such that shipbuilders do not earn profit for correcting construction deficiencies following delivery that are determined to be their responsibility. 2. Establish and document a clear objective for using a guaranty, and then create guidance for contracting officers that illustrates how to implement a guaranty that meets this objective. This guidance should describe how contracting officers should use aspects of the guaranty, including determining an appropriate limitation of liability, to achieve the objective and include considerations as to when a guaranty should be a separate contract line item. 3. For future ship construction contracts, determine whether or not a warranty, as provided for in the FAR, provides value, and document the costs, benefits, and other factors used to make this decision. To inform this determination, the Navy should begin differentiating the government’s and shipbuilder’s responsibility for defects and track the costs to correct all defects after ship delivery. Agency Comments and Our Evaluation We provided a draft of our report to the Departments of Defense and Homeland Security for review and comment. In its written comments, which are reprinted in appendix II of this report, DOD partially concurred with all three of our recommendations and stated that the issues will be addressed in a study to be completed by September 2016. The Department of Homeland Security had no comments. With regard to the first recommendation, DOD agreed that guaranty terms should be reviewed, but disagreed that shipbuilding contracts always result in the shipbuilder earning profit to correct defects determined to be the shipbuilder’s responsibility. The latter point is not in contention, as our report found that the extent to which the shipbuilder earns profit for shipbuilder-responsible defects depends upon the type and terms of each ship construction contract.
Nevertheless, as noted in the report, we found that for the four fixed-price incentive contracts in our review, the government paid for almost all shipbuilder-responsible deficiencies found after delivery. In its response, DOD stated that its policy changes, if any, should avoid resulting in higher overall costs to the government due to shipbuilder pricing associated with correcting defects. We agree; as we have found, in the case of the Coast Guard’s Fast Response Cutter and several private ship buyers discussed in this report, the use of a warranty did not increase the cost of the ship. For example, we found several ship buyers, in both government and private industry, that require extensive warranties when purchasing ships and often receive this coverage with little or no increase in cost. The Navy plans to conduct a study to further examine the details and determine what policy changes, if any, could be implemented to change the structure of non-cost reimbursable contracts. The Navy plans to coordinate its findings with DOD’s Defense Procurement and Acquisition Policy and complete the study by September 30, 2016. With regard to our second recommendation, DOD also agreed that the considerations underlying a decision to use a guaranty provision should be documented, and that formal policy guidance on implementing a guaranty would be helpful. DOD disagreed, however, that a single objective for using a guaranty would satisfy the needs of all potential shipbuilding programs. While we understand that each contract will require different warranty or guaranty contract structures and provisions, our recommendation was that the Navy define the basic purpose of its guaranty, which is currently lacking. As noted in the report, we found that different stakeholders within the Navy used the guaranty to accomplish differing and, at times, contradictory objectives. 
Without a clear, basic objective and associated guidance, the Navy is not well-positioned to establish provisions that ensure the effectiveness of the guaranty. The Navy plans to conduct a study to determine what policy and guidance changes, if any, are necessary to effectively implement warranty or guaranty provisions. Once again, the response states that the Navy will coordinate with DOD’s Defense Procurement and Acquisition Policy and complete the study by September 30, 2016. In its response to our third recommendation, DOD stated that the Navy’s planned study in response to our second recommendation will also consider whether policy changes would be beneficial. DOD noted, however, that the FAR already requires contracting officers to ensure that the benefits of using a warranty are commensurate with the costs and that the FAR prohibits warranties under cost-type contracts without authorization. We found no evidence during the course of our review that the Navy considers using a FAR-based warranty for its ship contracts. In addition, all of the ships in our review were purchased under fixed-price type contracts. DOD also disagreed with our finding that the Navy has not been tracking costs to correct deficiencies after delivery. As we state in the report, the Navy only tracks the costs of shipbuilder-responsible defects up to the limitation of liability, which is often set very low relative to the cost of correcting all defects. After the limitation of liability is reached, the cost to repair shipbuilder-responsible defects is not tracked. A final observation is that, in its response, DOD used the terms “warranty” and “guaranty” interchangeably. As we have found, however, these two mechanisms are very different, and none of the Navy ships in our review used a FAR-based warranty. As the Navy and DOD move forward with the planned study and with implementing any changes to policy or guidance, it will be important to make this distinction clear.
We are sending copies of this report to the Secretary of Defense, the Secretary of Homeland Security, the Secretary of the Navy, the Commandant of the Coast Guard, and appropriate congressional committees. In addition, the report is available on our website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or mackinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology This report assessed: (1) the extent to which the Navy and Coast Guard’s guaranty or warranty mechanisms reduce the government’s exposure to additional costs and improve the quality of basic ship construction, if any, as compared to commercial shipbuilding; and (2) the extent to which the Navy and Coast Guard use available acquisition regulation and guidance in implementing guarantees and warranties in shipbuilding programs. Our methodology for both objectives included reviewing the Navy and Coast Guard’s guaranty and warranty policies and procedures. To assess the implementation of these policies and procedures, we selected a non-generalizable case study analysis of six ships that encompassed the majority of all ship classes recently delivered to and accepted by the Navy and Coast Guard in the last five years. Although our sample is non-generalizable, the Navy told us that it uses the same guaranty for each ship and, though there are variations in contract type that affect how the guaranty works, the guaranty mechanism that we reviewed is used for all ships. Most of the contracts for the ships we reviewed were also used to purchase other ships in the class. So while we focused on the six case studies, the contracts and guaranty clauses we reviewed covered more than 20 ships.
The case studies were chosen to represent the majority of shipyards in the United States that build Navy vessels, including Austal USA in Mobile, Alabama; Bollinger Shipyards in Lockport, Louisiana; General Dynamics Bath Iron Works in Bath, Maine; Huntington Ingalls Industries Ingalls Shipbuilding in Pascagoula, Mississippi; and Marinette Marine Corporation in Marinette, Wisconsin. We chose to review ships that were purchased using fixed-price type and incentive-type contracts because warranties and guarantees on cost-reimbursement type contracts reimburse the shipbuilder for deficiencies, and because such contracts are used for immature shipbuilding efforts, whereas we wanted to look at the hull, mechanical, and electrical aspects of more mature ship construction efforts. We did not choose lead ships, since lead ships tend to be purchased using cost-reimbursement type contracts and usually have more and different quality issues than the rest of the class. As shown in table 5, the ships that met these criteria were the following: USS Michael Murphy (DDG 112) guided missile destroyer, USS Somerset (LPD 25) amphibious transport dock, USS Fort Worth (Littoral Combat Ship 3), and USS Coronado (Littoral Combat Ship 4). From the Coast Guard, we reviewed the following ships: USCGC Hamilton (National Security Cutter 4) and USCGC Paul Clark (Fast Response Cutter 6). Five of these ships had guarantees, while the FRC was the only ship with a warranty. We also made observations about the FRC class of ships beyond just FRC 6. To identify the extent to which the Navy’s guaranty mechanism reduces the government’s exposure to additional costs resulting from defective workmanship or equipment, we analyzed the costs to repair deficiencies after delivery for the six Navy and Coast Guard case studies. To determine the amount paid by the Navy and the Coast Guard for the correction of deficiencies, we examined each ship’s contract and calculated the amount paid in accordance with the contract.
In the cases where the ships were built using fixed-price incentive contracts, we calculated the costs paid by the government in the following manner: 1. We determined the estimated construction cost of the ship at completion based upon the government’s estimate at complete at the time the ship was delivered. Navy officials agreed that this is a reasonable estimation of what the ship will cost when the contract is officially closed out. The estimated construction cost at completion for LPD 25, LCS 3, LCS 4, and NSC 4 is the government’s estimate of the cost of the ships at the time the ship is delivered to the government. The actual final cost is determined several years later when the contract is closed. 2. If the share line applied, which it did for LCS 3 and LPD 25, we calculated each estimate at completion on the applicable portion of the share line to determine how much the government paid (cost and profit) for the portion of the ship to which the share line applied (in both cases this was most of the construction effort of which the guaranty is a very small segment). The share line did not apply to LCS 4 because the contractor’s limitation of liability for guaranty work was $0. In the case of NSC 4, the share line did not apply because the guaranty was on a non-construction line item that did not have a share line. Table 6 displays the share lines for LCS 3 and LPD 25. The share line applies to the guaranty work only until costs for guaranty work reach the limitation of contractor liability for correction of defects specified in each ship’s contract. 3. For the whole portion of the contract to which the share line applied, we calculated the added or reduced profit based upon the share line and the cost performance of the shipbuilder. 4. We then applied the cost performance of the shipbuilder under the share line to each portion of ship construction equally. 
For example, if a shipbuilder earned 10 percent profit on the portion of the ship construction to which the share line applied, we applied this profit percentage to the guaranty work. 5. We then combined what the government paid to correct defects per the share line with the amount, if tracked, of shipbuilder-responsible deficiencies the government paid for using other means such as post-delivery cost-reimbursement type arrangements. To understand the effect of the warranty or guaranty on ship quality, we reviewed documentation of defects after delivery, including trial cards, current ship maintenance project documentation, and data collected by shipbuilders. As reported, we found that these data were not reliable for our purposes of understanding the total amount the government paid to correct all shipbuilder-responsible defects. For example, the data did not contain cost information for all defects and did not identify whether the shipbuilder or the government was responsible for each defect. We also compared ships with guarantees to ships with warranties used in government and commercial shipbuilding programs to determine the extent to which each mechanism improves ship quality. We supplemented our information by interviewing Navy and Coast Guard program and contracting officials, as well as by visiting three U.S. private shipyards that build Navy and Coast Guard ships and speaking with shipyard representatives regarding these case studies. We also met with officials from several Navy offices responsible for building and delivering ships, including various Supervisor of Shipbuilding, Conversion and Repair (SUPSHIP) commands and Naval Sea Systems Command Contracting and Logistics directorates. We also met with officials from Navy and Coast Guard offices, such as the Program Executive Offices and the Coast Guard’s Project Resident Offices, to gain a full understanding of the execution of a warranty and guaranty.
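The share-line cost calculation described in the numbered steps above can be sketched as follows. All dollar figures and the 50/50 share ratio are hypothetical (the actual share lines for LCS 3 and LPD 25 appear in table 6 of the report), and the ceiling price is omitted for simplicity:

```python
def price_paid(target_cost, target_profit, actual_cost, gov_share=0.5):
    """Price the government pays under a fixed-price incentive share line:
    target price adjusted by the government's share of any cost overrun
    or underrun (ceiling price omitted for simplicity)."""
    overrun = actual_cost - target_cost
    return target_cost + target_profit + gov_share * overrun

# Step 1: estimated construction cost at completion at delivery.
target_cost, target_profit = 500.0, 50.0   # $ millions (hypothetical)
estimate_at_completion = 540.0             # $ millions (hypothetical)

# Step 2: amount the government pays (cost and profit) per the share line.
total_price = price_paid(target_cost, target_profit, estimate_at_completion)

# Steps 3-4: effective profit rate after the share-line adjustment,
# applied equally to each portion of ship construction, including
# guaranty work.
profit_rate = (total_price - estimate_at_completion) / estimate_at_completion

# Step 5: government's payment (cost plus profit) for guaranty work, up
# to the contract's limitation of liability, to be combined with any
# tracked post-delivery cost-reimbursement payments.
guaranty_cost = 1.0                        # $ millions (hypothetical)
gov_pays = guaranty_cost * (1 + profit_rate)
print(f"Effective profit rate: {profit_rate:.1%}")
print(f"Government pays for guaranty work: ${gov_pays:.2f} million")
```

On these assumed figures, a $40 million overrun split 50/50 yields a $570 million price, and the shipbuilder's effective profit rate carries through to the guaranty work, so the government pays cost plus profit to correct defects.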
To assess the extent to which the Navy and Coast Guard use available acquisition regulation and guidance in implementing warranties and guarantees, we reviewed and analyzed the ship construction contracts for each of the six case studies. We reviewed available guidance related to warranties and guarantees, including the FAR, the Department of Defense Warranty Guide, the SUPSHIP Operations Manual, and the Naval Sea Systems Command Contracts Directorate Book of Standard Component Clauses. We also reviewed federal standards for internal control related to designing control activities, and assessed cost estimating best practices guidance. Based on the available guidance, we developed a list of the principles and their objectives regarding the use of warranties and compared how these were implemented for the six case studies we reviewed. In addition, we participated in the government’s acceptance of FRC 13 from the shipbuilder (also known as ship delivery) in July 2015, which provided us with a better understanding of how the Coast Guard executes its warranty. Our methodology also included learning about key practices used by leading commercial ship buyers regarding the structure and implementation of warranties for newly constructed vessels. We compared the execution of the guaranty or warranty of our government ship case studies to the execution of warranties on commercial ships of similar size and construction complexity. Our comparisons with commercial shipbuilding focused on basic ship construction because the work is similar to hull, mechanical, and electrical work completed on Navy and Coast Guard ships. To do so, we interviewed leading ship buyers from the cruise, oil and gas, and commercial shipping industries, some of which also provided us with supporting documentation, such as contract clauses.
For the purposes of this review, the leading commercial ship buyers we spoke with included a number of companies that we identified in our previous work as leaders in their industry in terms of being large operators of cruise ships, oil and gas vessels, or containerships, and that agreed to participate in our review, including: Carnival Corporation and Royal Caribbean Cruises, Ltd.; Chevron Inc.; and A.P. Moller-Maersk A/S, respectively. We conducted this performance audit from December 2014 to March 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions, based on our audit objectives. Appendix II: Comments from the Department of Defense Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, the following staff members made key contributions to this report: Diana Moldafsky, Assistant Director; Laurier Fish; Joseph Franzwa; Ronald Freeman; Laura Greifner; Kristine Hassinger; Marie Suding; Roxanna Sun; and Abby Volk. Related GAO Products National Security Cutter: Enhanced Oversight Needed to Ensure Problems are Discovered during Testing and Operations are Addressed. GAO-16-148. Washington, D.C.: Jan. 12, 2016. Littoral Combat Ship: Navy Complied with Regulations in Accepting Two Lead Ships, but Quality Problems Persisted after Delivery. GAO-14-827. Washington, D.C.: Sept. 25, 2014. Coast Guard Acquisitions: Better Information on Performance and Funding Needed to Address Shortfalls. GAO-14-450. Washington, D.C.: June 5, 2014. Navy Shipbuilding: Opportunities Exist to Improve Practices Affecting Quality. GAO-14-122. Washington, D.C.: Nov. 19, 2013.
Best Practices: High Levels of Knowledge at Key Points Differentiate Commercial Shipbuilding from Navy Shipbuilding. GAO-09-322. Washington, D.C.: May 13, 2009. Weapons Acquisition: Warranty Law Should Be Repealed. NSIAD-96-88. Washington, D.C.: June 28, 1996.
The U.S. government spends about $17 billion per year building ships to support national defense and homeland security. Defects often become evident shortly after a ship is delivered. Warranties and guarantees are both mechanisms to fix defects for which shipbuilders are responsible. Warranties give the government a contractual right to direct the correction of defects at the contractor's expense. Guarantees are Navy-specific contractual mechanisms that provide for the correction of defects but, unlike warranties, are not covered in the FAR. The House report accompanying the National Defense Authorization Act for Fiscal Year 2015 included a provision for GAO to review warranties and guarantees in government shipbuilding programs. This report assesses (1) the extent to which warranties and guarantees reduce the government's exposure to additional costs and risks of poor quality and (2) how the Navy and Coast Guard use acquisition regulations and guidance to implement warranties and guarantees. GAO reviewed the Navy's and Coast Guard's guaranty or warranty practices and policies and selected six case studies, comprising four Navy ships—representing ships built in the last five years—and the two vessels the Coast Guard most recently purchased. For five of the six Navy and Coast Guard ships GAO reviewed, guarantees did not help improve cost or quality outcomes. While the type and terms of each contract determine financial responsibility for correcting defects, the government, in most of the cases GAO examined, paid shipbuilders to repair defects. For the four ships with fixed-price incentive type contracts and guarantee clauses, the government paid the shipbuilder 89 percent of the cost—including profit—to correct these problems.
This means the Navy and Coast Guard paid the shipbuilder to build the ship as part of the construction contract, and then paid the same shipbuilder again to repair the ship when defects were discovered after delivery—essentially rewarding the shipbuilder for delivering a ship that needed additional work. Navy officials stated that this approach reduces the overall cost of purchasing ships; however, the Navy has no analysis that supports this position. In contrast, the warranty on another Coast Guard ship—the Fast Response Cutter (FRC)—improved cost and quality by requiring the shipbuilder to pay to repair defects. The Coast Guard paid upfront for the warranty, which amounted to 41 percent of the total defect correction costs. The figure below shows, for the ships GAO reviewed, the portions of the millions of dollars in defect-correction costs paid by shipbuilders and by the government, as well as the differences in defect-correction arrangements. Although the Federal Acquisition Regulation (FAR) and Department of Defense guidance instruct programs to, respectively, consider and document the use of a warranty, the use of warranties is not mandatory, and the Navy does not consider using them for ship contracts. In contrast, the Coast Guard's FRC warranty, as well as that planned for another upcoming ship class, fosters quality performance by following the FAR warranty provisions. The Navy may be missing opportunities for savings by not considering use of warranties. Further, the Navy has no stated objective for its guarantees, and guidance for contracting officers is minimal as to when or how to use a guaranty. While the FAR does not apply to guarantees, according to federal internal control standards, government programs require objectives and guidance to ensure that they achieve the desired results.
Without a clear objective and guidance for using a guaranty and for determining when a warranty is appropriate in shipbuilding, Navy contracting officers do not have the information they need to make informed decisions regarding which mechanism is in the best interest of the taxpayer.
Background The Coast Guard’s Structure, Resources, and Missions Coast Guard’s Organizational Structure The Coast Guard employs a multi-level organizational structure, as shown in figure 1. The Coast Guard provides commanders at each level the authority and discretion to conduct operations within their operational areas. Command and control begins at Coast Guard headquarters, which is responsible for developing national strategies and policies for operations. However, Coast Guard headquarters does not exercise direct operational control of assets. Rather, the Commandant apportions this control to the two Area commanders. The two Area commanders—one for the Atlantic Area Command and one for the Pacific Area Command—are responsible for translating policy into operational objectives through theater plans for Coast Guard missions. The Coast Guard has nine districts that report to the Area Commands. District commanders are responsible for regional operations, and they assume tactical control of allocated resources to execute operations and missions within their areas of responsibility. The nine Coast Guard districts are supported by 37 sectors. Sector commanders are responsible for local operations within each district. Sector commanders assume tactical control of allocated resources to execute operations and missions within their areas of responsibility. Each of the Coast Guard Area commands, districts, and sectors is responsible for managing its assets and accomplishing missions within its geographic area of responsibility; for the purposes of this report, these commands are referred to as field units. Coast Guard Assets and Personnel The Coast Guard uses a variety of assets to conduct its mission responsibilities. The Coast Guard’s assets consist of aircraft and vessels. The Coast Guard operates two types of aircraft—fixed wing (airplanes) and rotary wing (helicopters).
Fixed wing aircraft operate from Air Stations and airports, whereas rotary wing aircraft operate from Air Stations, flight-deck-equipped cutters, or other locations that could support flight operations. Similarly, the Coast Guard operates two types of vessels—cutters and boats. A cutter is any vessel 65 feet in length or greater, having adequate accommodations for crew to live on board. Larger cutters (major cutters), over 179 feet in length, are generally under the control of Area Commands, and cutters 175 feet or less in length come under the control of District Commands. In contrast, all vessels less than 65 feet in length are classified as boats and usually operate near shore and on inland waterways. As of the end of fiscal year 2015, the Coast Guard’s assets included 61 fixed wing aircraft, 142 rotary wing aircraft, 40 major cutters, 205 cutters, and 1,750 boats. For a more detailed listing of these Coast Guard assets, see appendix I. To crew its aircraft and vessels and to plan, manage, and carry out its mission responsibilities, the Coast Guard relies on a staff of active duty, reserve duty, and civilian personnel. As of the end of fiscal year 2015, the Coast Guard had 54,425 employees—39,116 active duty personnel (6,566 officers, 1,728 Chief Warrant Officers, and 30,822 enlisted); 7,109 reservists; and 8,200 civilians. Coast Guard Strategic Commitments Strategic commitments are annual, up-front commitments of resources made at the headquarters level and are deemed by the Coast Guard as critical to the implementation of national, Department of Homeland Security, and Commandant strategic priorities. Among other things, strategic commitments specify the amount of time certain types of Coast Guard assets are to be operating in support of these activities, and these resource allocations serve as minimum levels of activity that field unit commanders are expected to provide.
An example of a strategic commitment is supporting counter drug missions in the Western Caribbean and Eastern Pacific in coordination with other federal law enforcement or Department of Defense agencies. Strategic commitments represent the Coast Guard’s highest priorities, so the Coast Guard allocates resources to these activities before it allocates the remaining resources to meet other field units’ missions. Coast Guard Missions The Coast Guard is responsible for 11 statutory missions, which are divided into non-homeland security and homeland security missions, as shown in table 2. The Homeland Security Act of 2002 requires that the authorities, functions, and capabilities of the Coast Guard to perform its missions be maintained intact and without significant reduction, except as specified in subsequent acts. It also prohibits the Secretary of Homeland Security from reducing “substantially or significantly…the missions of the Coast Guard or the Coast Guard’s capability to perform those missions.” Each fiscal year, the Coast Guard allocates resource hours to its field units for carrying out its 11 statutory missions based on the number and type of assets in those units at that time. During fiscal years 2010 through 2016, some missions were allocated more asset resource hours than others, as shown in figure 2. For example, for fiscal year 2016, the two missions with the highest allocation of asset resource hours were ports, waterways, and coastal security and aids to navigation. Conversely, the two missions with the lowest allocation of asset resource hours during that year were other law enforcement and marine environmental protection. Past Concerns about the Coast Guard’s Alignment of Resources to Meet Mission Needs In prior reports and testimonies, we have raised concerns about the Coast Guard’s difficulties in clearly and systematically allocating resources to accomplish its diverse missions. 
For example, in March 2004, we found that although the Coast Guard used a variety of mission performance measures, it did not have a systematic approach that would allow it to understand the linkage between resources expended and performance results achieved. We recommended, among other things, that the Coast Guard proceed with initiatives to account more completely for resources expended. In response, the Coast Guard developed the Mission Cost Model, which was to accurately capture the costs of mission-direct activities and the allocation of mission-support costs as they are incurred. We also previously reported that although the Coast Guard reports summary financial data by homeland security and non-homeland security missions to the Office of Management and Budget, as a multi-mission agency, the Coast Guard can be conducting multiple missions simultaneously. As a result, we stated that it is difficult to accurately determine the level of resources dedicated to each mission. The Coast Guard’s Process for Aligning Assets to Meet Mission Needs Recognizing the difficulty of determining resource needs in a multi-mission agency, the Coast Guard developed a process to help it better allocate its assets in line with its strategic commitments and statutory mission responsibilities. Specifically, since being implemented in fiscal year 2008, the Coast Guard has used the Standard Operational Planning Process (SOPP) for annually developing and communicating strategic commitments and allocating resource hours, by asset type (i.e., types of aircraft, cutters, and boats), throughout its chain of command for meeting mission needs. The SOPP is to provide guidance and direction, while preserving some autonomy for field unit commanders to conduct operations, as events require.
As shown in figure 3, as part of the SOPP, Coast Guard headquarters issues an annual Strategic Planning Direction, which is to be the primary mechanism for allocating resources and providing strategic direction to operational commanders at the Area, District, and Sector levels. To determine and plan for how assets are to be allocated, Coast Guard headquarters is to rely on mission priorities, data on historical and current-year mission performance, and operational and intelligence assessments. As part of the planning process, field commands are allocated resource hours by asset type to be used for meeting strategic commitments and executing the 11 statutory missions. The Strategic Planning Direction is annually disseminated to the two Area Commands, which are then to disseminate their own Operational Planning Directions through their command levels, with each District command developing its own plan to cover its area of responsibility. The Area commanders develop a plan known as the Area Operational Planning Direction and District commanders develop a district-level Operational Planning Direction. After assets are deployed, staff at the field units are to enter the assets’ actual resource hours used, by mission, into data systems. The asset resource hour data are consolidated on a quarterly basis as part of Operational Performance Assessment Reports. The historical and current-year operational data from these reports, as well as Planning Assessments, are to be communicated back to Coast Guard headquarters as part of the information to be used to develop the Strategic Planning Direction for the following year. Coast Guard Management Tools for Aligning Personnel to Meet Mission Needs The Coast Guard has also developed management tools to help it align its personnel with its missions.
In particular, the Coast Guard has developed the Manpower Requirements Determination (MRD) system and the Sector Staffing Model (SSM) to facilitate management decisions on personnel requirements (see table 3). Coast Guard’s Data Systems Used to Record Its Mission Activities The Coast Guard collects and reports the number of hours its assets—aircraft, cutters, and boats—spend conducting missions. Coast Guard field unit personnel are to record asset resource hours used to accomplish one or more missions, by mission category (such as domestic ice breaking or marine environmental protection operations), into one of two operational reporting databases. The Asset Logistics Maintenance Information System (ALMIS) and the Abstract of Operations System (AOPS) capture asset resource hour data to support mission responsibilities. According to Coast Guard instructions, field units are to record at least one type of activity, such as one of the Coast Guard’s 11 statutory missions, within 24 hours after an asset is deployed. Staff at the relevant field units are to review and certify that the data entered are accurate. After the data have been entered, the Coast Guard Business Intelligence system is used to extract and combine resource and performance data each quarter to create Operational Performance Assessment Reports. Data on resource hours used by field units’ assets are included in these reports and are part of the feedback component of the SOPP, whereby field units report data on asset usage to Coast Guard headquarters on a quarterly basis. The Coast Guard’s Process for Allocating Assets Has Limitations that Constrain Its Strategic Effectiveness Coast Guard Headquarters’ Strategic Planning Directions Reflect Asset Performance Capacities Rather Than Achievable Goals Coast Guard headquarters does not provide field units with strategic, realistic goals for allocating assets, by mission.
Rather, headquarters’ allocations of assets in the Strategic Planning Directions that we reviewed for fiscal years 2010 through 2016 were based on assets’ maximum performance capacities. For example, the Strategic Planning Directions allocated each Hercules fixed wing aircraft (HC-130H) 800 hours per year, each Jayhawk helicopter (MH-60T) 700 hours per year, and each 210-foot or 270-foot medium endurance cutter (WMEC) 3,330 hours per year, irrespective of the condition, age, or availability of these assets. As a result, as shown in figure 4, the asset resource hours allocated in the Strategic Planning Directions have consistently exceeded the asset resource hours actually used by Coast Guard field units during fiscal years 2010 through 2015. For example, in fiscal year 2015, the Strategic Planning Direction allocated a total of 1,075,015 resource hours for field unit assets whereas the actual asset resource hours used was 804,048 hours, or about 75 percent of the allocated hours for that year. Coast Guard field unit officials we spoke with and Coast Guard planning documents we reviewed indicate that the Coast Guard is not able to achieve the resource allocation capacities set by the headquarters’ Strategic Planning Directions for several reasons, including asset condition and unscheduled maintenance. The field unit officials told us they provide Coast Guard headquarters with information on their assets’ availabilities through Operational Planning Directions, Operational Performance Assessment Reports, and Planning Assessments. For example, in its Planning Assessment for fiscal years 2015-2016, an Area Command noted that one of its classes of cutters was 50 years old and the cutters were hampered by mechanical failures requiring emergency dry dock repairs resulting in reduced availability to carry out their missions during the year. 
In another example, an Area Command stated in its fiscal year 2015 Operational Planning Direction that based on historical use, it planned for 575 hours per vessel for one type of cutter instead of the 825 hours performance capacity, as specified in the Strategic Planning Direction. Further, district officials we interviewed told us that they do not expect to use all of the boat asset resource hours allocated to their units because they do not have sufficient crews available, or needed maintenance prevents them from operating the boats at their capacity resource hours. Our analyses of Coast Guard resource hour data across asset types for fiscal years 2011 through 2015 show that actual asset use differed by asset type, but overall fell below asset resource hour projected capacities, as shown in figure 5. During this time period, the percent difference between resource hour capacities and actual resource hours used for rotary-wing aircraft was relatively small—for example, about 7 percent fewer hours were used than allocated for fiscal year 2015. In contrast, the percent difference between boat resource hour capacities and actual boat resource hours used during fiscal years 2011 through 2015 was more sizable—for example, about 35 percent fewer hours were used than allocated for fiscal year 2015. Our review of Coast Guard planning documents and discussions with field unit officials also show that Operational Planning Directions developed by field unit commands can differ from headquarters’ Strategic Planning Directions. For example, officials from one district told us that based on their analyses, they determined that their district could realistically use only about two-thirds of the performance capacity hours for boats allocated for one mission.
Specifically, in fiscal year 2013, for the ports, waterways, and coastal security mission, the district’s Operational Planning Direction included 8,126 hours, or 63 percent, of the 13,000 hours allocated in headquarters’ Strategic Planning Direction, as shown in figure 6. The district officials stated that allocating 13,000 hours (total assets’ capacity) was not practical based on their analysis of the boat station locations and events requiring protection, among other things. District officials we met with told us that actual asset use for other missions was similarly below performance capacities, such as cutters that used about 75 percent of the capacity hours for the aids to navigation mission. These officials stated that the differences did not reflect an underutilization of their assets; rather, they considered the boat stations to be appropriately staffed with sufficient numbers of boats to meet mission demands. Thus, the capacity resource hours allocated to the district’s various missions through the SOPP do not align with the district’s actual asset resource hours used—as reported in the Operational Performance Assessment Reports. Because actual asset use has consistently fallen below asset performance capacities, headquarters’ Strategic Planning Directions have steadily overstated the amount of asset resource hours available to achieve the Coast Guard’s strategic commitments and missions, and there is not a direct alignment between the Coast Guard’s strategic goals and its prospects for achieving those goals. As a result, the headquarters’ strategic intent is not effectively communicated to field units when allocating asset resource hours. According to a Coast Guard Commandant Instruction, the SOPP is to effectively translate strategic intent to mission execution by, for example, issuing guidance and direction; setting performance targets; allocating resources; and providing effective feedback, including operational status and desired outputs. 
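The utilization rates cited in this section follow from a simple ratio of actual (or planned) resource hours to allocated capacity hours. A minimal sketch of that calculation, using the hour figures reported above (the function name is illustrative, not a Coast Guard system):

```python
def utilization_pct(actual_hours, allocated_hours):
    """Percent of allocated resource-hour capacity actually used."""
    return 100.0 * actual_hours / allocated_hours

# Fiscal year 2015, all field unit assets:
# 804,048 hours used of 1,075,015 allocated -> about 75 percent.
overall_fy2015 = utilization_pct(804_048, 1_075_015)

# One district's ports, waterways, and coastal security mission,
# fiscal year 2013: 8,126 hours planned of 13,000 allocated -> about 63 percent.
district_pwcs = utilization_pct(8_126, 13_000)
```

The same ratio underlies the roughly 7 percent and 35 percent shortfalls cited for rotary-wing aircraft and boats, respectively.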
The Coast Guard Instruction also states that the intent of the Operational Performance Assessment Reports is for operational commanders to inform pertinent stakeholders about their resource utilization and mission performance, identify operational gaps, and provide a forecast of operational requirements for the next 4 quarters. In addition to the Coast Guard Commandant Instruction, Standards for Internal Control in the Federal Government states that for an agency to run and control its operations, it must have relevant, reliable, and timely communications relating to internal, as well as external events. Moreover, agencies should use quality information to achieve objectives and address related risks. Quality information should be appropriate, current, complete, accurate, accessible, and timely. Agency management can then use this information to make informed decisions and evaluate performance in achieving key objectives and addressing risks. Further, agencies should internally communicate the necessary, quality information to achieve the agency’s objectives. Coast Guard headquarters officials told us that they use assets’ maximum performance capacities as a basis for asset allocations in the Strategic Planning Directions because (1) they do not have the necessary information and methods to realistically predict the operational availability of all assets, and (2) they need to identify assets’ maximum performance capacities available to field units in the event of needed surge operations or to respond to emergency situations. With regard to asset operational availability, the Coast Guard is in the process of implementing the Coast Guard Logistics Information Management System (CG-LIMS), which is intended to improve information on assets’ operational availability by consolidating its legacy logistics systems into one system, and providing timely and accurate information on the location, movement, and operational status of assets, among other things. 
According to Coast Guard officials, CG-LIMS could provide more centralized and systematic information on the operational availability of the assets, such as when assets are scheduled to be in maintenance during the year. For example, beginning in fiscal year 2017, one major cutter—a National Security Cutter—is expected to be out of commission for about 1 year for needed structural enhancements. An official from a Coast Guard Area Command stated that the Area Command has planned for the reduction in available resource hours for this cutter, but it would be useful to have more systematic information on operational availability across all assets. In December 2014, CG-LIMS began consolidating data on one aircraft type and the system is to expand to support all Coast Guard aircraft and some of its boats by the end of 2018. Coast Guard officials noted, though, that a decision has not yet been made to expand CG-LIMS to consolidate the logistics systems of other assets, such as cutters. The officials said that if the Coast Guard makes a determination to include all of its assets in the new CG-LIMS system, it should provide more systematic and centralized operational data across all assets. With regard to the use of asset capacities, we do not disagree that information on assets’ performance capacities can help inform decisions regarding surge operations or emergency situations. However, in addition to asset capacities, information on assets’ actual performance in the Strategic Planning Directions would more effectively communicate the Coast Guard’s strategic intent and more closely align asset allocations to the field units’ actual use of the assets in carrying out their various missions. For example, as stated earlier, one district had sufficient numbers of assets to meet demands in one mission while using about 25 percent fewer hours than its allocated capacity.
Coast Guard officials stated that although they consider Operational Performance Assessment Report data when determining the number of asset resource hours to allocate among the missions in the annual Strategic Planning Directions, they do not reduce the estimates of total asset capacity and align actual resource hour use accordingly. Until the Coast Guard implements CG-LIMS or another system for asset allocation, using current and accessible information from field units, such as Operational Performance Assessment Reports and Planning Assessments, to inform asset hour allocations in the annual Strategic Planning Directions—in addition to the asset performance capacities currently used—will better ensure that the Coast Guard is effectively communicating strategic intent to its field units, realistically identifying any operational limitations of its assets, and making more informed asset resource hour allocation decisions that are aligned with its strategic goals. Further, without this alignment, Coast Guard headquarters does not know the extent to which field units are effectively and meaningfully carrying out the intent of the Strategic Planning Directions, and field units do not have the benefit of headquarters’ strategic direction in terms of the actual use of their assets in carrying out missions. The Coast Guard is Taking Steps to Improve Data Quality for Resource Hours Used to Support Each Mission Coast Guard field officials we met with told us that total asset resource hours recorded in Operational Performance Assessment Reports are accurate, but noted that data on asset resource hours used to support each mission may not be accurate. As stated earlier, Coast Guard guidance states that units should report at least one primary employment category, such as one of the 11 statutory missions, for the time an asset is deployed. 
The officials told us that data on resource hours, by mission, for all assets may not be accurate because the Coast Guard does not have a systematic way for field units to (1) record time spent on more than one mission during an asset’s deployment or (2) consistently account for time assets spend in transit to designated operational areas. For example, officials from six of the nine Coast Guard districts we interviewed told us that they generally record one mission per asset deployment, even though each asset’s crew may have performed two or more missions during a deployment. Officials from the remaining three districts told us that if their assets’ crews perform more than one mission per deployment, the crews generally apportion the number of hours spent on each mission performed. Thus, for example, if a cutter is deployed on a ports, waterways, and coastal security mission and is diverted to an emergency search and rescue mission, the cutter’s crew would record the hours spent on each respective mission. The officials noted, though, that this may not be a consistent practice across all units. In September 2013, the Coast Guard began drafting guidance for field units to capture assets’ transit times in order to better account for both the direct and indirect costs of conducting missions. Area and district officials we met with told us that it is important to accurately capture the time an asset is in transit because, for example, it can sometimes take a number of days for a cutter to transit to an operational area to conduct its mission because of vast geographic areas of responsibility. As of February 2016, Coast Guard officials informed us that the Coast Guard was investigating potential solutions to enhance the current software and information technology systems’ capabilities, but did not have an estimated date for finalizing the guidance. 
The Coast Guard has acknowledged these data limitations and Coast Guard officials stated that the resource hour data were accurate enough for operational planning purposes. Further, the Coast Guard officials stated that the Coast Guard was in the process of determining how best to account for time spent by assets on multiple missions and in transit in order to obtain more accurate and complete data on the time assets spend conducting each of its missions. For example, in April 2014, the Coast Guard issued instructions to its field units to provide definitions, policies, and processes for reporting their operational activities and also established a council to coordinate changes among the various operational reporting systems used by different field units. These are positive steps and should help the Coast Guard address limitations that currently hinder its ability to accurately capture assets’ operational data. The Coast Guard Is Taking Steps to Track How Increased Strategic Commitments Affect Resource Hours Available for Other Missions In the headquarters’ Strategic Planning Directions, according to Coast Guard headquarters’ officials, the allocations of certain assets’ hours in support of strategic commitments have grown from fiscal year 2010 to fiscal year 2016, including commitments in support of the Coast Guard’s Western Hemisphere Strategy issued in September 2014 and the Department of Homeland Security’s Southern Borders and Approaches Campaign Plan issued in January 2015. These strategic commitments of assets are made at the headquarters level and, as stated earlier, are deemed critical to the implementation of national, Department of Homeland Security, and the Commandant’s strategic priorities. Headquarters and field unit officials we met with told us that it has become increasingly difficult to fulfill these growing strategic commitments when asset performance levels have generally remained the same or declined in recent years.
For example, one Area Command stated that its ability to meet the strategic commitments and other priority missions was severely strained because of concerns over the reliability of some cutters in its fleet that are 50 years old and operating beyond their useful service lives. Area command officials stated that after meeting these priority missions, it has been challenging to respond to threats within their areas of responsibility with the remaining asset resource hours. Further, the Coast Guard Commandant testified before a congressional subcommittee in February 2015 that the Coast Guard’s mission demands continue to grow and evolve and that given the age and condition of some of its legacy assets, the success of future missions relies on the continued recapitalization of Coast Guard aircraft, cutters, boats, and infrastructure. To address these challenges, the Coast Guard is taking steps to provide more transparency regarding asset resource hours needed to support strategic commitments and the remaining resource hours available to field unit commanders. For example, starting in fiscal year 2015, the Coast Guard began using a new data field to track the time assets spent supporting its Arctic strategy. Moving forward, these efforts will continue to be important if current trends continue—that is, actual asset performance levels remaining the same or declining and strategic commitments and other mission needs increasing. The Coast Guard Does Not Document the Extent to Which Risk Assessments Affect Asset Allocation Decisions The Coast Guard does not maintain documentation on the extent to which risk factors have affected the allocation of resource hours to missions through its Strategic Planning Directions.
For example, Coast Guard officials told us that the Coast Guard conducts a National Maritime Security Risk Assessment every 2 years to inform its asset allocations; however, the Coast Guard does not document how these risk assessments have affected asset allocation decisions across its missions. Further, these officials told us that they consider this risk assessment, as well as other information, such as intelligence reports, to establish planning priorities across its 11 statutory missions in the Strategic Planning Directions. The officials added that changes made to Strategic Planning Directions’ resource allocations, by mission, are discussed in verbal briefings but are not formally documented. Specifically, Coast Guard officials stated that the National Maritime Security Risk Assessment informs allocations for 7 of the 11 statutory missions. For the remaining 4 missions, the Coast Guard relies on other factors—such as historic use of asset resource hours by mission and field unit Planning Assessments—to inform allocations. Written statements provided to us by the Coast Guard indicate that all projections and changes to resource hours, such as changes made to allocations among missions in the Strategic Planning Directions, are to be documented throughout the planning process. In addition, SOPP guidance states that risk-informed methods and processes are to be incorporated to support establishing planning priorities across missions, performance targets, and force apportionment to better understand and articulate the impacts of shifting resources from one mission to another. Further, Standards for Internal Control in the Federal Government states that agencies should identify, analyze, and respond to changes and related risks that may impact internal control systems as part of the risk assessment process, and create and maintain documentation to provide evidence of the execution of these control activities.
Coast Guard officials told us that while they have identified, analyzed, and incorporated risk factors as part of the SOPP, it is not their practice to maintain documentation on the extent to which risk factors have affected resource allocation decisions. Without documenting how risk factors have informed the asset allocation decisions, the Coast Guard lacks a record to help ensure that its decisions are transparent and the most effective ones for fulfilling its missions given existing risks. The Coast Guard Has Made Progress in Determining Workforce Needs, but Lacks Priorities for Remaining Workforce Requirements The Coast Guard Has Made Progress in Determining Workforce Requirements, but Does Not Have a Plan to Prioritize Remaining Work As stated earlier, a manpower requirements analysis (MRA) is to turn documented mission requirements into manpower requirements, which a unit can use to compare against actual personnel assigned. As shown in table 4, for its 134 unit types, the Coast Guard had completed, as of December 2015, 9 MRAs along with the accompanying manpower requirements determinations (MRDs), as well as an additional 42 MRAs. According to Coast Guard officials, unit types can represent an asset, such as the National Security Cutter, or an office, such as the Office of Civilian Human Resources. In June 2015, Coast Guard officials told us that based on current staffing levels, they estimate it could take 10 years to complete baseline MRAs for all Coast Guard units and were working on a strategy to prioritize and complete them. Further, these officials said that they cannot meet the demand for MRAs in a timely manner and that the units that can fund a contractor to conduct an MRA are the ones that are most likely to be completed. As of February 2016, the Coast Guard had not made progress on this strategy or established a process for prioritizing the MRA workload.
Coast Guard guidance states that MRA sponsors—such as the heads of the 134 unit types mentioned above—are to use the data provided in MRAs and decide if the personnel requirements recommended by the MRAs are feasible in the context of the program’s overall strategies, goals, and objectives. Coast Guard MRA guidance states that the Coast Guard should seek efficient staff, overhead, and support organizations with a goal of ensuring that high priority mission activities are fully supported. Further, the Coast Guard’s January 2016 Human Capital Strategy states that when an adjustment to personnel strength or competencies is necessary, the MRD process is the primary tool to be used by planners to define the human capital required to accomplish the mission. The Standard for Program Management calls for agencies to engage in (1) resource planning to determine which resources are needed and when they are needed to successfully implement the program, and (2) resource prioritization to allow the program manager to prioritize critical resources that are not available in abundance and to optimize their use across all program components. A Coast Guard official in charge of MRAs and MRDs told us in December 2015 that the Coast Guard has not issued the strategy because it does not have sufficient resources. In particular, the official noted that the Coast Guard does not have enough staff and lacks a system to store analyses from previously completed MRAs—such as standard workweek calculations for different personnel—that could help analyze the MRA workload and facilitate better risk management decision making.
Because the Coast Guard does not have a systematic process that allows it to prioritize critical resources and to optimize their use across all program components, it faces risks in its ability to identify and prioritize the most important MRAs to complete and does not have reasonable assurance that the high priority mission activities are fully supported with the appropriate number of staff possessing the requisite mix of skills and abilities. Most Staffing Changes Identified in the Sector Staffing Model Are to Be Implemented by the End of 2017 In 2012, the Coast Guard implemented what it called the Sector Staffing Model (SSM) to redistribute and balance existing personnel across its sectors, based on its analyses of the sectors’ workloads from about 2009 through 2012. Coast Guard officials told us that, given overall limited resources, the sectors were staffed at lower staffing levels than were identified in the SSM. Officials we interviewed at the two Area Commands and nine districts stated that they thought that implementing the SSM was an important step in analyzing sector workload and balancing personnel across the sectors to meet workload demands. In total, the SSM involved the redistribution of about 1,400 positions, including about 1,280 active duty and 122 civilian personnel. Coast Guard officials told us that beginning in 2014, active duty positions identified in the SSM began to be redistributed through normal active duty transfer cycles and the officials noted that they expected all active duty position redistributions to be completed by the end of 2017. As of the end of 2015, 1,167 of the 1,280 active duty positions and 57 of the 122 civilian positions identified in the SSM had been redistributed. According to business rules the Coast Guard established for implementing the SSM, changes to civilian positions identified by the SSM did not require mandatory transfers or positions to be vacated in order to minimize disruption to the civilian workforce. 
This has resulted in staffing challenges for some field units. Officials we spoke with at seven of the nine districts stated that they faced staffing challenges because targeted civilian positions could not be redistributed until the civilians voluntarily transfer to a different position or retire. For example, officials from one district told us that one of its sectors was waiting for a civilian specialist to help manage hazardous materials, but they could not fill the position until a targeted civilian position was vacated. Further, officials at another district told us that one of its sectors was waiting for a civilian port security specialist, but the sector could not fill this position until a civilian administrative position was vacated in another sector. Because the business rules state that changes to civilian positions were not mandatory, it could be a number of years before some civilian positions are vacated, if the civilians in those positions have no desire to move to a different position and have years to work before they retire. Coast Guard headquarters officials told us they recognized that staffing gaps would remain in some civilian positions after SSM implementation, but noted they were waiting for the normal active duty transfer cycles to be completed by the end of 2017 before considering any updates to the SSM. Further, the officials said they were cognizant of the difficulties that some field units are facing since some needed civilian positions have not been filled or some unneeded civilian positions have not been vacated as identified in the SSM. These officials noted, though, that SSM business rules state that field units can request headquarters’ consideration of staff reprogramming proposals to make changes to their existing staff to align with the SSM and that they have been working to rectify these staffing imbalances and accommodate staffing changes as field units make staff reprogramming requests. 
Conclusions Given the declining availability of its aging assets and the constrained budgets in recent years to replace legacy assets, together with growing strategic commitments, the Coast Guard will continue to face critical decisions about how to best allocate its limited assets to meet its mandated mission responsibilities. The Coast Guard uses the Standard Operational Planning Process (SOPP) to allocate asset resource hours to its field units for meeting their missions, but this planning process allocates maximum asset resource hour capacities and does not also include more realistic operational targets. However, by incorporating data that field units provide to Coast Guard headquarters on assets’ performance—such as Operational Performance Assessment Reports and Planning Assessments—to inform asset hour allocations in headquarters’ annual Strategic Planning Directions, the Coast Guard would be better positioned to ensure it is identifying any operational limitations of its assets, making more informed asset resource hour allocation decisions, and more effectively communicating strategic intent to its field units. The Coast Guard does not maintain documentation on the extent to which risk factors have affected the allocation of asset resource hours to missions through its annual Strategic Planning Directions. Without such documentation, the Coast Guard lacks transparency and a record to help ensure that its asset allocation decisions are the most effective ones for fulfilling its missions given existing risks. The Coast Guard has developed management tools, such as manpower requirements determinations, to help it strategically align its personnel with its missions, but Coast Guard officials state they cannot meet the demand for these analyses and have not established a process to prioritize them because they do not have sufficient staff and lack a system to help analyze the MRA workload. 
Because the Coast Guard does not have a systematic process for identifying and prioritizing the most important manpower requirements analyses to complete, it does not have reasonable assurance that the highest priority missions are fully supported with the appropriate number of staff possessing the requisite mix of skills and abilities.

Recommendations for Executive Action

We recommend that the Commandant of the Coast Guard take the following three actions:

To improve the strategic allocation of assets, the Coast Guard should incorporate field unit input, such as information on assets' actual performance from Operational Performance Assessment Reports and Planning Assessments, to inform more realistic asset allocation decisions—in addition to asset performance capacities currently used—in the annual Strategic Planning Directions to more effectively communicate strategic intent to field units.

To improve transparency in allocating its limited resources, and to help ensure that its resource allocation decisions are the most effective ones for fulfilling its missions given existing risks, the Coast Guard should document how the risk assessments conducted were used to inform and support its annual asset allocation decisions.

To ensure that high priority mission activities are fully supported with the appropriate number of staff possessing the requisite mix of skills and abilities, the Coast Guard should develop a systematic process that prioritizes manpower requirements analyses for units that are the most critical for achieving mission needs.

Agency Comments and Our Evaluation

In April 2016, we requested comments on a draft of this report from the Department of Homeland Security (DHS) and the Coast Guard. Both DHS and the Coast Guard provided technical comments, which we have incorporated into the report, as appropriate. In addition to its technical comments, DHS provided an official letter for inclusion in the report, which can be seen in appendix II.
With regard to the first two recommendations, the Coast Guard stated that it was taking actions, such as incorporating field unit input contained in Operational Performance Assessment Reports and Planning Assessments, and documenting how risk assessments conducted were used to inform its annual asset allocation and program direction to field units. If implemented as described in the fiscal year 2017 Strategic Planning Direction to be issued by October 2016, these actions would meet the intent of these recommendations. Further, with regard to the third recommendation, the Department stated that the Coast Guard would be prioritizing manpower requirements analyses of unstudied units and incorporating all available manpower data into future personnel decisions, as resources permit, by October 2016. If implemented as described, this would meet the intent of the recommendation. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Homeland Security and the Commandant of the Coast Guard. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7141 or groverj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
Appendix I: Coast Guard Assets as of the End of Fiscal Year 2015

Appendix II: Comments from the Department of Homeland Security

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact above, Christopher Conrad, Assistant Director; Nancy Kawahara, Analyst in Charge; Dominick Dale; Michele Fejfar; Holly Halifax; Eric Hauswirth; Carol Henn; Bonnie Ho; Tracey King; Ying Long; Alexandra Squitieri; and John Yee all made key contributions to this report.

Related GAO Products

Coast Guard Acquisitions: Enhanced Oversight of Testing Could Benefit National Security Cutter Program and Future DHS Acquisitions, GAO-16-314T. Washington, D.C.: February 3, 2016.

National Security Cutter: Enhanced Oversight Needed to Ensure Problems Discovered during Testing and Operations Are Addressed, GAO-16-148. Washington, D.C.: January 12, 2016.

Coast Guard: Timely Actions Needed to Address Risks in Using Rotational Crews, GAO-15-195. Washington, D.C.: March 6, 2015.

Coast Guard Acquisitions: Better Information on Performance and Funding Needed to Address Shortfalls, GAO-14-450. Washington, D.C.: June 5, 2014.

Coast Guard: Clarifying the Application of Guidance for Common Operational Picture Development Would Strengthen Program, GAO-13-321. Washington, D.C.: April 25, 2013.

Coast Guard: Portfolio Management Approach Needed to Improve Major Acquisition Outcomes, GAO-12-918. Washington, D.C.: September 20, 2012.

Coast Guard: Legacy Vessels' Declining Conditions Reinforce Need for More Realistic Operational Targets, GAO-12-741. Washington, D.C.: July 31, 2012.

Homeland Security: Observations on the Coast Guard's and the Department of Homeland Security's Fleet Studies, GAO-12-751R. Washington, D.C.: May 31, 2012.

Coast Guard: Action Needed As Approved Deepwater Program Remains Unachievable, GAO-11-743. Washington, D.C.: July 28, 2011.
Coast Guard: Observations on Acquisition Management and Efforts to Reassess the Deepwater Program, GAO-11-535T. Washington, D.C.: April 13, 2011.

Coast Guard: Deepwater Requirements, Quantities, and Cost Require Revalidation to Reflect Knowledge Gained, GAO-10-790. Washington, D.C.: July 27, 2010.

Coast Guard: Service Has Taken Steps to Address Historic Personnel Problems, but It Is too Soon to Assess the Impact of These Efforts, GAO-10-268R. Washington, D.C.: January 29, 2010.

Coast Guard: Better Logistics Planning Needed to Aid Operational Decisions Related to the Deployment of the National Security Cutter and Its Support Assets, GAO-09-497. Washington, D.C.: July 17, 2009.

Coast Guard: As Deepwater Systems Integrator, Coast Guard Is Reassessing Costs and Capabilities but Lags in Applying Its Disciplined Acquisition Approach, GAO-09-682. Washington, D.C.: July 14, 2009.

Coast Guard: Observations on Changes to Management and Oversight of the Deepwater Program, GAO-09-462T. Washington, D.C.: March 24, 2009.
Since the terrorist attacks of September 11, 2001, the Coast Guard has been charged with expanded missions. Further, constrained budgets in recent years have underscored the importance of strategically allocating its assets and personnel to meet these missions. GAO was asked to review the Coast Guard's resource allocation process. This report addresses the extent to which the Coast Guard: (1) employs an effective process to strategically allocate assets to meet its missions, and (2) has determined workforce requirements and addressed identified personnel needs. GAO reviewed Coast Guard planning and workforce requirements documents and asset performance data for fiscal years 2010 through 2015. GAO also discussed the planning process and personnel needs with Coast Guard officials at headquarters, as well as at the two Area and nine District Commands. The Coast Guard developed and uses the Standard Operational Planning Process to annually allocate asset (aircraft and vessel) resource hours to field units for meeting missions, but the headquarters' Strategic Planning Directions used in this process do not provide field units with strategic, realistic goals. Rather, headquarters' Strategic Planning Directions allocate maximum resource hour capacities for each asset—such as 700 hours per Jayhawk helicopter per year. As shown below, these asset allocations have consistently exceeded the asset resource hours actually used by field units. By better incorporating data on assets' actual use that field units provide to Coast Guard headquarters—such as Operational Performance Assessment Reports—to inform asset allocation goals in its Strategic Planning Directions, the Coast Guard would better ensure that it effectively communicates strategic intent to its field units and makes more informed asset allocation decisions that are aligned with its strategic goals.
The Coast Guard has developed management tools, such as manpower requirements analyses, to help it determine workforce requirements and help align its personnel with its missions. However, a Coast Guard official responsible for these analyses stated that the Coast Guard cannot meet the demand for these analyses because it does not have sufficient staff or a system to help analyze and prioritize the manpower requirements analyses that need to be completed. Without a systematic process for prioritizing the most important manpower requirements analyses to complete, consistent with leading program management practices, the Coast Guard does not have reasonable assurance that the highest priority missions are fully supported with the appropriate number of staff possessing the requisite mix of skills and abilities.
Background

SBA administers the 8(a) program, which includes determining whether it will accept requirements into the 8(a) program. Agencies must submit an offer letter to SBA identifying the requirement as well as any procurement history for the requirement, the estimated dollar amount, and, if the award is sole-source, the name of the particular 8(a) firm. SBA requires that agencies keep follow-on acquisitions in the 8(a) program until SBA releases them from the program. Our prior work—going back to 2006—has found that contracting officials have turned to tribal 8(a) firms as a quick, easy, and legal method of awarding contracts of any value, but that these benefits are not without oversight challenges for SBA, which is responsible for managing 8(a) firms' participation in the program. For example, in January 2012, we found that SBA lacked the data and insight to enforce some of the new requirements put in place to ensure that tribal 8(a) firms do not operate in the program in perpetuity. We made a number of recommendations to improve oversight of tribal 8(a) firms, such as ensuring that a new SBA 8(a) database has the capability to track information on 8(a) contracts to help ensure SBA officials have the information necessary to enforce the 8(a) program regulations. SBA has taken some actions to address these recommendations, but others have not yet been addressed. In March 2016, we found that SBA's oversight of tribal 8(a) firms' participation in the 8(a) program continued to be a challenge and made several recommendations to help improve its oversight strategy. Our prior reports have also found that the number of sole-source 8(a) awards over $20 million at DOD overall has significantly declined over time—from 50 in 2008 to 4 in 2013. In December 2012, we reported that there was confusion regarding the requirements of the 8(a) justification.
We recommended that the Administrator of the Office of Federal Procurement Policy (OFPP) promulgate guidance to clarify the circumstances in which an 8(a) justification is required. OFPP, as chair of the FAR Council, which oversees changes to the acquisition regulations, is in the process of changing the regulations to address our recommendation.

DOD Sole-Source 8(a) Contract Awards over $20 Million Have Declined While the Number of Competitive 8(a) Awards Has Increased in Recent Years

During fiscal years 2006 through 2015, the number of sole-source 8(a) contract awards over $20 million at DOD began to decline in 2011 and remained low through 2015, while the number of competitive contract awards over $20 million increased in recent years. Consistent with findings from our past reports, we found that sole-source awards generally declined in both number and value since 2011, when the 8(a) justification requirement went into effect. DOD awarded 22 of these contracts from fiscal years 2011 through 2015, compared to 163 such contracts in the prior 5-year period (fiscal years 2006 through 2010). The most common products and services acquired through 8(a) contracts over $20 million—under both competed and sole-source contracts—are construction, facilities support services, and engineering services. Figure 1 shows the decline in sole-source 8(a) contracts over $20 million and trends in competitively awarded 8(a) contracts of the same size. Pursuant to the mandate, we reviewed DOD's March 2015 report to Congress and found that DOD reported a different number of sole-source 8(a) contract awards over $20 million than what we found for fiscal years 2011 through 2013. DOD reported awarding 23 such contracts in fiscal year 2011, 8 in 2012, and 5 in 2013, while we identified 13, 3, and 4 such contracts in those fiscal years, respectively.
Consistent with our findings, DOD found no sole-source 8(a) contracts over $20 million awarded in fiscal year 2014 and a general decline in such contract awards since 2011. Based on our analysis of DOD's report and discussions with the official responsible for the report, the differences are due to DOD's inclusion of individual task or delivery orders in its count of sole-source 8(a) contracts. We did not include individual task or delivery orders in our numbers because they are not subject to the 8(a) justification. We found that DOD awarded two sole-source 8(a) contracts over $20 million, with a total value of over $87 million, in fiscal year 2015; for one of these, the 8(a) justification was not completed as required. In both cases, the contracting officers explained that they used sole-source 8(a) contracts because they were pressed for time. In one case, a Marine Corps contracting officer awarded a $24 million sole-source 8(a) contract for vehicle maintenance and repair to the incumbent contractor. However, the contracting officer did not complete an 8(a) justification as required. This contracting officer told us that the previous contracting officer for the requirement had retired without warning and there was limited time to award the follow-on contract. The contract went through several levels of review, including general counsel, and no one realized that a justification was required. She explained that she had very limited experience awarding 8(a) contracts and was not aware of the required 8(a) justification. In fact, our analysis of FPDS-NG data showed that this office had not awarded any other sole-source 8(a) contracts over $20 million since fiscal year 2006. We did not pursue this issue further because the contracting officer recognized that more training was needed and noted that the director of her office was developing training to address this issue.
She further noted that there will not be a follow-on contract for this particular requirement, as the service is no longer needed. In the other case, Army officials said that they had a limited amount of time to ensure continuity of an engineering services contract for $63 million after a contract awarded through full and open competition was protested twice by the previous vendor. Contracting staff said that a sole-source 8(a) contract, which was not awarded to the previous vendor, was the only way to meet the requirement without a gap in services. Officials explained that they decided to shorten the period of performance on the sole-source contract from 5 years to 3 years and that the follow-on contract will be competed among 8(a) firms. Our analysis of the period of performance for sole-source and competitive 8(a) contracts over $20 million from fiscal years 2006 through 2015 showed that the average length of sole-source contracts has declined since 2012, while the average length of competitive contracts has remained more consistent over the time period. In fiscal year 2006, the average period of performance for a sole-source 8(a) contract was 4.8 years; beginning in fiscal year 2012, when the average was 3.7 years, it declined steadily, reaching 2.1 years by fiscal year 2015. For competed 8(a) contracts over $20 million, the contract length has consistently averaged between 4 and 5 years since 2007. Since 2011, tribal 8(a) firms have won an increasing number of competitively awarded 8(a) contracts over $20 million at DOD. Although these firms represent less than 10 percent of the overall pool of 8(a) contractors, the number of competitively awarded DOD 8(a) contracts over $20 million to tribal 8(a) firms grew from 26 in fiscal year 2011—or 20 percent—to 48 contracts in fiscal year 2015—or 32 percent of the total.
In addition, since 2011, tribal 8(a) firms have consistently won higher value awards than other 8(a) firms for competitive 8(a) contracts over $20 million. In fiscal year 2015, the average award size of a competed 8(a) contract to a tribal 8(a) firm was $98 million, while other 8(a) firms had an average award size of $48 million. See appendix II for more data on awards to tribal versus other 8(a) firms.

DOD Officials Cited a Variety of Reasons They Are No Longer Awarding Sole-Source 8(a) Contracts over $20 Million

Contracting officials from the 14 offices in our review stated that there is, in general, a renewed agency-wide emphasis on competition. Whereas in the past they used sole-source 8(a) contracts to quickly, easily, and legally meet agency needs—as we have previously reported—officials explained that awarding large sole-source 8(a) contracts is less palatable in the current environment. Our review of 9 sole-source 8(a) contracts and their follow-on procurements supported this view. Five of the 9 sole-source 8(a) contracts we reviewed were competed in the follow-on contracts, and, for most of the remaining 4 contracts with follow-on sole-source contracts, officials stated that they plan to competitively award future procurements or are already in the process of doing so. In addition, DOD officials from almost half of the offices noted that a decline in their budgets or a decline in the size of their requirements rendered the 8(a) justification not applicable because most of their contracts fall below the $20 million threshold. Finally, DOD officials had varying opinions about whether the 8(a) justification was a deterrent to awarding large sole-source 8(a) contracts. Some noted that the 8(a) justification review process would deter them, while others said they would award a sole-source contract over $20 million if they found that only one vendor could meet the requirement.
Preference for Competition and Decreasing Requirements Reported as Primary Reasons for Decline

DOD officials we spoke with said that they prefer to compete high-dollar awards and reported a renewed agency-wide emphasis on competition, which steered them away from awarding large sole-source 8(a) contracts. Our prior work has found that agencies liked to use the unique provisions of tribal 8(a) contracting because they could quickly, easily, and legally award contracts to meet agency needs. However, in 2012, we reported that contracting officers—including DOD contracting officers—were moving away from sole-source contracts to tribal 8(a) firms and toward competition. We noted examples where follow-on requirements were subsequently competed, resulting in estimated savings, according to agency officials. We also reported that, with regard to Alaska Native Corporations, the Acting Deputy Assistant Secretary of the Army (Procurement) issued a memo in January 2011 citing the increased attention around the Army's extensive use of sole-source contracts awarded to 8(a) Alaska Native Corporation firms. The memo stated that high-dollar sole-source awards to 8(a) Alaska Native Corporation firms should be the exception rather than the rule and laid out the expectation that these awards be scrutinized to ensure they are in the government's best interest. Officials we spoke with during this review echoed some of these sentiments—noting that they had to consider whether the benefits of awarding a large sole-source 8(a) contract outweighed the negative aspects. For example, DOD officials from four offices stated that they prefer competition because it is easier to determine price reasonableness as compared to a sole-source procurement. Some of these officials noted that it is harder to negotiate with vendors under a sole-source approach, especially when they lack the staff to handle complex negotiations associated with larger contracts.
One official also said that he prefers competition because the process of the contracting office performing market research and outreach to 8(a) contractors promotes transparency, in contrast to larger sole-source 8(a) contracts directed to specific vendors which had faced increased scrutiny due to allegations of fraud. Of the 9 sole-source 8(a) contracts we reviewed that had follow-on awards, 5 had follow-on contracts that were competitively awarded, while the other 4 had follow-on contracts that were awarded on a sole-source basis. All of the sole-source follow-on contracts had values less than $20 million. See figure 2. Of the 5 contracts where the follow-on requirement was competed, the contract values and circumstances varied, as illustrated by the following examples: The Air Force awarded a 5-year $76 million competitive follow-on contract for base operations support to replace a 10-year $523 million sole-source contract. Officials told us that they made a concerted effort to engage different 8(a) firms for the follow-on contract because their market research indicated that multiple firms could meet the requirement. Although they had to award two short-term sole-source 8(a) bridge contracts to the incumbent to meet the requirement while preparing the competitive follow-on award, once the requirement was opened for competition, they received proposals from seven vendors and awarded the competitive contract to a new vendor. In addition to shortening the period of performance, they removed a construction component of the requirement, and officials told us the removed component is now being met through a multiple award IDIQ contract that was competitively awarded to six vendors. The contracting officials also stated that they were exploring possible ways to insource other functions previously performed under the sole-source contract. 
The Army competitively awarded a 4-year $140 million contract for base operations support, which was previously met by a 10-year $397 million sole-source 8(a) contract. The contracting office had to award three short-term sole-source bridge contracts to the incumbent vendor to provide more time to prepare a competitive follow-on award. However, during the competitive process, six 8(a) firms competed for the follow-on award and the current contract value is about $7 million less annually than the value of the most recent sole-source bridge contract. The follow-on award went to an 8(a) firm owned by the same tribal entity as the incumbent firm. The Navy awarded a competitive follow-on to a $49 million sole-source contract for engineering services. The requirement is now met by a multiple award IDIQ contract to six 8(a) vendors with a total value of $99 million. A Navy contracting official told us that the prior sole-source 8(a) contract was one of many contracts providing similar services across this particular command, and, in 2011, the contracting office implemented a new contracting strategy that would increase competition and meet its small business goals. This official noted that the multiple award contract is intended to avoid duplicative contracts across various contracting offices and that this new strategy is expected to result in cost savings. Officials from almost half of the 14 offices we spoke with noted that a decline in budgets or a decline in the size of requirements rendered the 8(a) justification not applicable because most of their contracts fall below the $20 million threshold. This was the case in all 4 of the contracts with sole-source follow-on contracts we reviewed, where requirements are currently being met by sole-source 8(a) contracts under $20 million, which are not subject to the 8(a) justification.
The Army awarded a $17 million sole-source 8(a) contract for systems engineering services as a follow-on to a $31 million sole-source contract. An Army official told us that the requirement decreased due to management changes and anticipated growth in need that never materialized. As follow-ons to a 5-year $319 million sole-source 8(a) contract for pre-deployment training exercises, the Marine Corps awarded 2 one-year-long sole-source 8(a) contracts, slightly under $10 million apiece, within a 6-month period to the same vendor. The current contracting officer did not know why the previous contracting officer awarded separate contracts for this requirement. The market research for the more recent contract stated that there used to be a greater need for pre-deployment training and estimated that funding for the requirement had decreased from $85 million annually to less than $10 million in recent years. The contracting officer told us that he intends to competitively award future follow-on contracts. Army officials told us that they now use a series of smaller contracts to meet a technical support services requirement that used to be met by a $50 million sole-source contract. Officials explained that the change in acquisition strategy was due to a decline in the need for these services and an increase in efficiency from using various contracting vehicles, such as smaller sole-source 8(a) contracts, competitive 8(a) awards, and task orders from an existing government-wide IDIQ contract, to meet the requirement. They added that they preferred this strategy to the original $50 million contract, which was designed to cover a wide swath of requirements, and that they want to compete more contracts to replace the smaller sole-source contracts in the future.
Even though the FAR states that competition is the preferred contracting approach, we previously reported—and officials reaffirmed during this review—that large sole-source 8(a) contracts were an expedient way to respond to an increased workload and volume of requirements in the mid to late 2000s. For example, an official at one office we spoke with said that the total value of the contracts awarded out of his office each year had increased from $200 million to $1.2 billion over a 10-year period while staff levels remained constant. Now that the growth in requirements has slowed while staff levels have risen and the workload has become more manageable, the contracting officers have more time to prepare competitive awards.

DOD Officials' Opinions Varied on Whether the 8(a) Justification Will Affect Future Awards of Sole-Source 8(a) Contracts over $20 Million

DOD contracting and small business officials had differing views about whether the 8(a) justification was a deterrent to awarding sole-source 8(a) contracts over $20 million. A few officials noted that the 8(a) justification is "not that hard to write," but it does act as a deterrent because people have to be willing to review and sign it, which could slow down the contracting process. Others said that the ability to award sole-source 8(a) contracts is "another tool in the toolbox" that they would be open to using, even with the justification requirement, if their market research results showed that only one 8(a) firm could perform the work. However, a Navy official noted that a sole-source 8(a) acquisition strategy would be hard to justify because her office typically awards contracts for services, such as engineering, that are likely to have multiple vendors that can meet the requirement. She noted that some of the vendors that were previously awarded sole-source contracts are now winning competitively awarded contracts as part of the office's shift toward competition.
Army officials responsible for one of the contracts in our review told us that they viewed the 8(a) justification as a means of increasing transparency and that it was not overly burdensome. In fact, they explained that they had recently completed an 8(a) justification for a contract awarded in December 2014—not in our sample—that they modified to exceed $20 million in potential obligations after the original contract was awarded for less than $20 million. The justification explained that the modification would extend the length of the contract and allow enough time for the contracting office to award a competitive follow-on contract without a gap in service. We have previously reported that contracting officials told us that it was not clear to them if a justification would be required for modifications to sole-source 8(a) contracts. We recommended that the Administrator of OFPP promulgate guidance to clarify circumstances in which an 8(a) justification is required, including cases when the contract is modified. OFPP generally agreed with our recommendation and is in the process of amending the FAR to address this issue. Clarifying the regulations for these instances could help; we found in our analysis of FPDS-NG data that DOD awarded six sole-source 8(a) contracts with base and all options values between $19 million and $20 million in fiscal year 2014 and eight in fiscal year 2015. If these contracts experience price growth, contracting officers may look to the updated regulations for guidance on whether an 8(a) justification is necessary.

Agency Comments

The Department of Defense had no comments on a draft of this report. We are sending copies of this report to the appropriate congressional committees and the Secretary of Defense. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or MackinM@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

The objectives of this review were to determine (1) trends among Department of Defense (DOD) sole-source and competitive 8(a) awards over $20 million from fiscal year 2006 through fiscal year 2015; and (2) the factors to which DOD officials attribute these trends. The Consolidated and Further Continuing Appropriations Act, 2015 contained a provision for us to assess the impact of the justification and approval for sole-source 8(a) contracts over $20 million, which we refer to as the 8(a) justification. The mandate also contained a provision for us to evaluate a DOD report—also mandated in this law—on the effect of the 8(a) justification on sole-source 8(a) contracts awarded over $20 million. To do so, we reviewed DOD's March 2015 report on sole-source 8(a) contracts over $20 million and compared the number of contracts DOD reported that met this criterion to the number of contracts we identified. To determine the trends among DOD sole-source and competitive 8(a) awards over $20 million, we analyzed contract data from the Federal Procurement Data System-Next Generation (FPDS-NG) for contracts awarded from October 1, 2005 through September 30, 2015. We used the Base & All Options and obligations values to identify contracts over $20 million, as well as contracts between $19 million and $20 million. We took several measures to assess the reliability of the FPDS-NG data: We found that many of the competed 8(a) contracts in our dataset were multiple award contracts in which more than one vendor was awarded a contract for a single solicitation.
We rolled up the multiple award indefinite-delivery indefinite-quantity contracts in the FPDS-NG dataset into a single entry per requirement to avoid double counting Base & All Options values in the summations of total contract values. Because we have not previously reported on the contract value data for competed 8(a) contracts, we took additional steps to assess the reliability of these data. We reviewed 36 8(a) contracts over $20 million that were coded as competed—which represents approximately 3 percent of the total number of competed contracts for fiscal years 2006 through 2015—for data reliability purposes and found 1 sole-source contract that was miscoded as competed. We also confirmed contract values for 15 competed 8(a) contracts that were part of the same solicitation but had different contract values. We obtained these contracts using DOD’s Electronic Document Access website, a repository for DOD contracts and related documents. We also compared the data reported in FPDS-NG to the information in the contract files we reviewed. We determined that the data for this period were sufficiently reliable to identify competed and sole-source 8(a) contracts over $20 million and describe trends in these contracts. To identify what is being purchased with 8(a) contracts over $20 million, we identified common products and services using North American Industry Classification System (NAICS) codes. The NAICS assigns codes to all economic activity within 20 broad sectors, and the codes reflect the industry in which the firm operates, e.g., wireless telecommunication carriers or industrial building construction. Our analysis of FPDS-NG data for trends among tribal 8(a) firms included firms owned by Alaska Native Corporations, Indian Tribes, and Native Hawaiian Organizations. 
To determine the factors to which DOD officials attribute these trends, we interviewed officials and reviewed contract files from 14 DOD contracting offices that met the following criteria:

The 10 contracting offices that awarded the most sole-source 8(a) contracts over $20 million from fiscal years 2006 through 2015. We identified these offices using FPDS-NG data and confirmed that the offices with the most contract awards also had a high total value of sole-source 8(a) contract awards, either in obligations or Base & All Options value.

Two offices that had large requirements previously met by sole-source 8(a) contracts. These contracts were identified in our past work on this topic, and we wanted to determine how these requirements are currently being met. We selected an Air Force base operations support services requirement and a U.S. Special Operations Command translation services requirement.

The two offices that awarded sole-source 8(a) contracts over $20 million in fiscal year 2015.

Our final sample of DOD contracting offices was as follows:

Defense Logistics Agency Troop Support, Philadelphia, Pennsylvania
Kirtland Air Force Base, New Mexico
Marine Corps Logistics Command, Albany, Georgia
Marine Corps Systems Command, Quantico, Virginia
National Guard Bureau, Arlington, Virginia
Space and Naval Warfare Systems Center Atlantic, North Charleston, South Carolina
U.S. Army Contracting Command, Natick, Massachusetts
U.S. Army Contracting Command, Redstone Arsenal, Alabama
U.S. Army Contracting Command – Aberdeen Proving Ground, Maryland
U.S. Army Corps of Engineers, Fort Worth District
U.S. Army Corps of Engineers, Philadelphia District
U.S. Army Corps of Engineers, Savannah District
U.S. Army Mission and Installation Contracting Command, Fort
U.S.
Special Operations Command, MacDill Air Force Base, Florida

For each of the 14 offices, we judgmentally selected one sole-source 8(a) contract over $20 million for review, and, where applicable, we reviewed subsequently awarded contracts for the same requirement, which we refer to as follow-on contracts. Specifically, we reviewed:

Nine sole-source 8(a) contracts that had follow-on contracts. We confirmed with agency officials that each contract had a follow-on procurement. We interviewed the contracting officials responsible for each contract and reviewed relevant documents from the contract file, including follow-on contracts, 8(a) justifications, acquisition plans, and other contract documents, to compare old and new contracting approaches and determine any effects of the 8(a) justification. We also met with small business officials to discuss general trends and their contracting offices' use of the 8(a) program.

Two sole-source 8(a) contracts over $20 million that did not have follow-on contracts. We selected these contracts because they were awarded by offices that were top users of sole-source 8(a) contracts over $20 million from fiscal years 2006 through 2015. The offices confirmed that the two selected contracts, along with the other sole-source 8(a) contracts over $20 million awarded by these offices, were for construction services and would not have follow-on contracts. We interviewed contracting officials about their experiences with sole-source 8(a) contracts over $20 million and with the 8(a) justification requirement more generally.

Two sole-source 8(a) contracts awarded in fiscal year 2015.

One sole-source 8(a) contract that did not have a follow-on contract because the contracting officials were in the process of awarding the follow-on; we reviewed the solicitation to confirm their acquisition strategy.
We conducted this performance audit from August 2015 to June 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Analysis of Trends in Tribal 8(a) Awards over $20 Million Since 2011, When the 8(a) Justification Went into Effect

[Table: paired counts and percentages by fiscal year — 33 (24%) / 104 (76%); 33 (34%) / 63 (66%); 36 (31%) / 82 (69%); 48 (32%) / 101 (68%); column headings not recoverable from the extracted text.]

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact
Michele Mackin, (202) 512-4841 or MackinM@gao.gov.

Staff Acknowledgments
In addition to the contact named above, Tatiana Winger (Assistant Director), Peter Anderson, Miranda Berry, Kurt Gurka, Julia Kennon, Roxanna Sun, and Jocelyn Yin made key contributions to this report.
The Small Business Administration's 8(a) program is one of the federal government's primary vehicles for developing small businesses. Tribal 8(a) firms, such as firms owned by Alaska Native Corporations, can win sole-source contracts for any dollar amount under the 8(a) program, while other 8(a) firms generally must compete for contracts valued above certain dollar thresholds. In March 2011, the Federal Acquisition Regulation was amended to include a new requirement for a written justification for sole-source 8(a) awards over $20 million, where previously no justification was required. GAO has previously reported on tribal 8(a) contracting and recommended improved oversight.

The Appropriations Act of 2015 contained a provision for GAO to assess the impact of the 8(a) justification at DOD. This report addresses (1) trends among DOD sole-source and competitive 8(a) awards from fiscal years 2006 through 2015 and (2) the factors to which DOD officials attribute these trends. GAO analyzed data from the Federal Procurement Data System-Next Generation; reviewed 14 sole-source 8(a) contracts over $20 million, 9 of which had been followed by additional contracts for the same requirement; and spoke with contracting officials. GAO judgmentally selected most of the 14 contracts from offices that had awarded numerous sole-source 8(a) contracts in the past.

GAO is not making any recommendations in this report. The Department of Defense had no comments on a draft of this report.

The number of sole-source contracts over $20 million that the Department of Defense (DOD) awards to small businesses under the 8(a) program has been steadily declining since 2011, when the new requirement for a written justification for these contracts went into effect. In contrast, the number of competitive 8(a) contracts over $20 million has increased in recent years (see figure).
Between GAO's last report on this topic in September 2014 and the end of fiscal year 2015, DOD awarded two sole-source 8(a) contracts over $20 million—one for vehicle maintenance and repair and one for engineering services. The contracting officer for the vehicle repair contract told GAO that the service will not be needed in the future, while the contracting officer for the engineering services contract stated that he intends to award the next contract for these services competitively. Tribal 8(a) firms, which are eligible for sole-source contracts over $20 million, have won a growing number of competed 8(a) contracts since the justification went into effect in 2011.

DOD contracting officials GAO spoke with overwhelmingly cited an agency-wide emphasis on using competition to obtain benefits, such as better pricing, as a reason for the decline in the use of sole-source 8(a) contracts over $20 million. Further, officials from almost half of the offices also noted that a decline in their budgets or in the amount of goods and services needed rendered the 8(a) justification inapplicable because most of their contracts fall below the $20 million threshold. Of the nine sole-source 8(a) contracts GAO reviewed that had subsequently awarded contracts, more than half were ultimately competed. For example, the Army competitively awarded a 4-year, $140 million contract for base operations support, a requirement that had previously been met by a 10-year, $397 million sole-source 8(a) contract.
Background

USAID is the lead U.S. agency for administering humanitarian and economic assistance to about 160 countries. The USAID Administrator reports to the Secretary of State and receives overall foreign policy guidance from the Department of State. USAID operates its foreign assistance programs from its offices in Washington, D.C., and from missions and offices around the world.

In 1993, we reported that USAID had not adequately managed changes in its overseas workforce and recommended that USAID develop a comprehensive workforce planning and management system to better identify staffing needs and requirements. In the mid-1990s, USAID reorganized its activities around strategic objectives and began reporting in a results-oriented format but had made little progress in personnel reforms. In July 2002, we reported that USAID could not quickly relocate or hire the staff needed to implement a large-scale reconstruction and recovery program in Latin America, and we recommended actions to help improve USAID's staffing flexibility for future disaster recovery requirements. Appendix I summarizes several reports and studies prepared by GAO and others since 1989 that address USAID workforce planning and human capital management issues.

Studies by several organizations, including GAO, have shown that highly successful service organizations in both the public and private sectors use effective strategic management approaches to prepare their workforces to meet present and future mission requirements. We define strategic workforce planning as focusing on long-term strategies for acquiring, developing, and retaining an organization's workforce and aligning human capital approaches so that they are clearly linked to achieving programmatic goals. Based on work with the Office of Personnel Management, other U.S.
government agencies, the National Academy of Public Administration, and the International Personnel Management Association, we identified strategic workforce planning principles used by leading organizations. According to these principles, an organization's strategic workforce planning and management system should (1) involve senior management, employees, and stakeholders in developing, communicating, and implementing the workforce plan; (2) determine the agency's current critical skills and competencies and those needed to achieve program results; (3) develop strategies to address gaps in critical skills and competencies; and (4) monitor and evaluate progress and the contribution of strategic workforce planning efforts toward achieving program goals.

USAID's Changing Workforce Affects Ability to Deliver Foreign Assistance

USAID has changed from an agency of U.S. direct-hires that largely provided direct, hands-on implementation of development projects to one that manages and oversees the activities of contractors and grantees. During the past decade, this trend has affected USAID's ability to implement its foreign assistance program as the number of U.S. direct-hire foreign service officers declined and much of USAID's direct-hire workforce was replaced by foreign national personal services contractors. In addition, while program funding remained relatively stable from fiscal year 1990 through fiscal year 2000, it increased from $7.3 billion in fiscal year 2001 to $11.5 billion in fiscal year 2003, and USAID's fewer direct hires are now responsible for programs in more countries with little or no resident U.S. direct-hire presence. Moreover, USAID operates in a difficult and uncertain environment that presents unique challenges to its ability to plan and manage its overseas workforce. Because USAID did not have a strategic workforce planning system while these changes were underway, several human capital vulnerabilities have surfaced.
For example, an across-the-board reduction in force for both the foreign service and the civil service, followed by a 5-year decline in the number of U.S. direct-hires, has left the agency with critical shortages of experienced mid-level staff and gaps in the pipeline of junior staff. In addition, 37 positions remain vacant, and opportunities for training and mentoring staff are limited, sometimes forcing the placement of staff who may lack essential skills and experience. USAID also lacks a "surge" capacity to help it deal with emerging crises and changing strategic priorities. According to USAID documents and our discussions with agency officials, these vulnerabilities are making it increasingly difficult for the agency to adequately manage and oversee its foreign assistance activities.

USAID Staff Functions Have Evolved from Implementation to Management

USAID's U.S. direct-hire workforce decreased from about 8,600 in 1962 to about 3,162 in 1990. USAID could not continue its hands-on project approach as the number of U.S. direct hires, including foreign service staff, declined and responsibilities for planning, financing, and monitoring projects shifted to contractors, grantees, and host country governments. As figure 1 shows, this trend has continued as the number of U.S. direct-hire staff further decreased to 1,985 by December 2002. The number of foreign national employees—both direct-hires and personal services contractors—also decreased, from 5,211 in fiscal year 1995 to 4,725 in fiscal year 2002. Furthermore, while program funding levels remained relatively stable for most of this period, program funding increased 57 percent, from $7.3 billion in fiscal year 2001 to $11.5 billion in fiscal year 2003. As the number of U.S. direct-hire staff declined, mission directors began relying on other types of employees, primarily foreign national personal services contractors, to manage mission operations and oversee development activities implemented by third parties.
In December 2002, according to USAID's staffing report, the agency's workforce totaled 7,741, including 1,985 U.S. direct hires. Personal services contractors made up more than two-thirds of USAID's total workforce, including 4,653 foreign national contractors (see fig. 2). Of the 1,985 U.S. direct hires, 974 were foreign service officers, about 65 percent of whom were posted overseas.

For our analysis, we used the workforce definition developed by USAID's 1990 Workforce Planning Working Group. This group defined the agency's workforce as those who have a direct employer-employee relationship with USAID. This includes the following staff categories: U.S. citizen direct-hire civil service in Washington, D.C.; U.S. citizen direct-hire foreign service, most of whom serve at overseas missions; foreign national (non-U.S. citizen) direct hires, whom USAID can employ overseas for any foreign service-related mission, program, or activity; and personal services contractors, both U.S. and foreign national, who are individuals on contract with USAID for the specific services of that individual only. In addition, USAID includes in its monthly staffing report other types of nondirect-hire staff with an employer-employee relationship, such as staff detailed from a number of organizations and other U.S. government agencies and centrally contracted technical advisors.

Other individuals not directly employed by USAID also perform a wide range of services in support of the agency's programs. These individuals include employees of institutional or services contractors, private voluntary organizations, and grantees. Last year, we reported that USAID relies heavily on nongovernmental organizations to deliver foreign assistance. In fiscal year 2000, USAID directed about $4 billion of its $7.2 billion assistance funding to nongovernmental organizations, including at least $1 billion to private voluntary organizations (charities) working overseas.
We further noted that, although USAID generally chooses funding mechanisms that delegate a large amount of program control to implementing organizations, it has not compiled data on its use of specific types of funding or evaluated their effectiveness. In addition to hiring third parties to implement its programs, USAID also contracts with outside organizations to provide contract management and oversight of large programs. As we reported in July 2002, the agency hired several firms to manage and oversee some of the contractors and grantees conducting USAID hurricane reconstruction activities in Latin America. At present, USAID is also planning to hire outside parties to oversee the large-scale contracts recently awarded for reconstruction activities in Iraq. Despite the reliance on personal services and institutional contractors, USAID officials maintain that the direct-hire foreign service officer is still the core of mission staffing. He or she works toward achieving U.S. foreign policy goals, gives direction to the country program, brings corporate knowledge and a better understanding of agency guidance to the mission, and provides the authority needed to work effectively with host country counterparts and other U.S. government agencies. The quality and deployment of foreign national contractors can vary among missions and regions. U.S. personal services contractors are an important means for filling mission positions when U.S. direct hires are not available. According to USAID regulations, the terms of their contracts essentially allow personal services contractors to perform almost the same duties as U.S. government employees. About two-thirds work in technical positions, but many serve as program and project development officers, controllers, executive officers, and, occasionally, temporary mission directors— positions that USAID considers inherently governmental. 
According to USAID officials, as a matter of policy, the agency rarely delegates inherently governmental functions. According to mission officials, U.S. and foreign national contractors are an integral part of the mission workforce, but they cannot replace the agency commitment and experience that U.S. direct-hire foreign service officers bring to the mission. In addition to filling in for U.S. direct-hire staff, contractors, particularly foreign nationals, typically make a career at USAID and provide needed continuity and corporate knowledge of the country programs. However, officials noted that, compared to direct-hire staff, personal services contractors generally do not have the same level of agency commitment; do not fully understand how the agency works and the political pressures that it faces in Washington, D.C.; are not subject to the same degree of accountability; and have limited administrative and decisional authorities. Furthermore, contractors cannot supervise U.S. direct-hire staff, even if the contractor is very experienced and the direct-hire is new to USAID. This further limits the training and mentoring opportunities for new staff.

In addition to having reduced the number of U.S. direct hires, USAID now manages programs in more countries with no U.S. direct-hire presence, and its overseas structure has become more regional. Table 1 illustrates the changes in USAID's U.S. direct-hire overseas presence between fiscal years 1992 and 2002. In fiscal year 1992, USAID managed activities in 88 countries with no U.S. direct-hire presence. According to USAID, in some cases, activities in these countries are very small and require little management by USAID staff. However, in 45 of these countries USAID manages programs of $1 million or more, representing a more significant management burden on the agency.
USAID also increasingly provides administrative and program support to countries from regional service platforms, which increased in number from 2 to 26 between fiscal years 1992 and 2002. Appendix II contains a complete list of the countries in which USAID operates.

USAID's Environment Affects Workforce Planning Capabilities

Our data collection efforts in the field and at headquarters revealed the unique environment in which USAID missions operate and its effect on workforce planning and management efforts. With the exception of Egypt, the missions we visited did not prepare formal, separate workforce plans. USAID missions tend to be relatively small—mission directors and office heads have almost daily contact with the staff and are familiar with their skills and capabilities. Missions conduct their workforce planning and staffing projections in conjunction with their long-term—normally 5-year—country development strategies. Missions provide information on resource needs in their annual reports and budget submissions to their respective regional bureaus. USAID's Bureau for Policy and Program Coordination allots staff years and funding to the regional bureaus, which then apportion these resources among their headquarters offices and overseas missions. According to the bureau, the average mission has six U.S. direct-hire staff.

Officials noted the difficulties in adhering to a formal workforce plan linked to country strategies in an uncertain foreign policy environment. For example, following the events of September 11, 2001, the Middle East and sub-Saharan African missions we visited—Egypt, Mali, and Senegal—received additional work not anticipated when they developed their country development strategies.
Mali was seeking two additional personal services contractors at the time of our visit, including one to manage a new hunger initiative for Africa, and Egypt was in the process of determining the staff needed to implement the Middle East Partnership Initiative. In addition, the mission in Ecuador had been scheduled to close in fiscal year 2003. However, this decision was reversed due to political and economic events in Ecuador, including a coup in 2000, the collapse of the financial system, and rampant inflation. Program funding for Ecuador tripled from fiscal year 1999 to fiscal year 2000, while staffing was reduced from 110 to 30 personnel and the budget for the mission's operating expenses was reduced from $2.7 million to $1.37 million.

Other factors unique to USAID's overseas work environment can affect its ability to conduct workforce planning and attract and retain top staff. These factors vary from country to country and among regions. For example:

USAID officials in Mali told us that hardship missions find it much more difficult to attract U.S. staff. Foreign service staff in Mali receive a hardship pay differential rate of 25 percent and an additional 15 percent incentive pay. According to mission officials, these pay incentives are essential for attracting high-quality staff to Mali, and many of the staff with whom we met acknowledged that the extra pay was a factor when they decided to bid on the Mali positions.

At several missions we visited—Egypt, Mali, and Peru—USAID officials told us that the salaries set for foreign national employees by the respective embassies make it difficult for missions to recruit and retain the country's top professional talent.
This was particularly true in the poorest countries with limited human resource capacities, such as Mali, where the mission director stated that it is becoming increasingly difficult to compete with international financial institutions and other donor organizations for the country's most highly qualified professionals.

Officials at all the missions we visited said that lengthy clearance processes make it difficult to obtain staff in a timely manner and manage their workforces. U.S. direct-hire staff and personal services contractors must obtain both a security clearance and a medical clearance before they report to work. As we reported in July 2002, in many cases it took 6 months to a year to hire personal services contractors for the emergency hurricane reconstruction program in Latin America, and much of that time was spent waiting for clearances.

USAID officials in several countries—particularly Ecuador, Mali, and Senegal—also cited the agency's separately appropriated operating expense budget as a factor in their ability to support and train U.S. direct-hire staff. USAID missions are supposed to pay all U.S. direct-hire local expenses—such as housing, dependents' education, travel, and training—from their operating expense budgets and not from program funds. According to mission officials, operating expense funds have not kept pace with rising fixed costs, such as rent, facilities management, and foreign national salaries and benefits. As a result, missions often opt for contractor staff, who can be paid from program funds. In addition, tight operating expense funds and the limited number of U.S. direct-hire staff on board have led some missions to restrict training opportunities for U.S. direct-hire staff. Officials at the Mali and Senegal missions cited the availability of training for direct-hire staff as one of their major workforce challenges.
Inadequate Attention to Workforce Planning Has Affected USAID's Ability to Deliver Foreign Assistance

Because USAID has not responded to its changing workforce requirements with a strategic workforce planning approach, its ability to carry out its mission has been weakened. In response to a combination of poor technology investments and other budgetary pressures in the mid-1990s, USAID implemented a reduction in force and froze hiring. However, the downsizing was conducted across the board and was not linked to a strategic vision or skills analysis. The agency lost a cadre of experienced foreign service officers, and 5 years elapsed before new staff were hired to replace them. USAID officials noted that the downsizings of the last decade have resulted in an insufficient pipeline of junior and mid-level staff with the experience to take on senior positions. As a result, several human capital vulnerabilities have surfaced but have not been systematically addressed. For example:

Increased attrition of U.S. direct hires since the reduction in force in the mid-1990s led to the loss of the most experienced foreign service officers, while the hiring freeze stopped the pipeline of new hires at the junior level. The shortage of junior and mid-level officers to staff frontline jobs and a number of unfilled positions have created difficulties at some overseas missions.

Having fewer U.S. direct hires managing more programs in more countries has resulted in a workforce that is overstretched, raising concerns about USAID's ability to provide effective accountability for program results and financial management. As of December 31, 2002, USAID reported it had 631 U.S. direct-hire staff overseas, compared to 1,082 at the end of fiscal year 1992. However, USAID has not conducted a comprehensive workload analysis to determine the extent to which staff may be overburdened or unable to perform all required tasks.
USAID does not have a "surge capacity" to respond to emergencies; post-conflict situations, such as Afghanistan and Iraq; or new strategic priorities, such as Pakistan and the Middle East.

USAID has generally recruited staff for their technical and development expertise, but staff spend a significant portion of their time managing contracts and grants, a responsibility for which some have limited skills or training.

Funding limitations and the shortage of U.S. direct hires at USAID missions have curtailed opportunities for on-the-job and formal training and mentoring, both for new staff and for those taking on the most senior mission positions. Those who have the knowledge and experience have little time for training and mentoring, and the missions do not have enough staff to cover the tasks of those in training. This has forced USAID to assign increasing numbers of less experienced staff overseas who may not have the essential skills.

According to senior USAID officials, the reductions in direct-hire foreign service staff have limited the agency's ability to plan for emerging development issues because staff must spend most of their time preparing paperwork and monitoring activities. For example, USAID did not have adequate staff with the knowledge, skills, and abilities to quickly deal with such emerging issues as famine and human immunodeficiency virus/acquired immune deficiency syndrome. Although the agency eventually caught up with these issues, its ability to anticipate development trends and demands has decreased.

USAID's Progress in Implementing Strategic Workforce Planning Principles Is Limited

USAID does not have a systematic method for determining its workforce needs and for implementing strategies that will enable its staff to meet the agency's numerous challenges and accomplish its strategic mission.
USAID is making limited progress in addressing the four principles for effective strategic workforce planning that we identified as key practices (see fig. 3).

Involvement of Agency Leaders, Employees, and Stakeholders in Strategic Workforce Planning Is Limited

USAID's senior management is developing a human capital strategy in response to the President's Management Agenda, but it has not identified how it will significantly involve employees and other stakeholders in developing and communicating the workforce strategies that result from its efforts. We found that strategic workforce planning is most likely to succeed if an agency's leadership sets the overall direction and goals and involves employees and stakeholders in developing, communicating, and implementing workforce and human capital strategies. During the 1990s, USAID's downsizing efforts and budgetary constraints took precedence over strategic workforce planning. Its human resource office was understaffed and lacked experience in strategic workforce planning, focusing mostly on collecting workforce data and hiring to replace staff lost through attrition.

USAID's leadership has attempted to reform its management systems. It established the Business Transformation Executive Committee in February 2002 to comply with the President's Management Agenda initiatives, and it appointed a Chief Human Capital Officer in May 2003, as required by the Chief Human Capital Officers Act of 2002. In December 2002, the business transformation committee formed a human capital subcommittee, consisting of senior program and human resource officials, to develop USAID's human capital strategy. USAID's human capital strategy has not been finalized or approved by the Office of Management and Budget and the Office of Personnel Management. In addition, we were unable to determine whether the draft human capital strategy is linked to the agency's overall strategic plan, as we recommended in 1993.
USAID’s strategic plan for fiscal years 2004 through 2009—a joint plan with the State Department—is also in draft and not planned for issuance until the end of fiscal year 2003. In addition to its human capital strategy development, as part of the effort to comply with the President’s Management Agenda, the USAID Administrator established a group in January 2003 to develop criteria for overseas staffing and to rationalize the deployment of foreign service officers overseas. The group subsequently developed—and the Administrator approved—a template for staffing overseas missions that gives most weight to the dollar size of the country program but also considers the relative performance of the host governments and provides some flexibility to the regional bureaus as necessary. Involving employees and stakeholders in the strategic workforce planning process is also important to encourage support and understanding for its outcomes. According to a USAID senior official, the human capital subcommittee has involved some internal groups in the planning process through the use of working groups that include senior and mid-level employees. In addition, according to USAID officials, the Administrator has discussed his human capital initiatives and answered questions at “town hall” meetings with USAID employees. However, because the strategy is still in draft, we cannot comment on the extent to which agency staff will be involved in implementing the strategy. Historically, USAID has established many internal committees to analyze workforce planning problems but has not always followed through in implementing their recommendations. In addition, according to senior USAID officials, the agency has not included nongovernmental organizations and other implementing partners in the development of its human capital strategy. 
USAID’s Efforts to Identify Critical Skills and Competencies Are Limited by Inadequate Personnel Information
USAID has begun to identify the core competencies needed by its workforce, and it recently established a working group to conduct workforce analysis and planning related to core USAID competencies. However, it has not documented the critical skills and competencies of its current workforce, and its personnel information system does not always provide reliable and timely data. USAID must determine the critical skills and competencies required to meet its current and anticipated strategic program goals. This is especially important as changes in national security, technology, and other factors alter the environment within which foreign policy agencies operate. In addition, as at many other federal agencies, USAID’s workforce is increasingly eligible for retirement, creating an opportunity to refocus the workforce on the critical skills and competencies it will need in the future. To meet these challenges effectively, USAID needs to know its present workforce skills and competencies and identify those that are critical to achieving its strategic goals. Effective workforce planning and management require that human capital staff and other managers base their workforce analyses and decisions on complete, accurate, and timely personnel data. However, USAID’s personnel information system is not entirely accurate and does not contain all of the information USAID needs for sound workforce decision making. For example, a recent audit of USAID’s human capital data by its Inspector General found that the data collected through this automated process were not current, consistent, totally accurate, or complete. USAID is attempting to address its personnel information system problems through a recently implemented Web-based application that will allow for customized, centralized, and real-time reporting.
As of mid-June 2003, about 85 percent of the missions had submitted data through the new system. In responding to a draft of this report, officials at the human resource office said they expect this new system to be fully operational in time to generate the September 30, 2003, worldwide staffing report. Nevertheless, USAID has no systematic or agencywide method to determine the skills and competencies of its current staff. Although the new personnel database will provide better information on the locations and position categories of its staff, it is not designed to identify current critical skills and competencies. According to officials from the human resource office, the new system will provide the position occupational code for all employees, but this will not include information on current skills and abilities. However, as part of its draft human capital strategy, in June 2003 USAID established a team to carry out a comprehensive workforce analysis and planning effort. The team will first develop a pilot workforce plan, including an analysis of current skills and future needs, in three headquarters offices—human resources, procurement, and global health. One of the working group’s tasks is to identify an appropriate tool to collect, store, and manage data on competencies, training, and career development. USAID’s overseas assessment team also reported findings and made a number of recommendations regarding USAID’s model for delivering assistance and the types of skills that the agency will need to meet future program needs. For example, according to the team’s study, USAID’s foreign service recruitment should focus more on basic agency operational skills such as program and project development, financial management, procurement, and legal expertise. The study notes that these abilities are essential for missions in developing programs, policies, and strategies; ensuring accountability; and representing U.S.
government interests with host government officials and other stakeholders. Furthermore, according to the study, due to the shortage of U.S. direct-hire foreign service staff, about 160 personal services contractors currently serve in positions that USAID considers inherently governmental and normally fills with U.S. direct-hire staff.
USAID’s Strategies to Address Critical Skill Gaps Are Not Comprehensive
Although USAID has implemented some recruitment strategies to address attrition concerns and staffing gaps, the strategies are limited to certain segments of the workforce. Furthermore, USAID cannot be certain that these measures will be effective, because the recruitment plans have not been based on analyses that match current skills with those needed to meet future strategic goals. Our strategic human capital model stresses the importance of developing human capital strategies—the programs, policies, and processes that agencies use to build and manage their workforces—that are tailored to agencies’ unique needs. Applying this principle to strategic workforce planning means that agencies consider how hiring, training, staff development, performance management, and other human capital strategies can be used to eliminate gaps and gain the critical skills and competencies needed in the future. USAID has implemented specific workforce strategies for some segments of its workforce to address shortages in critical skills and competencies, but these efforts are not comprehensive. Since fiscal year 1999, USAID has hired more than 200 mid-level foreign service officers through its New Entry Professionals program and 47 civil service employees through the Presidential Management Intern program. The agency recently reinstituted its International Development Intern program for junior foreign service officers and plans to make 15 offers in March 2004. According to USAID officials, the agency is hiring staff with updated technical and management skills.
These measures are important efforts to bring in experienced mid-level staff and junior staff with new skill sets that can help shape the agency’s future as the current workforce becomes eligible for retirement. However, USAID has not developed a workforce plan for its civil service staff—a factor noted by OMB in its President’s Management Agenda “scorecard” of USAID’s human capital management efforts. In responding to our draft report, USAID stated that it will not refine its civil service recruitment plan until its workforce analysis is complete. In the meantime, the current civil service plan calls for hiring about 15 Presidential Management Interns a year, allowing offices to replace civil service staff in accordance with approved reorganization plans, and hiring above established ceilings for critical staff needs, such as contracts officers. USAID also has not yet developed workforce plans for its personal services contractors, who make up the majority of its workforce, although USAID plans to do so in response to a recommendation by its Inspector General. In addition to its recruitment efforts, USAID has revived certain training programs that were halted during the 1990s, such as executive leadership training and management programs. However, according to a USAID survey of civil service employees, these programs target mostly senior management. In responding to our draft, USAID noted that its leadership training is conducted at three levels—emerging, senior, and executive—and that its challenge is to broaden such training and make it more available. USAID also noted that it needs to offer entry-level training to all staff, not just foreign service officers. In addition, the agency is revising its training curriculum to provide more online training opportunities for all staff.
USAID’s personnel information system has not always provided accurate data, and the agency has not undertaken a comprehensive analysis of the skills and competencies of its current staff or matched those data to future requirements. As a result, USAID cannot ensure that its recruitment plans accurately reflect its hiring needs. Since 1999, missions have been required to submit staffing projections as part of USAID’s annual report and budget justification process. The human resource office uses this information to develop its annual foreign service recruitment and training plans. According to a human resource official, the recent overseas staffing assessment will result in better guidance to the field and to headquarters offices on reporting their staffing needs.
USAID Has Not Created a System to Monitor and Evaluate Its Progress
Because USAID’s human capital strategy is still in draft and not yet approved, we cannot comment on whether its action plan will have specific timetables and indicators to evaluate its progress in meeting its human capital goals and to help ensure that these efforts continue under the leadership of successive administrators. Strategic workforce planning entails the development and use of indicators to measure both the progress in achieving human capital goals and how the outcomes of these strategies can help an organization accomplish its mission and programmatic goals. USAID has had difficulties in defining practical, meaningful measures that assess the impact of human capital strategies on program results. For example, USAID’s fiscal year 2002 performance plan continues to emphasize the agency’s efforts to achieve activity-oriented goals, such as the number of employees hired or trained, but these measures do not help gauge how well USAID’s human capital efforts helped the agency achieve its programmatic goals. As a result, the link between specific human capital strategies and strategic program outcomes is not clear.
Conclusions
USAID has evolved from an agency consisting primarily of U.S. direct-hire foreign service officers who directly implemented development projects to one in which foreign service officers manage and oversee development programs and projects carried out by institutional contractors and grantees. Since 1992, the number of U.S. direct-hire staff has decreased by 37 percent, but the number of countries with USAID programs has almost doubled. In addition, USAID program funding increased 57 percent in fiscal years 2002 and 2003. As a result, USAID has increasingly relied on contractor staff—primarily personal services contractors—to manage its day-to-day activities overseas. In addition to having fewer U.S. direct-hire foreign service officers to provide direction and accountability for its foreign assistance programs, USAID operates in overseas environments that present unique challenges to its ability to manage a quality workforce. With fewer and less experienced U.S. direct-hire staff managing increasing levels of foreign assistance in more countries, along with expected increases in program funds for Afghanistan and Iraq, significant funding increases for the global initiative to fight human immunodeficiency virus/acquired immune deficiency syndrome (HIV/AIDS), and potential USAID involvement in the Millennium Challenge Account, it is becoming increasingly difficult for USAID to oversee its foreign assistance activities and pursue U.S. foreign policy objectives. Because USAID has not adopted a strategic approach to workforce planning and management, it cannot ensure that it has addressed these challenges appropriately and identified the right skill mix and competencies needed to carry out its development assistance programs.
Recommendations for Executive Action
To help ensure that USAID can identify its future workforce needs and pursue strategies that will help its workforce achieve the agency’s goals, we recommend that the USAID Administrator develop and institutionalize a strategic workforce planning and management system that reflects current workforce planning principles. This effort should include the implementation of a reliable personnel information system, an agencywide assessment of staff’s skills and abilities, workforce strategies that address identified staffing gaps in the foreign and civil services, and a periodic evaluation of how these efforts contribute toward the achievement of the agency’s program goals.
Scope and Methodology
To determine how workforce changes have affected USAID’s ability to carry out its mission, we reviewed the agency’s workforce planning documents and a number of internal and external reports on USAID’s human capital and workforce planning issues. We also interviewed knowledgeable USAID officials representing the agency’s regional, technical, and management bureaus in Washington, D.C., and conducted fieldwork at seven overseas missions—the Dominican Republic, Ecuador, Egypt, Mali, Peru, Senegal, and the West Africa Regional Program in Mali. To ensure consistency in our data collection efforts, we used the same data collection instrument at each location. We also administered a separate data collection instrument at USAID’s human resource office in Washington, D.C. In examining the changes in USAID’s workforce since 1990, we analyzed personnel data provided by USAID and internal and external reports on the changing roles of USAID’s workforce. We did not formally verify the accuracy of USAID’s data; however, we noted in our findings that USAID’s personnel data were not entirely accurate, complete, or up to date.
To examine USAID’s progress in developing and implementing a strategic workforce planning system, we evaluated the agency’s efforts in terms of principles used by leading organizations that we identified through our work with the Office of Personnel Management, other U.S. government agencies, the National Academy for Public Administration, and the International Personnel Management Association. We analyzed USAID’s workforce planning documents, reviewed internal and external reports on its human capital and workforce planning efforts, and interviewed cognizant USAID officials at its Bureau for Management and its Bureau for Policy and Program Coordination. We conducted our work between July 2002 and June 2003 in accordance with generally accepted government auditing standards.
Agency Comments and Our Evaluation
USAID provided written comments on a draft of this report (see app. III). It concurred with our major findings and recommendations and noted that our work grasped the agency’s complex human capital situation. USAID also reiterated that it recently established a working group to carry out an integrated workforce analysis and planning effort. According to USAID, this effort will assess the critical skills and competencies of its workforce, identify the gaps between what the agency currently has and what it will need in the future, and design workforce strategies to fill those gaps. USAID also provided separate technical comments on our draft, which we have incorporated as appropriate. As arranged with your office, we plan no further distribution of this report for 30 days from the date of the report unless you publicly announce its contents earlier. At that time, we will send copies to interested congressional committees and to the Administrator, USAID; the Secretary of State; and the Director, Office of Management and Budget. We will also make copies available to others upon request.
In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128 or at FordJ@gao.gov. Other contacts and staff acknowledgments are listed in appendix IV.
Appendix I: Selected Reports Related to USAID’s Workforce Planning
Findings: USAID’s human resources department is not effective, and the workforce data it collected and maintained was not up to date, consistent, totally accurate, or complete. Recommendations: Strengthen workforce planning and link it to the agency’s mission through better planning, greater focus on recruitment, more training opportunities, and increased retention. Develop definitions and requirements so reported data is on time, consistent, accurate, and complete. Provide training for staff members who enter and correct personnel data. Develop procedures for missions to attest to the accuracy of their workforce data. Institute a process to collect data on the reasons for employee attrition. Develop workforce plans for USAID’s civil service and nondirect-hire workforce.
USAID Office of the Inspector General, Audit of USAID’s Workforce Planning for Procurement Officers, Audit Report Number 9-000-03-001-P (Washington, D.C.: Nov. 13, 2002). Finding: USAID has not developed a comprehensive workforce plan that covers its entire procurement workforce. Recommendation: Develop a comprehensive workforce plan for the USAID procurement workforce.
U.S. General Accounting Office, Foreign Assistance: Disaster Recovery Program Addressed Intended Purposes but USAID Needs Greater Flexibility to Improve Its Response Capability, GAO-02-787 (Washington, D.C.: July 24, 2002). Findings: USAID lacked a “surge capacity” to quickly design and implement a large-scale program with relatively short time frames, which affected the initial pace of program implementation. Recommendations: Implement procedures to (1) allow USAID to quickly reassign key personnel, (2) allow missions to expedite the hiring of contractor staff, and (3) facilitate coordination with other U.S. government agencies involved in reconstruction.
Findings: Downsizing took precedence over long-term workforce planning needs; foreign and civil service new hires must possess managerial and analytical skills; training was lacking; hiring of entry-level staff was limited; and the agency’s program budget increased while its workforce decreased for 10 years, leaving USAID with significant barriers to workforce planning. Recommendations: Establish a workforce planning committee to identify program needs on a continuing basis. Reduce the number of personnel backstops and develop qualifications in multiple skills categories. Create an inventory of staff work history, experience, skills, and abilities. Adopt a single unified personnel system.
U.S. General Accounting Office, Foreign Assistance: AID Strategic Direction and Continued Management Improvements Needed, GAO/NSIAD-93-106 (Washington, D.C.: June 11, 1993). Findings: USAID had not adequately planned for workforce needs in a changing environment; the workforce lacked needed skills; and ineffective placement, training, and recruitment constrained workforce management. Recommendations: Develop and implement a comprehensive workforce planning and management capability as a systematic, agencywide effort, and institutionalize this capability to ensure its continuation by successive administrations. Determine the desired general composition of the direct-hire workforce and develop a plan for reshaping the workforce along those lines. Conduct an individual skills profile of the existing workforce and analyze it in the context of the desired general composition of the direct-hire workforce. Simplify and reduce personnel categories to broaden them.
Appendix II: USAID Worldwide Foreign Assistance Programs
Locations: Africa Region; Eastern Caribbean and Windward Islands (Grenada, Montserrat, St. Kitts and Nevis, St. Lucia, St. Vincent and the Grenadines); Suriname; Trinidad and Tobago; Turks and Caicos Islands; Uruguay; Venezuela.
USAID Regional Service Platforms. Services include legal, executive office, financial/controller, procurement, and program and project development support services. Services vary among the 26 platforms; for example, the regional office in Kenya provides all services to up to 14 countries, while the Honduras mission simply shares a contracts officer with Nicaragua. Platform locations include Jordan, Pakistan, Philippines, Thailand (Regional Development Office, planned), Bolivia (La Paz), Dominican Republic (Santo Domingo), El Salvador (San Salvador), Guatemala (Guatemala City), Honduras (Tegucigalpa), and Peru (Lima).
Appendix III: Comments from the U.S. Agency for International Development
Appendix IV: GAO Contacts and Staff Acknowledgments
GAO Contacts
Acknowledgments
In addition to the above named individuals, Kimberley Ebner, Jeanette Espinola, and Rhonda Horried made key contributions to this report. Martin de Alteriis, Mark Dowling, Reid Lowe, and José M. Peña, III, provided technical assistance.
The U.S. Agency for International Development (USAID) oversees humanitarian and economic assistance--an integral part of the U.S. global security strategy--to more than 160 countries. GAO recommended in 1993 that USAID develop a comprehensive workforce plan; however, human capital management continues to be a high-risk area for the agency. GAO was asked to examine how changes in USAID's workforce over the past 10 years have affected the agency's ability to deliver foreign aid and to assess its progress in implementing a strategic workforce planning system. USAID has evolved from an agency in which U.S. direct-hire staff directly implemented development projects to one in which U.S. direct-hire staff oversee the activities of contractors and grantees. Since 1992, the number of USAID U.S. direct-hire staff declined by 37 percent, but the number of countries with USAID programs almost doubled and, over the last 2 years, program funding increased more than 50 percent. As a result of these and other changes in its workforce and its mostly ad-hoc approach to workforce planning, USAID faces several human capital vulnerabilities. For example, attrition of experienced foreign service officers and inadequate training and mentoring have sometimes led to the deployment of staff who lack essential skills and experience. The agency also lacks a "surge capacity" to respond to evolving foreign policy priorities and emerging crises. With fewer and less experienced staff managing more programs in more countries, USAID's ability to oversee the delivery of foreign assistance is becoming increasingly difficult. USAID has taken steps toward developing a workforce planning and human capital management system that should enable the agency to meet its challenges and achieve its mission in response to the President's Management Agenda, but it needs to do more. 
For example, USAID has begun its workforce analysis but it has not yet conducted a comprehensive assessment of the skills and competencies of its current workforce and has not yet included its civil service and contracted employees in its workforce planning efforts. Because USAID has not adopted a strategic approach to workforce planning, it cannot ensure that it has addressed its workforce challenges appropriately and identified the right skill mix to carry out its assistance programs.
Background
DOD has undergone four BRAC rounds since 1988 and is currently implementing its fifth round. For the most recent BRAC round—referred to in this report as the BRAC 2005 round—DOD applied legally mandated selection criteria that included four criteria related to military value as well as other criteria regarding costs and savings, economic impact to local communities, community support infrastructure, and environmental impact, as shown in figure 1. In applying these BRAC 2005 selection criteria, priority consideration was given to military value. In fact, as required by BRAC legislation, military value was the primary consideration for making BRAC recommendations, as reported by both DOD and the BRAC Commission. DOD also incorporated into its analytical process several key considerations required by BRAC legislation, including the use of certified data and basing its analysis on its 20-year force structure plan. In commenting on DOD’s BRAC process in July 2005, we reported that DOD established and generally followed a logical and reasoned process for formulating its list of BRAC recommendations. Using this analytical process, the Office of the Secretary of Defense (OSD) provided over 200 BRAC recommendations to the BRAC Commission for an independent assessment in May 2005. The BRAC Commission had the authority to change the Secretary’s recommendations if it determined that the Secretary deviated substantially from the legally mandated selection criteria and DOD’s force structure plan. After assessing OSD’s recommendations, the BRAC Commission stated that it rejected 13 recommendations in their entirety and significantly modified another 13. Ultimately, the BRAC Commission forwarded a list of 182 recommendations for base closure or realignment to the President for approval. The BRAC Commission’s recommendations were accepted in their entirety by the President and Congress and became effective November 9, 2005.
The BRAC legislation requires DOD to complete recommendations for closing or realigning bases made in the BRAC 2005 round within a 6-year time frame ending September 15, 2011, 6 years from the date the President submitted to Congress his approval of the recommendations. To provide a framework for promoting consistency in estimating the costs and savings associated with various proposed BRAC recommendations, DOD used an estimation model, known as the Cost of Base Realignment Actions (COBRA). The COBRA model has been used in the base closure process since 1988. It provided important financial information to the selection process as decision makers weighed the financial implications for various BRAC actions along with military value and other selection criteria when arriving at final decisions regarding the suitability of BRAC recommendations. In addition, the department designed the model to calculate estimated costs and savings associated with actions that are necessary to implement BRAC recommendations over the 6-year implementation period and to calculate recurring costs or savings thereafter. As such, the BRAC Commission continued to use DOD’s COBRA model for making its cost and savings estimates. The COBRA model relies to a large extent on standard factors and averages but is not intended to—and consequently does not—represent budget-quality estimates. As a result, neither DOD’s nor the BRAC Commission’s COBRA-generated estimates can be assumed to represent the actual completion costs that Congress will need to fund through appropriations or fully reflect the savings to be achieved after implementation. We have examined COBRA in the past and have found it to be a generally reasonable estimator for comparing potential costs and savings among candidate alternatives but have not considered it a tool for use in budgeting. In the intervening years, COBRA has been revised to address certain problems we and others have identified after each round.
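The kind of comparison COBRA supports, one-time implementation costs weighed against recurring annual savings, can be illustrated with a simple payback calculation. The figures and the payback_year helper below are illustrative assumptions only, not COBRA inputs or outputs; in practice, BRAC savings also begin accruing during the implementation period, which this sketch ignores:

```python
def payback_year(completion_year, one_time_cost, annual_savings):
    """Return the first year in which cumulative recurring savings
    offset the one-time implementation cost (simplified model:
    savings begin only after implementation completes)."""
    cumulative = 0.0
    year = completion_year
    while cumulative < one_time_cost:
        year += 1
        cumulative += annual_savings
    return year

# Hypothetical figures, in billions of dollars: a $10 billion
# one-time cost recouped through $2 billion in annual savings
# after a 2011 completion date.
print(payback_year(2011, 10.0, 2.0))  # -> 2016
```

Because the model relies on standard factors and averages, such a calculation is useful for comparing candidate alternatives, not for budgeting.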
As with any model, the quality of the output is dependent on the quality of the input. For example, a DOD analyst could assume a building could be renovated to accommodate receiving personnel; however, when BRAC implementation began, site surveys showed that the building could not be renovated, thus requiring new construction that increased estimated costs. The model provides a standard quantitative approach to comparing estimated costs and savings across various proposed recommendations. In this and previous BRAC rounds, DOD subsequently developed budget-quality estimates once BRAC recommendations became effective. Thus, the BRAC Commission’s estimated implementation costs and savings were useful for comparing candidate recommendations, and DOD has subsequently refined these estimates based on better information after conducting site surveys. BRAC legislation requires DOD to submit an annual schedule containing revised BRAC cost and savings estimates for each closure and realignment recommendation to Congress. To meet this legislative requirement, DOD presents its schedule in its annual BRAC budget submission to Congress. For BRAC 2005 recommendations, DOD’s first presentation of its cost and savings schedule was in its fiscal year 2007 budget submission to Congress in March 2006. However, the department stated in its submission that it did not have enough time to formulate a reasonable BRAC budget and that the fiscal year 2007 BRAC budget submission contained significant funding shortfalls. DOD’s second presentation of its cost and savings schedule was its fiscal year 2008 BRAC budget submission to Congress in February 2007.
For the BRAC 2005 round, the OSD BRAC Office—under the oversight of the Under Secretary of Defense for Acquisition, Technology and Logistics—has monitored the services’ and defense agencies’ implementation progress, analyzed budget justifications for significant differences in cost and savings estimates, and facilitated the resolution of any challenges that may impair the successful implementation of the recommendations within the 6-year completion period. To facilitate its oversight role, OSD required the military departments and certain defense agencies to submit a detailed business plan for each of their recommendations. These business plans include information such as a listing of all actions needed to implement each recommendation, schedules for personnel movements between installations, updated cost and savings estimates based on better and updated information, and implementation completion time frames. OSD’s general process for reviewing business plans is shown in figure 2. OSD BRAC officials consider their business plans to be living documents that will evolve over the course of the 6-year implementation period. Additionally, OSD’s General Counsel assesses whether the business plans meet the intent of the BRAC Commission’s recommendations.
DOD Plans to Spend More and Save Less Than Originally Estimated and Will Take Longer Than Expected to Recoup Up-Front Costs
DOD plans to spend more and save less to implement BRAC recommendations than the BRAC Commission originally estimated, and it will take longer than expected for DOD to recoup its up-front costs. Since the BRAC Commission issued its cost and savings estimates in 2005, DOD’s reported estimates of the costs to implement about 180 BRAC recommendations have increased by $10 billion, to about $31.2 billion, while annual savings estimates have decreased by about $200 million, from $4.2 billion to $4 billion.
Moreover, our analysis of DOD’s current estimates shows that it will take until 2017 for the department to recoup its up-front costs to implement BRAC recommendations—4 years longer than the BRAC Commission’s estimates indicated. Similarly, whereas the BRAC Commission estimated that the implementation of the BRAC 2005 recommendations would save DOD about $36 billion over a 20-year period ending in 2025, BRAC implementation is now expected to save about $15 billion, a decrease of 58 percent.
DOD Plans to Spend More and Save Less Than Originally Estimated
Since the BRAC Commission issued its cost and savings projections in 2005, cost estimates to implement the BRAC 2005 recommendations have increased from $21 billion to $31 billion (48 percent) and net annual recurring savings estimates have decreased from $4.2 billion to $4 billion (5 percent) compared to the BRAC Commission’s reported estimates, as shown in table 1. A comparison of the BRAC Commission’s reported projections with DOD’s data shows that estimated implementation costs have increased by $10.1 billion, or 48 percent, and estimated net annual recurring savings have decreased by $212 million, or 5 percent. However, another way to compare expected BRAC costs and savings is by omitting the effects of inflation. We found that, using the same constant dollar basis as the BRAC Commission—meaning inflation is not considered—DOD’s estimated one-time costs to implement BRAC increased to about $28.6 billion, a 36 percent increase in constant dollars, and estimated net annual recurring savings decreased to about $3.4 billion, a 20 percent decrease in constant dollars, compared to the BRAC Commission’s reported estimates. We found that estimated military construction costs accounted for about 64 percent of the increase in expected BRAC one-time costs.
Specifically, the BRAC Commission estimated that to implement the BRAC recommendations, military construction costs would be about $13 billion, whereas DOD’s current estimates for military construction, without inflation, were about $20 billion. We estimated that inflation accounted for about 25 percent, or about $2.6 billion, of the increase in expected one-time costs. This mostly occurred because the BRAC Commission presented its estimates using constant fiscal year 2005 dollars, which do not include the effects of projected inflation, whereas DOD’s budgeted estimates were presented in current (inflated) dollars because budget requests take into consideration projected inflation. Further, the BRAC Commission estimates did not include projected environmental cleanup costs for BRAC-affected bases, consistent with practice in past BRAC rounds, because DOD is required to perform needed environmental cleanup on its property whether a base is closed, realigned, or remains open. Environmental cleanup added about 6 percent, or about $590 million, in expected costs. Finally, other projected expenses, such as operation and maintenance, accounted for about 5 percent, or about $500 million, of the increase in expected costs. Because the BRAC Commission’s data do not include some specific budget categories that are used in the DOD BRAC budget, we could not make direct comparisons and precisely identify all estimated cost and savings changes.
Estimated One-time Costs Have Increased
Our analysis shows that estimated one-time costs to implement 33 BRAC recommendations, representing nearly one-fifth of all the BRAC recommendations for this round, increased by more than $50 million each compared to the BRAC Commission’s estimates. (See app. II for a listing of these recommendations.) DOD’s expected costs to implement 6 of these recommendations increased by a total of about $4 billion.
Specifically, we found:

- A $970 million increase in the estimated costs of consolidating various leased locations and closing other locations of the National Geospatial-Intelligence Agency to Fort Belvoir, Virginia, largely because the agency identified the need for additional supporting facilities, such as a technology center and additional warehouse space, as well as increased costs for information technology and furnishings to outfit the new buildings. According to OSD’s business plan, the COBRA analysis of specific costs and the number of personnel to realign were classified.

- A $700 million increase in the estimated costs of realigning Walter Reed Army Medical Center, D.C., and relocating medical care functions to the National Naval Medical Center, Bethesda, Maryland, and Fort Belvoir, Virginia, largely because planning officials identified the need for additional space and supporting facilities at the receiving installations, which increased estimated military construction costs by almost $440 million. Most of these estimated cost increases are expected to occur at the National Naval Medical Center because of increased requirements to renovate existing facilities, such as the medical center. Several other facilities, such as a parking structure and a larger-than-initially-expected addition to the medical center, increased the construction cost estimates as well.

- A $680 million increase in the estimated costs of relocating the Army’s armor center and school from Fort Knox, Kentucky, to Fort Benning, Georgia, to support the creation of a new maneuver school, largely because the Army identified the need for about $400 million in construction of several facility projects, such as training ranges, instructional facilities, barracks, medical facilities, and a child development center, that were not in the initial estimates. The Army also identified the need for about $280 million more in infrastructure support, such as water, sewer, and gas lines, as well as roads, to support the new maneuver school at Fort Benning.

- A $680 million increase in the estimated costs of closing Fort Monmouth, New Jersey, largely because of increases in expected military construction costs, such as $375 million at Aberdeen Proving Ground, which is to receive many of the missions from the planned closure of Fort Monmouth, for additional facilities such as a communications equipment building and an instructional auditorium. The Army also identified the need for additional infrastructure improvements at Aberdeen, such as utilities, roads, and information technology upgrades. The Army determined that its military construction estimates would increase because the existing facilities at Aberdeen could not accommodate an increase in the size of Fort Monmouth’s Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance mission as originally estimated. Moreover, military construction costs to relocate the U.S. Army Military Academy Preparatory School from Fort Monmouth to West Point, New York, increased about $175 million, largely because the scope of the facility construction grew from approximately 80,000 square feet to more than 250,000 square feet and planning officials identified the need to spend about $40 million to prepare the site for construction, particularly for rock removal, given the terrain at West Point. DOD’s cost estimates for environmental cleanup at Fort Monmouth have also increased by more than $60 million.

- A $600 million increase in the estimated costs of co-locating miscellaneous OSD, defense agency, and field activity leased locations to Fort Belvoir and Fort Lee, Virginia, largely because of increases in military construction costs resulting from the identification of various required facilities at the receiving installations that were not included in the original estimate. For example, construction costs increased because planners determined that a structured parking garage costing about $160 million would be needed to accommodate the increase in personnel with parking needs, compared to the original estimate of nearly $3 million for a flat surface parking lot. An additional estimated cost increase of nearly $50 million is needed to cover a heating and cooling plant and various safety and antiterrorism protection features. Estimated costs to implement this recommendation also increased by more than $160 million for increased information technology needs.

- A $550 million increase in the estimated costs of establishing the San Antonio Regional Medical Center and realigning enlisted medical training to Fort Sam Houston, Texas, largely because planning officials identified additional requirements to move medical inpatient care functions from Wilford Hall Medical Center at Lackland Air Force Base, Texas, to Fort Sam Houston, including operating rooms and laboratory facilities not included in the original estimate. Additionally, requirements for instructional and laboratory space increased to accommodate an increase in the number of students expected to receive medical training at Fort Sam Houston. Based on the services’ additional analysis and other planning assumptions, the number of students now expected to arrive at Fort Sam Houston for enlisted medical training increased by more than 2,700 (44 percent), from about 6,270 students to approximately 9,000 students.

BRAC implementing officials told us that information gained from site visits, such as better information on the actual condition and availability of certain facilities, was a key factor in why the department’s estimates changed from the BRAC Commission’s estimates.
For example, DOD’s estimated cost increased over earlier projections as better data became available on the realignment of the Army Forces Command headquarters resulting from the closure of Fort McPherson, Georgia. These data showed that the Command, once realigned to Fort Bragg and Pope Air Force Base, North Carolina, would be located in over 20 different buildings. The Army therefore decided to preserve existing operational efficiencies by keeping the entire Command intact in one location, as it is now at Fort McPherson, by building a new facility at Fort Bragg, although this plan increased the expected costs to implement the recommendation. Moreover, data for some recommendations changed as certain requirements became better defined over time. For example, personnel requirements related to the recommendation to activate a brigade combat team and its associated headquarters unit at Fort Hood, Texas, and then relocate it to Fort Carson, Colorado, became better defined after the BRAC Commission made its estimates. During the BRAC decision-making process in 2005, the Army based its facility requirement on about 3,200 soldiers per brigade combat team but subsequently increased the requirement to 3,900 soldiers per brigade combat team as it budgeted for needed facilities in formulating the fiscal year 2008 BRAC budget submission. Likewise, the personnel requirement for facilities for an associated headquarters unit increased from 300 soldiers in the initial analysis to 900. As a result, the number of personnel to be accommodated at Fort Carson to implement this BRAC recommendation increased by 37 percent from what was initially expected, which in turn increased the size of the facilities needed to house the additional soldiers and the expected cost of implementing the recommendation.
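The 37 percent figure for Fort Carson follows directly from the soldier counts above; a minimal check (the variable names are ours, the counts are from the report):

```python
# Planning-assumption changes for the Fort Carson realignment.
brigade_initial, brigade_revised = 3200, 3900  # soldiers per brigade combat team
hq_initial, hq_revised = 300, 900              # associated headquarters unit

initial_total = brigade_initial + hq_initial   # 3,500 soldiers planned in 2005
revised_total = brigade_revised + hq_revised   # 4,800 soldiers budgeted for FY 2008
increase_pct = (revised_total - initial_total) / initial_total * 100

print(round(increase_pct))
# prints: 37
```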
As in all previous BRAC rounds, the BRAC Commission used DOD’s COBRA model to generate its estimates. Both we and the BRAC Commission acknowledged in our respective BRAC 2005 reports that the COBRA model, while valuable as a comparative tool, does not provide the estimates that DOD is expected to use in formulating the BRAC budget and against which Congress will appropriate funds. We have stated that COBRA does not necessarily reflect with a high degree of precision the actual costs or savings ultimately associated with BRAC implementation. We have also stated that the services are expected to refine COBRA estimates following the BRAC decision-making process to better reflect expected costs and savings using site-specific information. While COBRA estimates do not reflect the actual costs and savings ultimately attributable to BRAC, we have recognized in the past, and continue to believe, that COBRA is a reasonably effective tool for its designed purpose of aiding BRAC decision making, and that the BRAC Commission’s COBRA-generated estimates are the only reasonable baseline for identifying BRAC cost and savings changes since the recommendations became effective.

Savings Estimates Have Decreased

Our analysis shows that estimated net annual recurring savings from implementing 13 BRAC recommendations decreased by more than $25 million each compared to the BRAC Commission’s estimates. (See app. III for a listing of these recommendations.) The BRAC Commission estimated that BRAC 2005 would result in net annual recurring savings of $4.2 billion beginning in fiscal year 2012; however, we calculated that the net annual recurring savings have decreased to $4 billion (5 percent). DOD attributed the decrease in its savings estimate primarily to changes in initial assumptions or plans. We identified several BRAC recommendations for which savings estimates decreased compared to the BRAC Commission’s estimates.
Specifically, we found:

- A $90 million decrease in the estimated savings of closing various leased locations of the National Geospatial-Intelligence Agency and realigning other locations to Fort Belvoir, Virginia. Initially, officials at the National Geospatial-Intelligence Agency and the OSD BRAC Office explained that fewer personnel eliminations caused some of the decrease in savings. Additionally, the day before we released this draft for comment, an OSD BRAC Office official explained to us that the office had underreported the estimated savings from expected lease terminations in the fiscal year 2008 BRAC budget submission. However, time did not permit us to analyze this information.

- An $80 million decrease in the estimated savings of closing three chemical demilitarization depots (Deseret Chemical Depot, Utah; Newport Chemical Depot, Indiana; and Umatilla Chemical Depot, Oregon), largely because the Army does not expect to close these facilities within the BRAC statutory implementation time frame; DOD must complete the chemical demilitarization mission to comply with treaty obligations before these facilities can close, which reduced expected savings.

- A $70 million decrease in the estimated savings of establishing joint bases at multiple locations, largely because the Army did not include its share of the expected savings due to unresolved issues concerning joint base operations, whereas the other services included the COBRA-generated savings in DOD’s BRAC budget submission to Congress. OSD had not approved the business plan for this recommendation; thus, additional information on expected savings was not available for us to review.

- A $50 million decrease in the estimated savings of realigning the Defense Logistics Agency’s supply, storage, and distribution network, largely because of the need to retain higher inventory levels than anticipated and fewer personnel eliminations.
DOD Will Take Longer to Recoup Up-Front Costs Than the BRAC Commission Expected

DOD’s current estimates to implement the BRAC recommendations show that it will take until 2017 for the department to recoup its up-front costs, 4 years longer than the BRAC Commission’s estimates indicated. Historically, it has taken DOD about 6½ years to recoup up-front costs for actions such as constructing new facilities, providing civilian severance pay, or moving personnel and equipment as a result of implementing BRAC recommendations. Our analysis of the BRAC Commission’s estimates shows that the time required to recoup such costs would be 8 years, or in 2013. However, using DOD’s current estimates, our analysis shows that the time required would be 12 years, or in 2017, as shown in figure 3. Similarly, because DOD expects to spend more and save less than the BRAC Commission estimated, projected 20-year savings have decreased by more than half. The BRAC Commission estimated that the implementation of this BRAC round would save about $36 billion over a 20-year period ending in 2025. However, based on our analysis of DOD’s current estimates, implementation of this BRAC round will save about $15 billion, a decrease of $21 billion (58 percent), in fiscal year 2005 constant dollars. OSD BRAC officials told us that, although the 20-year savings estimate is less than the BRAC Commission expected, the department expects the implementation of this BRAC round to produce capabilities that will enhance defense operations and management. Moreover, DOD expects a majority of the costs and savings to come from implementing a small percentage of the BRAC recommendations. For example, we determined that DOD expects the implementation of about 13 percent of the recommendations to incur 65 percent of the expected one-time costs (see app.
IV); 15 percent of the recommendations to generate 85 percent of the expected annual recurring savings (see app. V); and 16 percent of the recommendations to generate 85 percent of the expected 20-year savings (see app. VI).

DOD’s Estimates to Implement BRAC Recommendations Will Likely Continue to Evolve, and Savings Estimates May Be Overstated

Based on our analysis, we believe DOD’s cost and savings estimates to implement the BRAC 2005 recommendations are likely to continue to evolve in future BRAC budget submissions. First, DOD’s estimates for some key recommendations are uncertain because they are based on implementation details that are still evolving, especially for complex recommendations such as establishing 12 new joint bases. Second, military construction costs could increase due to various economic factors and a possible readjustment of Army construction costs. Third, environmental cleanup costs for BRAC implementation are preliminary and are likely to increase. Furthermore, we believe that DOD’s annual recurring savings estimates may be overstated, largely because 46 percent of these savings is attributable to questionable military personnel savings.

Details for Several Key Recommendations Are Uncertain and Estimates Are Likely to Change

Many details involved in the implementation of several key BRAC recommendations were uncertain when the department submitted its fiscal year 2008 BRAC budget submission to Congress in February 2007; thus, these estimates are likely to continue to change in succeeding BRAC budget submissions. OSD officials told us that some estimates could change as implementation planning progresses and that, while initial planning for many recommendations was very difficult, they wanted to provide Congress with the best budget data available at the time of the budget submission.
However, until DOD resolves implementation details surrounding its BRAC recommendations, it will continue to have difficulty estimating costs and savings precisely, and the resolution of these details could cause the department’s cost and savings estimates to change. For example:

- Realigning Walter Reed Army Medical Center, Washington, D.C. Multiple groups reviewed current and future medical care for wounded soldiers, and DOD officials told us that cost estimates in DOD’s next BRAC budget submission to Congress could change pending the outcomes of these various review groups. OSD officials told us implementation costs will likely increase from the reported $1.7 billion estimate if the time frame to complete the recommendation is accelerated, as recommended by OSD’s independent panel reviewing rehabilitative care at Walter Reed.

- Co-locating miscellaneous OSD, defense agency, and field activity leased locations to Fort Belvoir, Virginia. The Army had planned to relocate these agencies and activities to Fort Belvoir’s Engineering Proving Ground, but in August 2007 the Army announced it was considering a nearby location in Springfield, Virginia, currently belonging to the U.S. General Services Administration. Then, in October 2007, the Army announced it was also considering another site in Northern Virginia for relocating about 6,000 personnel. The reported cost estimate of $1.2 billion to implement this recommendation is likely to change depending on the Army’s site selection for relocating these OSD offices, defense agencies, and defense field activities.

- Establishing Army Centers of Excellence at several locations. The Army was not certain about the number of personnel it expected to eliminate as a result of combining several Army schools and centers at the time of the fiscal year 2008 BRAC budget submission to Congress. Based on our analysis, once the Army resolves the implementation details for these recommendations, the combined net annual savings estimate of $332 million is likely to change in the next BRAC budget submission.

- Realigning Fort Bragg, North Carolina. The decision as to where on Eglin Air Force Base, Florida, to relocate the Army’s 7th Special Forces Group, currently located at Fort Bragg, remained uncertain as of August 2007. According to officials at Eglin, the planned location of the Special Forces Group could change because of various space and noise issues associated with the installation’s implementation of another BRAC recommendation to establish a joint training site for the Joint Strike Fighter aircraft, also at Eglin Air Force Base. DOD’s estimated cost of $343 million in its fiscal year 2008 BRAC budget submission to Congress would change depending on the final site for the 7th Special Forces Group at Eglin.

- Establishing joint basing at multiple locations. The services have yet to agree on many of the details involved with this recommendation to create 12 joint bases. According to BRAC implementing officials and recent testimony before Congress, the organizational and personnel requirements for these joint bases are still uncertain, making it difficult to provide a realistic estimate of the costs or savings from implementing this recommendation. DOD currently estimates net savings of $116 million annually.

- Realigning medical enlisted training at Fort Sam Houston, Texas. Part of this recommendation required the services to co-locate their medical training at one location, with the potential of transitioning to a joint training effort. Fort Sam Houston officials told us that the expected savings from this recommendation were anticipated based on a joint training effort. However, BRAC implementing officials told us the services had not yet agreed on the final joint curriculum when the fiscal year 2008 BRAC budget submission was provided to Congress; thus, the number of instructors needed and several other details remained uncertain. These officials told us that once these details become final, the expected net savings, which DOD estimated at about $91 million annually, could change.

- Creating a Naval Integrated Weapons and Armaments Research, Development and Acquisition, Test and Evaluation Center, mostly at Naval Air Weapons Station China Lake, California. Navy officials told us they were uncertain how many personnel associated with a testing range mission would realign as they planned for the implementation of this recommendation. Moreover, the DOD Inspector General recently reported that the Navy did not adequately document the number of personnel expected to realign in this recommendation’s proposed business plan, citing that the number of personnel to move has ranged from about 1,660 to nearly 650. Until OSD resolves the implementation details surrounding this recommendation, it will continue to have difficulty precisely estimating the associated costs and savings. DOD estimated in the fiscal year 2008 BRAC budget submission that implementing this recommendation will cost about $427 million, and OSD estimated it will accrue net recurring savings of $68 million annually after 2011.

- Co-locating medical command headquarters. Various BRAC implementing officials involved in planning the implementation of this recommendation told us that, depending on the still undecided final site location and the number of personnel to relocate, the $50 million in estimated costs to implement this recommendation could change.

These recommendations illustrate the evolving nature of implementation planning and the likelihood that the associated cost and savings estimates will change.
They are not the only recommendations that may experience changes in costs or savings; however, they are among the recommendations from which DOD expects to incur the most costs and savings relative to other BRAC 2005 recommendations. Thus, changes to cost and savings estimates for these recommendations will have a larger effect on the overall BRAC implementation estimates.

Military Construction Costs Could Increase

Military construction costs could increase due to various economic pressures and if the Army’s new initiatives designed to reduce construction costs do not achieve the planned results. DOD’s current cost estimate of $31 billion to implement the BRAC recommendations includes about $21 billion for military construction, which could increase because of greater than expected inflation and the market demand for new construction. Since the majority of expected BRAC costs are for military construction, systemic increases in the cost of construction could have a considerable effect on the total cost of implementing BRAC 2005. This is important because DOD’s estimate of $21 billion for military construction is the single largest cost item associated with implementing the BRAC 2005 recommendations and is unprecedented, given that DOD spent less than $7 billion on military construction in the four previous BRAC rounds combined. In addition, we recognize that estimating costs in construction programs that span years of effort is difficult. As such, DOD told us it will continue to monitor reasons for potential cost growth in BRAC construction contracts. Additionally, BRAC implementing officials expressed concern that construction costs have the potential to increase in areas already experiencing high commercial construction demand, such as the National Capital Region, Washington, D.C., and San Antonio, Texas.
For example, DOD estimated it could cost about $3.4 billion in construction to implement several recommendations in the National Capital Region, Washington, D.C. (the realignment of Walter Reed Medical Center, the relocation of the National Geospatial-Intelligence Agency, and the realignment to Fort Belvoir due to numerous terminations of DOD-leased space in the Washington, D.C. area). Moreover, DOD estimated it could cost about $1.3 billion in construction to implement the recommendation to establish a new joint medical enlisted training center and relocate Lackland Air Force Base’s medical inpatient care to Fort Sam Houston, San Antonio, Texas. U.S. Army Corps of Engineers (USACE) officials told us they are concerned about what effect construction demand might have on bid proposals given the sizable amount of construction to take place in a limited amount of time to meet the BRAC statutory completion time frame. Additionally, service officials at various installations expressed concern about the potential for increases in construction costs because of ongoing reconstruction due to damage caused by Hurricane Katrina, coupled with the large volume of anticipated BRAC construction that could also affect bid proposals. Similar to the current commercial construction market in general, military construction has been affected by rising costs for construction labor and materials for the last several years. USACE officials told us the actual rate of construction inflation for the last several years has exceeded the federal government’s inflation rate used for budgetary purposes, which is required to be used in budgeting for construction projects. While this difference was as high as 6.1 percentage points in 2004, the difference between the actual rate of construction inflation and the government’s budgetary inflation rate has diminished recently. 
USACE officials told us that if the actual rate of inflation continues to exceed the budgeted rate as implementation proceeds, and if construction material costs are higher than anticipated, they would either have to redirect funding from other sources to pay for construction projects or reduce the scope of some projects. However, this trend may not continue into the future, depending on the economics of the construction industry. USACE is currently transforming and streamlining its process for managing and contracting for military construction. USACE officials told us that these transformation efforts could help in meeting the Army’s expected large volume of military construction as well as the costs associated with BRAC and other force structure initiatives such as overseas rebasing and Army modularity. USACE has developed a strategy intended to reduce construction costs by 15 percent and construction time by 30 percent. Through its transformation strategy, USACE intends to change how it executes construction projects by standardizing facility designs and processes; expanding the use of premanufactured buildings, in which sections or modules of a building are constructed elsewhere and transported to the construction site for assembly; executing military construction as a continuous building program rather than a collection of individual projects; and emphasizing commercial rather than government building standards, which would allow contractors greater flexibility to use a wider variety of construction materials to meet requirements. The Army has already incorporated a 15 percent reduction into its BRAC construction estimates and has budgeted accordingly. Although USACE officials expressed optimism that these cost savings will be realized, and preliminary results are encouraging, those results are based on recent, limited experience with the new process.
Specifically, USACE initiated five construction pilots in 2006, all of which were awarded under its price limit. However, if the cost of construction materials escalates or if there is a shortage of construction labor, especially in locations of high construction volume such as Washington, D.C., and San Antonio, Texas, USACE told us that some of the expected military construction transformation savings could decrease. Given that the Army is expected to incur almost 60 percent of the estimated BRAC construction costs ($12 billion), the impact on overall BRAC costs could be considerable if the Army is unable to achieve its projected 15 percent savings, especially since USACE officials told us the majority of the Army’s BRAC-related construction projects incorporated the 15 percent reduction into their estimates.

Environmental Cleanup Costs Are Preliminary and Likely to Increase

We reported in January 2007 that DOD’s available data showed that at least $950 million will be needed to complete environmental cleanups underway for known hazards on the military bases scheduled for closure as a result of the BRAC 2005 round. Our prior work has shown that some closures result in more intensive environmental investigations and the uncovering of additional hazardous contamination, resulting in higher cleanup costs than DOD predicted and budgeted. For example, additional hazardous contamination was found at the former McClellan Air Force Base, California, which was recommended for closure in 1995; the discovery of traces of plutonium during a routine cleanup in 2000 caused cleanup costs to increase by $21 million. As certain bases undergo more complete and in-depth environmental assessments, a clearer picture of environmental cleanup costs will likely emerge.

Annual Recurring Savings Estimates May Be Overstated

DOD’s estimated annual recurring savings resulting from base closures and realignments may be overstated by about 46 percent.
Currently, DOD calculates total estimated annual recurring savings of about $4 billion. This amount includes $2.17 billion in eliminated overhead expenses, such as the costs no longer needed to operate and maintain closed or realigned bases and reductions in civilian salaries, which will free up funds that DOD can then use for other defense priorities. However, DOD’s annual recurring savings estimate also includes $1.85 billion in military personnel entitlements, such as salaries and housing allowances, for military personnel DOD plans to shift to other positions but does not plan to eliminate. While DOD disagrees with us, we do not believe that transferring personnel to other locations produces tangible dollar savings outside the military personnel accounts that DOD can use to fund other defense priorities, since these personnel will continue to receive salaries and benefits. We recognize that DOD is trying to transform its infrastructure and that the Secretary of Defense’s primary goal for the BRAC 2005 process was military transformation. We also recognize DOD’s position that military personnel reductions allow the department to reapply these personnel to support new capabilities and improve operational efficiencies. Nonetheless, DOD’s inclusion of military personnel entitlements in its estimates of annual recurring savings could create a false impression that all of its reported savings free up funds that DOD could apply elsewhere. Because DOD’s BRAC budget submission to Congress does not distinguish between recurring savings attributable to military personnel entitlements and recurring savings that will make funds available for other defense priorities, DOD’s overall estimated annual recurring savings appear almost twice as large as those that will actually be realized.
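The roughly 46 percent share attributed to military personnel entitlements can be verified from the two components above; a brief sketch (dollar figures as reported, variable names ours):

```python
# DOD's estimated annual recurring savings components, in billions of dollars.
overhead_savings = 2.17     # eliminated overhead expenses and civilian salary reductions
personnel_entitlements = 1.85  # entitlements for military personnel DOD will retain

total = overhead_savings + personnel_entitlements      # ~$4.0 billion total
personnel_share = personnel_entitlements / total * 100  # share from entitlements

print(round(total, 2), round(personnel_share))
# prints: 4.02 46
```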
In addition, our analysis shows that the current percentage of estimated annual recurring savings from military personnel entitlements (46 percent) is considerably higher than in the last BRAC round, in 1995, when DOD derived about 5 percent of BRAC annual recurring savings from military personnel entitlements. During the previous four BRAC rounds, which took place between 1988 and 1995, the military was downsizing in personnel strength, yet the average percentage of annual recurring savings DOD derived from military personnel entitlements was 26 percent. We reported in July 2005 that military personnel position eliminations are not a true source of savings, since DOD intends to reassign or shift personnel to other positions without reducing the military end strength associated with the corresponding BRAC recommendation. Moreover, the BRAC Commission stated in its September 2005 report that DOD’s inclusion of savings from eliminating military personnel positions distorts the actual savings attributable to BRAC recommendations. The service officials we interviewed could not link actual military personnel eliminations directly to implementing a BRAC recommendation, as illustrated in the following:

- Army officials said the Army’s military end strength will not be reduced due to any BRAC recommendations. In fact, the Army plans to increase its active-duty end strength by 65,000 over the next several years.

- Navy officials said they anticipate reducing the Navy’s end strength by 26,000 active-duty military personnel between fiscal years 2006 and 2011. However, they told us they have not linked any of these anticipated reductions to BRAC recommendations.

- Air Force officials said they are in the process of reducing the service’s active-duty end strength by about 40,000. However, Air Force officials said that they cannot link any reductions in military end strength to implementing their BRAC recommendations and that the personnel drawdown is independent of BRAC.
DOD policy and Office of Management and Budget’s guidance require that an economic analysis be explicit about the underlying assumptions used to estimate future costs and benefits, which we believe includes estimating BRAC savings. If the savings we question were omitted from DOD’s savings estimates, net annual recurring savings would decrease by about 46 percent. As a result, DOD’s BRAC budget submission does not provide enough information to allow Congress full oversight of the savings that can be applied to other programs outside of the military personnel account. Greater transparency over the assumptions behind DOD’s BRAC savings estimates would help to promote independent analysis and review and facilitate congressional decision making related to the multibillion-dollar BRAC implementation program. In addition to taking issue with how DOD characterizes military personnel savings, we also disagree with DOD claiming savings for closing a base that is actually going to stay open. At the time of DOD’s fiscal year 2008 BRAC budget submission to Congress, DOD claimed about $260 million in annual recurring savings for closing Cannon Air Force Base, New Mexico, which is now going to remain open. Although DOD recommended closing Cannon in May 2005 as a proposed recommendation, the BRAC Commission modified the proposed closure, and stated in its September 2005 report to the President that Cannon could remain open if the Secretary of Defense identified a new mission for the base and relocated the base’s fighter wing elsewhere. Subsequently, the Air Force announced in June 2006 that Cannon would remain open and the 16th Special Operations Wing, currently located at Hurlburt Field, Florida, would relocate to Cannon. Nevertheless, DOD still claimed about $200 million in annual savings for military personnel entitlements and about $60 million in annual savings for categories such as base operation and maintenance in its fiscal year 2008 BRAC budget. 
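The arithmetic behind the 46 percent figure can be checked directly from the two component amounts cited earlier in this report. The following is a minimal sketch, using only the report's own figures ($2.17 billion in overhead savings and $1.85 billion in personnel entitlements); the variable names are illustrative, not DOD's.

```python
# Illustrative check of the savings composition discussed above.
# The two component figures come from the report; the rest is arithmetic.
overhead_savings = 2.17        # $ billions: eliminated overhead (base operations, civilian salaries)
personnel_entitlements = 1.85  # $ billions: entitlements for personnel shifted, not eliminated

total_reported = overhead_savings + personnel_entitlements
entitlement_share = personnel_entitlements / total_reported
apparent_vs_realized = total_reported / overhead_savings

print(f"Total reported annual recurring savings: ${total_reported:.2f} billion")
print(f"Share from military personnel entitlements: {entitlement_share:.0%}")
print(f"Reported savings relative to funds actually freed up: {apparent_vs_realized:.2f}x")
```

The last ratio (about 1.85x) is what underlies the statement that reported savings appear almost twice as large as those that will actually be realized.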
Officials at the Air Force BRAC office told us that they claimed these annual savings because they disestablished the fighter wing at Cannon, although they said most of the military personnel and aircraft associated with the disestablished fighter wing were reassigned or relocated and will continue to operate. Furthermore, we have taken issue with estimated savings for several Air National Guard BRAC recommendations. As we reported in May 2007, the implementation of several Air National Guard recommendations is expected to result in annual recurring costs of $53 million rather than the annual recurring savings of $26 million estimated by the BRAC Commission—a $79 million per year difference that occurred primarily due to language in the BRAC Commission’s report that prevents the Air National Guard from reducing its current end strength in some states.

DOD Has Made Progress Implementing BRAC, but Several Challenges Increase Risk That All Recommendations Might Not Be Completed by the Statutory Deadline

DOD has made progress implementing BRAC 2005, but faces a number of synchronization and coordination challenges related to implementing many BRAC recommendations. These challenges increase DOD’s risk of not meeting the September 2011 statutory deadline. For example, personnel movements involving tens of thousands of personnel must be synchronized with the expenditure of billions of dollars to construct or renovate facilities needed to support them by 2011. The time frames for completing many BRAC recommendations are so closely sequenced and scheduled to be completed in 2011 that any significant changes in personnel movement schedules or construction delays could jeopardize timely completion. Also, some recommendations are dependent on the completion of others, and delays in completing some interrelated actions might cause a domino effect that could jeopardize DOD’s ability to meet the statutory 2011 BRAC deadline.
BRAC 2005, unlike prior BRAC rounds, included more joint recommendations involving more than one military component, thus creating challenges in achieving unity of effort among the services and defense agencies.

DOD Has Made Progress Implementing BRAC

DOD’s implementation of BRAC 2005 has progressed since the recommendations became effective in November 2005. For example, Navy officials reported that they completed implementing 14 BRAC actions involving the closure of Navy reserve centers and recruiting districts. To dedicate resources and facilitate communications to plan for the implementation of hundreds of BRAC actions, the military services and affected defense agencies have their own BRAC program management offices. Over the past 2 years, these offices have begun the planning and design for the $21 billion military construction program necessitated by the most recent BRAC round, including initiating site surveys and environmental assessments needed before military construction projects can begin. OSD realized that the complexity of the BRAC 2005 round required it to strategically manage and oversee the entire BRAC 2005 program. During prior BRAC rounds, OSD’s oversight of BRAC implementation was typically limited to adjudicating disagreements among the services over implementation issues, according to OSD BRAC officials. However, for this BRAC round, the Principal Deputy Under Secretary of Defense for Acquisition, Technology and Logistics stated in 2005 that the large number of transformational recommendations, particularly recommendations to promote joint facility operations, would present OSD with significant implementation challenges. To meet these challenges, the department initiated a process to develop business plans that laid out the requisite actions, the timing of those actions, and the costs and savings associated with implementing each recommendation.
Additionally, OSD recognized that the development of business plans would serve as the foundation for the complex program management necessary to implement the BRAC 2005 recommendations. As such, the primary implementation activity of the military services and defense agencies has been to develop about 240 business plans for OSD review and approval. According to OSD, these business plans have been used as the primary vehicle to delineate resource requirements and generate military construction requirements. As of October 2007, OSD has approved about 220 business plans. Some business plans remain in draft and have not been approved for various reasons. According to OSD, these business plans involve complex issues associated with the services’ lines of authority and sizeable personnel realignments that OSD BRAC officials told us they intend to resolve soon. However, OSD has deferred the approval of about 15 business plans pending the development of broader policies to facilitate the implementation of the recommendations associated with joint basing and chemical demilitarization. Finally, officials in OSD’s BRAC Office told us they plan to continue reviewing business plans as part of their comprehensive, centrally managed oversight of the BRAC program. Recognizing that business plans provide important implementation details, in June 2007 OSD directed the services and defense agencies to update these business plans twice a year in conjunction with OSD program reviews.

Challenges in Synchronizing Many BRAC Actions Could Hinder DOD’s Ability to Complete Recommendations within the Statutory Time Frame

The department faces a number of challenges related to synchronizing the completion of many BRAC recommendations in order to meet the statutory 2011 time frame.
For example, personnel movements involving tens of thousands of military and civilian personnel must be synchronized with billions of dollars’ worth of construction or renovation activities needed to ensure they have the necessary facilities to support them. Also, the implementation of some recommendations is dependent on the completion of other recommendations before facilities can be renovated for new uses, and some DOD installations are affected by more than six separate recommendations. Delays in synchronizing and completing these interrelated actions could cause a domino effect that might jeopardize DOD’s ability to meet the statutory 2011 BRAC deadline. Also, synchronizing the implementation of several force structure initiatives could further complicate DOD’s BRAC implementation efforts.

DOD Must Synchronize Personnel Movements with Construction Time Frames

Implementation challenges primarily stem from the complexity of synchronizing the realignment of over 123,000 personnel with the completion of over $21 billion in new construction or renovation projects. According to DOD officials, construction schedules are often the primary driver in setting BRAC implementation timelines due to the amount of time needed to design and build new facilities or renovate existing facilities. The time frames for completing many BRAC recommendations are closely sequenced and scheduled to be completed in 2011, but any significant changes in personnel movement schedules or construction delays could jeopardize DOD’s ability to meet the statutory 2011 BRAC deadline.
According to OSD’s approved business plans and DOD officials, the following are some BRAC recommendations that could experience synchronization challenges: Realigning Army reserve components, constructing 125 new Armed Forces Reserve Centers, and closing 387 existing reserve component facilities: Army reserve component officials told us they are managing the construction of new Armed Forces Reserve Centers in a compressed time frame. The data in our recently issued report show that 26 percent of the BRAC actions implementing these recommendations will begin in fiscal year 2010, according to the approved business plans. This approach compresses the amount of time available to construct the facilities and respond to any construction delays that might arise, which increases the risk that the projects might not be completed in time to meet the BRAC statutory completion deadline. On the other hand, Army officials told us that they would assume less risk because many of these projects are small and can be completed within shorter time frames compared to larger projects. For example, the Army considered starting construction on the Armed Forces Reserve Centers toward the beginning of the implementation period and closing older reserve facilities. Instead, more complex and costly recommendations became a higher priority and reserve center actions were delayed. Co-locating miscellaneous OSD, defense agency, and field activity leased locations at Fort Belvoir, Virginia: OSD officials told us that these activities have scheduled the arrival of over 6,000 personnel by September 1, 2011—2 weeks before the BRAC statutory deadline—to implement over 30 discrete actions associated with this recommendation. In addition, recent developments could affect the timing of this realignment to Fort Belvoir because, at the time of our review, the Army was revising its implementation planning to accommodate the possibility of using nearby land owned by the U.S. 
General Services Administration or another location in Northern Virginia, which will require additional studies to determine environmental impacts and transportation requirements at the new location, according to Fort Belvoir officials. If the process of identifying alternative site locations results in delaying the movement of miscellaneous OSD offices, defense agencies, and field offices, this could jeopardize meeting the statutory deadline. Realigning the National Geospatial-Intelligence Agency to Fort Belvoir, Virginia: The fiscal year 2008 BRAC budget submission shows that construction is expected to be completed by June 2011, which allows 3 months before the statutory deadline to move its missions. To mitigate mission impact and the risk of not completing these moves if construction is delayed, the agency plans to begin moving its personnel in phases starting in April 2010. Realigning Walter Reed Army Medical Center, Washington, D.C., to the National Naval Medical Center, Maryland, and Fort Belvoir, Virginia: Completion is scheduled by September 2011 according to the business plan. The medical joint cross-service group that developed this recommendation in 2005 stated that delays in constructing and occupying the buildings could risk the timely completion of this recommendation and concluded that aggressive actions would be needed to meet the 6-year deadline. Army and OSD officials testified before Congress in January 2007 that the time frame was “very tight” for completing this recommendation. 
Also, in response to various concerns about the quality of care for warfighters at Walter Reed, an official with the Army Surgeon General’s Office told us in September 2007 that certain parts of the recommendation supporting the construction of intensive medical care facilities are expected to be completed sooner than originally planned, while the move to the National Naval Medical Center, Maryland, and Fort Belvoir, Virginia, is still scheduled to be completed by September 2011. DOD’s standard construction schedules for medical facilities indicate that new hospitals, or additions and renovations to an existing hospital, generally take longer to complete than other facilities.

Some Recommendations Are Dependent on the Completion of Others

In some cases, DOD’s synchronization challenges are exacerbated when the completion of one recommendation is dependent on the completion of another. For example, the BRAC recommendation to close Fort Monmouth, New Jersey, involves relocating personnel from the Army’s Communications-Electronics Life Cycle Management Command currently located at Monmouth to Aberdeen Proving Ground, Maryland. The new facilities at Aberdeen are expected to be renovated by February 2011. However, DOD cannot begin those renovations until the training activity currently occupying the Aberdeen facilities relocates to Fort Lee, Virginia, an action associated with the implementation of another BRAC recommendation. Consequently, the training activity cannot vacate the Aberdeen space until a new facility is built for it at Fort Lee sometime in 2009. This interdependence is shown in figure 4. Likewise, such interdependence could undermine the Navy’s ability to complete within the statutory deadline the recommendation to consolidate various Navy-leased locations onto government-owned property.
The business plan that describes the actions and time frames for moving various Navy-leased locations onto government-owned property stated that the Navy will begin renovating space for the move to Arlington, Virginia, in September 2008. However, the current occupant of the space—a component of the Defense Information Systems Agency—is not scheduled to vacate the space the Navy is to move into until June 2011 because the Defense Information Systems Agency component needs to wait until it can move into newly constructed space at Fort Meade, Maryland—an action associated with another BRAC recommendation. Although both DOD components are working on a solution, the business plans for these two recommendations identified several options for meeting the 2011 BRAC deadline, such as having the Navy occupy “portable facilities,” build a new facility, or explore other workarounds to meet the statutory time frame.

Some Installations Affected by Multiple Recommendations

Another factor that could threaten the timely completion of some of the BRAC recommendations is the number of DOD installations that are affected by more than one recommendation. Based on BRAC Commission data, 27 installations are affected by six or more BRAC recommendations, including Fort Belvoir, Virginia; Fort Sam Houston, Texas; Lackland Air Force Base, Texas; Wright-Patterson Air Force Base, Ohio; Naval Station Norfolk, Virginia; Aberdeen Proving Ground, Maryland; and Redstone Arsenal, Alabama. In addition to their routine duties for facility management, installation officials are responsible for synchronizing and coordinating the movements of personnel with the availability of facilities.
The following are examples of installations affected by multiple recommendations: Fort Belvoir, Virginia: Officials responsible for implementing the BRAC actions associated with 14 separate recommendations told us that they need to synchronize the availability of various facilities to accommodate the increase of nearly 24,000 personnel expected to arrive, primarily as a result of BRAC recommendations resulting in the closure or realignment of numerous DOD agencies and activities. These officials said that they have concerns about meeting the overall time frame because their plans do not allow for any delays in construction projects or funding. Fort Belvoir officials told us they are encountering challenges when planning the synchronization of the large volume of construction and personnel movement throughout the implementation period. For example, the Army initially planned to site the implementation of 2 recommendations (realigning the National Geospatial-Intelligence Agency and co-locating miscellaneous OSD, defense agency, and field activity leased locations) at Fort Belvoir that would have an unfavorable impact on the surrounding community due to increased traffic congestion. Though Fort Belvoir in October 2007 announced new plans to obtain property near Fort Belvoir that might lessen traffic congestion for the move of miscellaneous OSD, defense agency, and field activity leased locations, Fort Belvoir officials told us that these plans could raise new implementation challenges to meet the statutory deadline because of additional time needed for environmental impact studies, planning and design of new construction, and demolition of existing structures at the new proposed site. 
Fort Sam Houston, Texas: Installation officials at Fort Sam Houston told us that they have to synchronize numerous actions involving eight separate BRAC recommendations and have concerns about coordinating the availability of facilities—either to be constructed or renovated—with the planned net increase of over 10,000 personnel. Furthermore, officials told us that the lack of guidance on how installation officials will establish a joint base with nearby Lackland and Randolph Air Force Bases, Texas, in accordance with the BRAC recommendation on joint basing exacerbates the uncertainty in planning for the implementation of these recommendations.

Force Structure Initiatives Further Complicate DOD’s BRAC Implementation Efforts

Two Army force restructuring initiatives—modularity and the overseas rebasing strategy—could exacerbate the Army’s BRAC synchronization challenges. The Army considers modularity to be the most extensive reorganization of its force since World War II, in which it restructures itself from a division-based force to a more agile and responsive modular brigade-based force. According to Army estimates, this initiative will require a significant investment through fiscal year 2011. DOD’s Global Defense Posture Realignment Plan, also known as overseas rebasing, will result in a global realignment of U.S. forces and installations, including the planned transfer to American territory of up to 70,000 defense personnel and about 100,000 family members and civilian employees currently living overseas. Mostly as a result of these force structure initiatives and BRAC, the Army plans to relocate over 150,000 soldiers and civilian personnel by fiscal year 2012, representing over 20 percent of the Army’s total projected active-duty and civilian personnel end strength. To illustrate, Army installations that expect personnel increases of greater than 5,000 over the next 5 years, as of March 2007, are shown in table 2.
As shown in table 2, some installations are expecting substantial growth; Forts Belvoir, Bliss, Riley, and Lee each anticipate net personnel gains of more than 50 percent. For example, the Army plans to relocate about 18,000 personnel to Fort Bliss, Texas, as part of BRAC, the transformation of Army modular brigade units, and DOD’s overseas rebasing efforts. The Army is planning 54 new construction projects over the 6-year BRAC implementation period to accommodate the increase in base population at Fort Bliss. Also, some of the installations listed in table 2 may experience more growth in the next several years depending on whether the Army’s active end strength is increased by 65,000 soldiers.

Coordination Among Multiple Services and Agencies Presents Additional Challenges

BRAC 2005, unlike prior BRAC rounds, included more joint recommendations involving more than one military component, thus creating challenges in achieving unity of effort among the services and defense agencies. According to our analysis, 43 percent of the 240 OSD-required business plans involved formal coordination between at least two services or agencies. Service officials said that gaining consensus among military services and defense agencies has been challenging in the areas of personnel and facility requirements, implementation schedules, and funding responsibilities. For example, officials told us that the joint nature of the recommendation to realign Fort Bragg, North Carolina, by relocating the Army’s 7th Special Forces Group to Eglin Air Force Base, Florida, made implementation planning a challenge. Service officials told us it took time for the Army and Air Force to coordinate how to share base operations costs given that these two services have different standards for calculating these costs.
Similarly, regarding the recommendation to establish the Joint Strike Fighter initial joint training site at Eglin Air Force Base, Florida, it took time for the Navy, Marine Corps, and Air Force to agree on cost-sharing arrangements and a joint training curriculum designed to achieve savings from consolidated training on the aircraft. Likewise, other complex joint cross-service recommendations could be slowed by a similar need to coordinate and negotiate agreements. The following are some BRAC recommendations with unresolved coordination challenges. Create joint bases involving multiple defense installations: The 26 defense installations involved with creating 12 new joint bases required DOD to define the governance structure over how these joint bases should be organized, the associated chain of command authority, and the operational concepts for managing these joint bases. According to service officials, some of their most challenging issues to resolve include 1) transferring real property and budget authority to the lead service, 2) determining standard levels of base operating support and which base functions to transfer to the lead service, 3) deciding whether civilian personnel on a joint base will become employees of the lead service, 4) agreeing on common terminology and standards, and 5) funding contributions from each service. These challenges to establishing joint bases have been problematic since each service has its own concept of how installations should be managed and organized. In particular, during recent congressional testimony, the Air Force expressed views on joint basing concepts contrary to those of OSD and the other services. To overcome these challenges, OSD formed a special working group to resolve these issues and OSD officials told us they would approve the joint basing business plan when more of the planning details have been resolved. 
Realign supply, storage, and distribution management at multiple locations: There are several potential issues between the Defense Logistics Agency and the military services that may affect the planned implementation of the recommendation. While baseline agreements have been reached between the Defense Logistics Agency and the services on the transfer of supply-related personnel positions and related inventories to the Defense Logistics Agency, some important aspects of the implementation plans are incomplete and still need to be resolved. For example, performance-based agreements that will establish responsibilities, metrics to measure performance, costs, and business rules between the Defense Logistics Agency and the services have yet to be negotiated and agreed upon. Additionally, the funding and decision- making process for future maintenance, upgrades, usage, and integration of information technology systems transferring to Defense Logistics Agency has not been agreed to. Lastly, due to the way the Defense Logistics Agency plans to implement the recommendation by staging the personnel transfers over time by each military service, it plans to apply lessons learned to resolve issues as implementation proceeds. We also reviewed a separate BRAC action, which is part of this recommendation, in more detail and issued our report in October 2007. Co-locate medical command headquarters: The affected agencies have had challenges in reaching agreement on where to co-locate these medical commands. Specifically, the Air Force and OSD Health Affairs have disagreed with the business manager on associated cost and implementation time frames. As such, OSD has not yet approved the business plan for this recommendation. As a result of these coordination challenges, the planning process has lengthened beyond that which DOD officials initially expected, which could result in delayed implementation of certain recommendations. 
The need for gaining consensus about planning and implementation details among the services and defense agencies could continue throughout the BRAC implementation period. At the same time, DOD believes the review process helps to ensure that BRAC actions meet the intent of the law, are accurate, and are effectively coordinated. However, if gaining consensus among these entities continues to be a challenge or if new organizations established under BRAC continue to lack fully developed operational concepts and organizational structures, it may become increasingly difficult to implement these recommendations before the statutory 2011 deadline.

Conclusion

DOD recognizes that its BRAC recommendations and their implementation are of high public interest. As such, it is paramount that DOD communicate openly about the expected savings that could result from the implementation of BRAC actions. As long as DOD continues to assert that nearly half of its estimated $4 billion in annual recurring BRAC savings comes from military personnel reassignments, which will not free up funds for other defense priorities, DOD could create a false sense that BRAC 2005 will result in much higher dollar savings than will actually be realized to readily fund other priorities. Without explaining the difference between annual recurring savings attributable to military personnel reassignments and annual recurring savings that will make funds available for other defense priorities, DOD could lessen the credibility of the BRAC program and decrease the public’s trust in the BRAC process. Greater transparency over the source of expected BRAC savings could help to preserve public confidence in the integrity of the BRAC program.
Recommendation for Executive Action

To provide more transparency over DOD’s estimated annual recurring savings from BRAC implementation, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics, in consultation with the Office of the Under Secretary of Defense (Comptroller), to explain, in DOD’s BRAC budget submission to Congress, the difference between annual recurring savings attributable to military personnel entitlements and annual recurring savings that will readily result in funds available for other defense priorities.

Agency Comments and Our Evaluation

In written comments on a draft of this report, DOD concurred with our recommendation and agreed to include an explanation of the annual recurring savings in its BRAC budget justification material that accompanies the annual President’s budget. DOD also noted in its comments to us that military personnel reductions attributable to a BRAC recommendation are as real a source of savings as savings generated through end strength reductions. DOD also stated that while it may not reduce overall end strength, its reductions in military personnel for each recommendation at a specific location are real, and these personnel reductions allow the department to reapply these military personnel to support new capabilities and improve operational efficiencies. While we recognize these benefits from reapplying freed-up military personnel to other locations due to implementing BRAC recommendations, we question DOD’s inclusion in nearly half of its $4 billion annual recurring savings estimate of military personnel entitlements—such as salaries and housing allowances—for military personnel DOD plans to shift to other positions but does not plan to eliminate, thus requiring DOD to continue paying their salaries and benefits.
While DOD disagrees with us, we do not believe that shifting or transferring personnel to other locations produces tangible dollar savings outside the military personnel accounts that DOD can use to fund other defense priorities since these personnel will continue to receive salaries and benefits. DOD did acknowledge, however, that these savings may not be available to fund other defense priorities because they have already been spent to fund military personnel priorities. It is also worth noting that DOD commented that although its net annual recurring savings estimates have decreased from $4.2 billion to $4 billion, these savings still represent a significant benefit that will result from the implementation of BRAC recommendations. DOD’s written comments are reprinted in appendix VII. DOD also provided technical comments, which we have incorporated into this report as appropriate. We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me on (202) 512-4523 or by e-mail at leporeb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII.

Appendix I: Scope and Methodology

We reviewed the Defense Base Closure and Realignment Commission’s 182 recommendations to realign and close military bases, but mostly focused our work on the recommendations that changed the most in expected costs and savings compared to the Commission’s estimates.
Recognizing that the Department of Defense (DOD) was in the process of initial planning for base realignment and closure (BRAC) implementation, and that the associated financial data changed frequently during our review, we compared BRAC cost and savings estimates primarily using two key publicly available documents—the 2005 BRAC Commission report to the President released in September 2005 and DOD’s latest BRAC budget submission provided to Congress in February 2007. We used data from the BRAC Commission report to the President because the estimates contained in this report were the closest estimates available associated with the final and approved BRAC recommendations. We used DOD’s most recent BRAC budget submission because it was the most authoritative information publicly available for making broad comparisons of BRAC cost and savings estimates. Specifically, we compared the change in cost estimates as well as the estimates for net annual recurring savings that DOD expects to realize after BRAC implementation and noted those recommendations that have increased the most in expected costs and decreased the most in expected savings. In addition, we used the BRAC Commission’s data generated from DOD’s estimation model, known as the Cost of Base Realignment Actions, to determine changes in expected one-time costs, to include military construction cost estimates and inflation. We generally reported costs and savings in current dollars and not constant dollars except where noted. To calculate DOD’s estimate of net annual recurring savings, we used OSD’s data provided to us for estimated savings in fiscal year 2012—the year after OSD expects all recommendations to be completed—because these data more fully captured these savings and allowed us to replicate the same methodology used by the BRAC Commission in its calculation of this estimate.
We used OSD’s fiscal year 2012 data and subtracted the estimates for annual recurring costs from the estimates for annual recurring savings, the same method that both DOD and we used for prior BRAC rounds. To determine expected 20-year savings—also known as the 20-year net present value—we used the same formulas and assumptions that DOD and the BRAC Commission used to calculate these savings. Specifically, we used DOD’s BRAC fiscal year 2008 budget data for expected costs and savings to implement each recommendation for fiscal years 2006 through 2011. We also used data that the BRAC Office in the Office of the Deputy Under Secretary of Defense for Installations and Environment provided us for expected net annual recurring savings after the completion of each recommendation for fiscal years 2012 to 2025. We then converted these data to constant fiscal year 2005 dollars using DOD price indexes to distinguish real changes from changes due to inflation. We used fiscal year 2005 dollars to calculate 20-year savings because the BRAC Commission also used fiscal year 2005 dollars for this calculation. Finally, we calculated how many years it would take for expected BRAC savings to recoup the expected initial investment costs to implement the recommendations, comparing the fiscal years, or break-even points, when cumulative net savings would exceed cumulative one-time costs. We did this to be consistent with the way DOD had reported its break-even points for past BRAC rounds, a methodology we also replicated in our prior reports on BRAC implementation. To assess the reliability of DOD’s BRAC cost and savings data, we tested computer-generated data for errors, reviewed relevant documentation, and discussed data quality control procedures with officials at the Office of the Secretary of Defense (OSD) BRAC Office.
We determined that the data were sufficiently reliable for the purposes of making broad comparisons between DOD’s reported cost and savings estimates and the BRAC Commission’s reported estimates. To determine why DOD’s estimates changed compared to the BRAC Commission’s estimates, we reviewed over 200 OSD-approved business plans that outlined actions, time frames, and financial estimates for implementing each BRAC recommendation. We also obtained and analyzed information from the U.S. Army Corps of Engineers about its recent initiative to transform how it manages military construction projects and how these new initiatives are expected to reduce military construction costs during BRAC implementation. We did not validate the services’ or defense agencies’ BRAC military construction requirements because DOD’s Office of the Inspector General, the Army Audit Agency, the Naval Audit Service, and the Air Force Audit Agency were reviewing BRAC military construction projects at the time of this report. Their work in this area is expected to continue over the next several years. However, we met with staff of these audit services periodically over the course of our review. Further, we met periodically with officials at the OSD BRAC office and corresponding BRAC implementation offices in the Army, Navy, and Air Force to determine why DOD’s estimates changed compared to the BRAC Commission’s estimates. We also met with these officials to discuss their roles and responsibilities as they began BRAC implementation planning and to obtain their perspectives on any implementation challenges that they encountered. Given the unprecedented number of BRAC 2005 closures and realignments, we focused our analysis on broad issues affecting DOD’s cost and savings estimates and implementation challenges rather than on specific implementation issues of individual recommendations. 
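To make the arithmetic in this methodology concrete, the sketch below computes net annual recurring savings, deflates current dollars to a base year, and finds the break-even fiscal year as described above: the first fiscal year in which cumulative net savings exceed cumulative one-time costs. The cost and savings profiles shown are entirely hypothetical, not actual BRAC budget data.

```python
# Illustrative sketch of the report's savings arithmetic; all figures and
# deflators below are hypothetical, not actual BRAC budget data.

def net_annual_recurring_savings(recurring_savings, recurring_costs):
    """Net annual recurring savings = annual recurring savings minus
    annual recurring costs (the method used for prior BRAC rounds)."""
    return recurring_savings - recurring_costs

def to_constant_dollars(current_dollars, price_index, base_index):
    """Deflate current-year dollars to base-year (e.g., FY 2005) dollars
    using a price index."""
    return current_dollars * base_index / price_index

def break_even_year(one_time_costs_by_year, net_savings_by_year):
    """First fiscal year in which cumulative net savings exceed
    cumulative one-time implementation costs."""
    cum_cost = cum_savings = 0.0
    for year in sorted(set(one_time_costs_by_year) | set(net_savings_by_year)):
        cum_cost += one_time_costs_by_year.get(year, 0.0)
        cum_savings += net_savings_by_year.get(year, 0.0)
        if cum_savings > cum_cost:
            return year
    return None

# Hypothetical profile: heavy up-front costs through 2011, steady savings after.
costs = {2006: 4.0, 2007: 6.0, 2008: 7.0, 2009: 6.0, 2010: 5.0, 2011: 3.0}  # $B
savings = {year: 4.0 for year in range(2012, 2026)}                          # $B
print(break_even_year(costs, savings))
```

With this made-up profile, $31 billion of up-front costs recouped at $4 billion per year yields a break-even year several years after implementation ends, which is the shape of the comparison the report makes.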
To obtain the perspective of installation and command officials directly involved in BRAC implementation planning and execution, we visited 17 bases and 8 major commands affected by BRAC. We selected these bases and commands because they were among the closures or realignments that DOD projected to have significant costs or savings, or because we wanted to obtain more information about particular implementation issues. Installations we visited include: Aberdeen Proving Ground, Maryland; Brooks City-Base, Texas; Eglin Air Force Base, Florida; Fort Belvoir, Virginia; Fort Benning, Georgia; Fort Bliss, Texas; Fort Dix, New Jersey; Fort McPherson, Georgia; Fort Monmouth, New Jersey; Fort Monroe, Virginia; Fort Sam Houston, Texas; Lackland Air Force Base, Texas; McGuire Air Force Base, New Jersey; National Naval Medical Center, Maryland; Randolph Air Force Base, Texas; Rock Island Army Arsenal, Illinois; and Walter Reed Army Medical Center, District of Columbia. In addition, we met with officials from eight commands to obtain a command-level perspective about BRAC implementation and because these commands were involved in coordinating the business plans or were responsible for key decisions in implementation planning. Commands visited include the Air Force’s Air Education and Training Command; Army Communications–Electronics Life Cycle Management Command; Army Forces Command; Army Information Systems Engineering Command; Army Medical Command; Army Training and Doctrine Command; Naval Installations Command; and the U.S. Army Corps of Engineers. As we obtained information concerning implementation challenges during interviews, we assessed the reliability of that information by asking similar questions of officials at different military services at the installation and headquarters levels.
We conducted our work from November 2005, when the BRAC recommendations became effective, through October 2007 so we could analyze data in DOD’s BRAC budget submission provided to Congress in February 2007. Our work was done in accordance with generally accepted government auditing standards.

Appendix II: BRAC Recommendations with the Largest Increases in Estimated Costs

Appendix II lists specific base realignment and closure (BRAC) recommendations that have increased the most in estimated one-time costs compared to the BRAC Commission estimates reported in September 2005. Table 3 shows that the Department of Defense’s (DOD) one-time implementation cost estimates have increased by more than $50 million each for 33 recommendations compared to BRAC Commission estimates.

Appendix III: BRAC Recommendations with the Largest Decreases in Estimated Net Annual Recurring Savings

Appendix III lists specific base realignment and closure (BRAC) recommendations that have decreased the most in estimated net annual recurring savings compared to the BRAC Commission estimates. Table 4 shows that the Department of Defense’s (DOD) net annual recurring savings estimates have decreased by more than $25 million each for 13 recommendations compared to BRAC Commission estimates.

Appendix IV: BRAC Recommendations DOD Expects to Cost the Most

Appendix IV lists individual base realignment and closure (BRAC) recommendations that the Department of Defense (DOD) expects to cost the most to implement. DOD expects 24 recommendations (13 percent) to generate 65 percent of the one-time costs to implement BRAC recommendations during fiscal years 2006 through September 15, 2011, as shown in table 5.

Appendix V: BRAC Recommendations DOD Expects to Save the Most Annually

Appendix V lists individual base realignment and closure (BRAC) recommendations that the Department of Defense (DOD) expects to save the most annually after it has implemented the recommendations.
DOD expects 28 recommendations (15 percent) to generate 85 percent of the net annual recurring savings, as shown in table 6.

Appendix VI: BRAC Recommendations DOD Expects to Save the Most Over a 20-Year Period

Appendix VI lists individual base realignment and closure (BRAC) recommendations that the Department of Defense (DOD) expects to save the most over a 20-year period. DOD expects the implementation of 29 recommendations (16 percent) to generate 85 percent of the 20-year savings, as shown in table 7.

Appendix VII: Comments from the Department of Defense

Appendix VIII: GAO Contact and Staff Acknowledgments

Acknowledgments

In addition to the individual named above, Barry Holman, Director (retired); Laura Talbott, Assistant Director; Leigh Caraher; Grace Coleman; Susan Ditto; Thomas Mahalek; Julia Matta; Charles Perdue; Benjamin Thompson; and Tristan T. To made key contributions to this report.

Related GAO Products

Military Base Realignments and Closures: Impact of Terminating, Relocating, or Outsourcing the Services of the Armed Forces Institute of Pathology. GAO-08-20. Washington, D.C.: November 9, 2007.

Military Base Realignments and Closures: Transfer of Supply, Storage, and Distribution Functions from Military Services to Defense Logistics Agency. GAO-08-121R. Washington, D.C.: October 26, 2007.

Defense Infrastructure: Challenges Increase Risks for Providing Timely Infrastructure Support for Army Installations Expecting Substantial Personnel Growth. GAO-07-1007. Washington, D.C.: September 13, 2007.

Military Base Realignments and Closures: Plan Needed to Monitor Challenges for Completing More than 100 Armed Forces Reserve Centers. GAO-07-1040. Washington, D.C.: September 13, 2007.

Military Base Realignments and Closures: Observations Related to the 2005 Round. GAO-07-1203R. Washington, D.C.: September 6, 2007.
Military Base Closures: Projected Savings from Fleet Readiness Centers Are Likely Overstated and Actions Needed to Track Actual Savings and Overcome Certain Challenges. GAO-07-304. Washington, D.C.: June 29, 2007.

Military Base Closures: Management Strategy Needed to Mitigate Challenges and Improve Communication to Help Ensure Timely Implementation of Air National Guard Recommendations. GAO-07-641. Washington, D.C.: May 16, 2007.

Military Base Closures: Opportunities Exist to Improve Environmental Cleanup Cost Reporting and to Expedite Transfer of Unneeded Property. GAO-07-166. Washington, D.C.: January 30, 2007.

Military Bases: Observations on DOD’s 2005 Base Realignment and Closure Selection Process and Recommendations. GAO-05-905. Washington, D.C.: July 18, 2005.

Military Bases: Analysis of DOD’s 2005 Selection Process and Recommendations for Base Closures and Realignments. GAO-05-785. Washington, D.C.: July 1, 2005.

Military Base Closures: Observations on Prior and Current BRAC Rounds. GAO-05-614. Washington, D.C.: May 3, 2005.

Military Base Closures: Updated Status of Prior Base Realignments and Closures. GAO-05-138. Washington, D.C.: January 13, 2005.

Military Base Closures: Assessment of DOD’s 2004 Report on the Need for a Base Realignment and Closure Round. GAO-04-760. Washington, D.C.: May 17, 2004.

Military Base Closures: Observations on Preparations for the Upcoming Base Realignment and Closure Round. GAO-04-558T. Washington, D.C.: March 25, 2004.
The 2005 Base Realignment and Closure (BRAC) round is the biggest, most complex, and costliest ever. DOD viewed this round as a unique opportunity to reshape its installations, realign forces to meet its needs for the next 20 years, and achieve savings. To realize savings, DOD must first invest billions of dollars in facility construction, renovation, and other up-front expenses to implement the BRAC recommendations. However, recent increases in estimated cost have become a concern to some members of Congress. Under the Comptroller General's authority to conduct evaluations on his own initiative, GAO (1) compared the BRAC Commission's cost and savings estimates to DOD's current estimates, (2) assessed potential for change in DOD's current estimates, and (3) identified broad implementation challenges. GAO compared the BRAC Commission's estimates, which were the closest estimates available associated with final BRAC recommendations, to DOD's current estimates. GAO also visited 25 installations and major commands, and interviewed DOD officials. Since the BRAC Commission issued its cost and savings estimates in 2005, DOD plans to spend more and save less, and it will take longer than expected to recoup up-front costs. Compared to the BRAC Commission's estimates, DOD's cost estimates to implement BRAC recommendations increased from $21 billion to $31 billion (48 percent), and net annual recurring savings estimates decreased from $4.2 billion to $4 billion (5 percent). DOD's one-time cost estimates to implement over 30 of the 182 recommendations have increased more than $50 million each over the BRAC Commission's estimates, and DOD's cost estimates to complete 6 of these recommendations have increased by more than $500 million each. Moreover, GAO's analysis of DOD's current estimates shows that it will take until 2017 for DOD to recoup up-front costs to implement BRAC 2005--4 years longer than the BRAC Commission's estimates show. 
Similarly, the BRAC Commission estimated that BRAC 2005 implementation would save DOD about $36 billion over a 20-year period ending in 2025, whereas GAO’s analysis shows that BRAC implementation is now expected to save about 58 percent less, or about $15 billion. DOD’s estimates to implement BRAC recommendations are likely to change further due to uncertainties surrounding implementation details and potential increases in military construction and environmental cleanup costs. Moreover, DOD may have overestimated annual recurring savings by about 46 percent, or $1.85 billion. DOD’s estimated annual recurring savings of about $4 billion includes $2.17 billion in eliminated overhead expenses, which will free up funds that DOD can then use for other priorities, but it also includes $1.85 billion in military personnel entitlements, such as salaries, for personnel DOD plans to transfer to other locations. While DOD disagrees, GAO does not believe transferring personnel produces tangible dollar savings since these personnel will continue to receive salaries and benefits. Because DOD’s BRAC budget does not explain the difference between savings attributable to military personnel entitlements and savings that will make funds available for other uses, DOD is generating a false sense that all of its reported savings could be used to fund other defense priorities. DOD has made progress in planning for BRAC 2005 implementation, but several complex challenges to the implementation of those plans increase the risk that DOD might not meet the statutory September 2011 deadline. DOD faces a number of challenges to synchronize the realignment of over 123,000 personnel with the completion of over $21 billion in new construction or renovation projects by 2011.
For example, the time frames for completing many BRAC recommendations are so closely sequenced and scheduled to be completed in 2011 that any significant changes in personnel movement schedules or construction delays could jeopardize DOD's ability to meet the statutory 2011 deadline. Additionally, BRAC 2005, unlike prior BRAC rounds, included more joint recommendations involving more than one military component, thus creating challenges in achieving unity of effort among the services and defense agencies.
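The savings decomposition questioned in this summary can be checked with simple arithmetic. The sketch below reproduces the reported figures ($2.17 billion in eliminated overhead plus $1.85 billion in military personnel entitlements) to show how the roughly 46 percent share was derived.

```python
# Arithmetic behind the savings decomposition described above
# (figures from the report, in billions of dollars).
overhead_savings = 2.17        # eliminated overhead expenses (funds freed for other uses)
personnel_entitlements = 1.85  # salaries/benefits that follow transferred personnel

total_claimed = overhead_savings + personnel_entitlements
share_not_freed = personnel_entitlements / total_claimed
print(f"total claimed savings: ${total_claimed:.2f}B")     # about $4 billion
print(f"share tied to personnel: {share_not_freed:.0%}")   # about 46 percent
```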
More Money and Time Will Be Needed to Complete JSF Development, While DOD Plans to Accelerate Procurement

JSF development will cost more and take longer to complete than reported to the Congress in April 2008, primarily because of contract cost overruns and the extended time needed to complete flight testing. DOD is also significantly increasing annual procurement rates and plans to buy some aircraft sooner than reported last year. The new plan will require increased annual procurement funding over the next 6 years, but officials did not assess its net effect on total program costs through completion of JSF acquisition. Total development costs are projected to increase by between $2.4 billion and $7.4 billion, and the schedule for completing system development is expected to be extended by 1 to 3 years, according to estimates made in late 2008—one by the JSF Program Office and one by a joint team of Office of the Secretary of Defense (OSD), Air Force, and Navy officials. Cost overruns on both the aircraft and engine contracts, delays in manufacturing test aircraft, and the need for a longer, more robust flight test program were the primary cost drivers. The joint team’s estimate is higher than the program office’s because it included costs for the alternate engine program directed by the Congress and used more conservative assumptions based on current and legacy aircraft experiences. Table 1 compares these two estimates with the official program of record, which was reported to the Congress in April 2008. Although annual budgets and procurement quantities for fiscal year 2011 and beyond are still being reviewed by defense officials and are not available to us, we expect the JSF program to continue its rapid increase in annual procurement quantities and to buy some aircraft sooner than reported to the Congress in April 2008. At that time, DOD planned to ramp up procurement to reach a maximum of 130 aircraft per year by fiscal year 2015 (U.S.
quantities only) and sustain this rate for 8 years. Procurement budget requirements for that plan were projected to be over $12 billion per year during peak production. The new fiscal year 2010 procurement budget requests funding of $6.8 billion for 30 JSF aircraft, a unit cost of $227 million. This budget is substantially lower than both the program office’s and the joint team’s estimates for 2010, in terms of unit costs and overall procurement funding. Last month, the Secretary of Defense announced plans to procure 513 JSF aircraft during the 6-year period, fiscal years 2010 through 2015. This total includes procuring 28 more aircraft during this period than previously planned. This plan does not increase the total aircraft to be procured through completion of the JSF program but would buy these 28 aircraft in earlier years than previously scheduled. By accelerating procurement, DOD hopes to recapitalize tactical air forces sooner and mitigate projected future fighter shortfalls. The additional aircraft represent a scaling back of the proposed JSF procurement plans that we reported on in March 2009. At that time, DOD was proposing to accelerate procurement by 169 aircraft during these same years. That proposal would have required from $22 billion to $33 billion more in total procurement funding over that period, according to the respective estimates of the program office and joint estimating team. (The joint team’s estimate included $420 million for the alternate engine program; DOD’s 2010 budget request did not include this funding.) We have not yet been provided budgets and annual procurement quantities for fiscal years 2011 and beyond under the Secretary’s revised plan that would establish the increased funding requirements for the new accelerated plan compared to annual procurement funding requirements under the April 2008 program of record. Appendixes 1 and 2 provide a historical track of cost and schedule estimates.
DOD’s Proposal to Cancel the Alternate Engine Program May Bypass Long-term Merits

DOD and the Congress have had a continuing debate for several years on the merits of an alternate engine program to provide a second source and competition for engine procurement and life cycle support. The alternate engine program was part of the original JSF acquisition strategy. The department first proposed canceling the alternate engine program in the 2007 budget and has not asked for funding in the budgets since then. The administration does not believe an alternate engine is needed as a hedge against the failure of the main engine program and believes savings from competition would be small. The Congress has added funding each year since 2007 to sustain the alternate engine development, including $465 million for fiscal year 2009. To date, over $8 billion has been spent on engine development—over $6 billion with the main engine contractor and over $2 billion with the second source contractor. The way forward for the JSF engine acquisition strategy entails one of many critical choices facing DOD today and underscores the importance of decisions facing the program. As we noted in past testimonies before this committee, the acquisition strategy for the JSF engine must weigh expected costs against potential rewards. In each of the past 2 years we have testified before this committee on the merits of a competitive engine program for the Joint Strike Fighter. While we did not update our analysis, we believe it is still relevant and that the same conclusions can be drawn. We reported in 2008 that to continue the JSF alternate engine program, an additional investment of about $3.5 billion to $4.5 billion in development and production-related costs may be required to ensure competition. Our earlier cost analysis suggests that savings of 9 to 11 percent would recoup that investment.
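The break-even logic behind the 9 to 11 percent figure can be illustrated with back-of-the-envelope arithmetic. The engine cost base used below (about $40 billion) is an assumption chosen only to show how a given savings rate recoups a $3.5 billion to $4.5 billion investment; it is not a figure from our analysis.

```python
# Back-of-the-envelope check of the break-even figures quoted above.
# The remaining engine program cost base is a hypothetical assumption,
# not a number from the report.
engine_cost_base = 40.0  # $B, hypothetical remaining engine buy plus support

for investment in (3.5, 4.5):
    # Savings rate at which competition savings equal the added investment.
    required_rate = investment / engine_cost_base
    print(f"${investment}B investment -> break-even savings rate {required_rate:.1%}")
```

Against a roughly $40 billion base, the required rates fall in the 9 to 11 percent neighborhood, which is the shape of the trade-off described in the testimony.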
As we reported last year, a competitive strategy has the potential for savings equal to or exceeding that amount across the life cycle of the engine. Prior experience indicates that it is reasonable to assume that competition on the JSF engine program could yield savings of at least that much. As a result, we remain confident that competitive pressures could yield enough savings to offset the costs of competition over the JSF program’s life. However, we recognize that this ultimately will depend on the final approach for the competition, the number of aircraft actually purchased, and the ratio of engines awarded to each contractor. Results from past competitions provide evidence of potential financial and nonfinancial savings that can be derived from engine programs. One relevant case study to consider is the “Great Engine War” of the 1980s— the competition between Pratt & Whitney and General Electric to supply military engines for the F-16 and other fighter aircraft programs. At that time, all engines for the F-14 and F-15 aircraft were being produced on a sole-source basis by Pratt & Whitney, which was criticized for increased procurement and maintenance costs, along with a general lack of responsiveness to government concerns about those programs. For example, safety issues with the single-engine F-16 aircraft were seen as having greater consequences than safety issues with the twin-engine F-14 or F-15 aircraft. To address concerns, the Air Force began to fund the development and testing of an alternate engine to be produced by General Electric; the Air Force also supported the advent of an improved derivative of the Pratt & Whitney engine. Beginning in 1983, the Air Force initiated a competition that Air Force documentation suggests resulted in significant cost savings in the program. 
In the first 4 years of the competition, when actual costs are compared to the program’s baseline estimate, results included (1) nearly 30 percent cumulative savings for acquisition costs, (2) roughly 16 percent cumulative savings for operations and support costs, and (3) total savings of about 21 percent in overall life cycle costs. The Great Engine War was able to generate significant benefits because competition incentivized contractors to improve designs and reduce costs during production and sustainment. Competitive pressure continues today as the F-15 and F-16 aircraft are still being sold internationally. While other defense competitions resulted in some level of benefits, especially with regard to contractor responsiveness, they did not see the same levels of success absent continued competitive pressures. Similar competition for the JSF engines may also provide benefits that do not result in immediate financial savings, but could result in reduced costs or other positive outcomes over time. Our prior work, along with studies by DOD and others, indicates that there are a number of nonfinancial benefits that may result from competition, including better engine performance, increased reliability, and improved contractor responsiveness. In addition, the long-term effects of the JSF engine program on the global industrial base go far beyond the two competing contractors. Studies performed by DOD and others reflect widespread concurrence on these benefits. In fact, in 1998 and 2002, DOD program management advisory groups assessed the JSF alternate engine program and found the potential for significant benefits in these and other areas. Table 2 summarizes the benefits determined by those groups.
While the benefits highlighted may be more difficult to quantify, they are no less important, and ultimately were strongly considered in recommending continuation of the alternate engine program. These studies concluded that the program would maintain the industrial base for fighter engine technology, enhance readiness, instill contractor incentives for better performance, ensure an operational alternative if the current engine developed problems, and enhance international participation. Another potential benefit of having an alternate engine program, and one also supported by the program advisory groups, is to reduce the risk that a single point systemic failure in the engine design could substantially affect the fighter aircraft fleet. This point is underscored by recent failures of the Pratt & Whitney test program. In August 2007, an engine running at a test facility experienced failures in the low pressure turbine blade and bearing, which resulted in a suspension of all engine test activity. In February 2008, during follow-on testing to prove the root cause of these failures, a blade failure occurred in another engine, resulting in delays to both the Air Force and Marine Corps variant flight test programs.

Continued Manufacturing Inefficiencies Will Make it Difficult for the Program to Meet Its Production Schedule

Manufacturing of JSF development test aircraft is taking more time, money, and effort than planned. Officials believe that they can work through these problems and deliver the 9 remaining test aircraft by early 2010; however, by that time, DOD may have already ordered as many as 58 production aircraft. Manufacturing inefficiencies and parts shortages continue to delay the completion and delivery of development test aircraft needed for flight testing. The contractor has not yet demonstrated mature manufacturing processes, or an ability to produce aircraft consistently at currently planned annual rates.
It has taken steps to improve manufacturing processes, the supplier base, and schedule management; however, given the manufacturing challenges, we believe that DOD’s plan to accelerate procurement in the near term adds considerable risk and will be difficult to achieve. The prime contractor has restructured the JSF manufacturing schedule several times, each time lengthening the schedule to deliver aircraft to the test program. Delays and manufacturing inefficiencies are prime causes of contract cost overruns. The contractor has delivered four development flight test aircraft and projects delivering the remaining nine aircraft in 2009 and early 2010. Problems and delays are largely the residual effects from the late release of engineering drawings, design changes, delays in establishing a supplier base, and parts shortages, which continue to cause delays and force inefficient production line work-arounds where unfinished work is completed out of station. Data provided by the Defense Contract Management Agency and the JSF Program Office show continuing critical parts shortages, out-of-station work, and quality issues. The total projected labor hours to manufacture test aircraft increased by 40 percent just in the past year, as illustrated in figure 1. Performance data for two major cost areas—wing assembly and mate and delivery—indicate even more substantial growth. Figure 2 compares the increased budgeted hours in the 2008 schedule to 2007 estimates. The 2007 schedule assumed a steeper drop in labor hours as more units are produced and manufacturing and worker knowledge increases. The new schedule, based upon actual performance, projects a less steep decline in labor hours, indicating slower learning and lesser gains in worker efficiency. As of June 2008, the planned hours for these two major stations increased by about 90 percent over the June 2007 schedule, which itself had shown an increase from the 2006 schedule. 
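The labor-hour decline described above follows the familiar learning-curve relationship. The sketch below uses Wright's-law curves with hypothetical first-unit hours and slopes (not JSF contract data) to show how a "slower" 90 percent curve flattens the decline relative to an 80 percent curve, which is the kind of shift the revised schedule reflects.

```python
import math

# Hedged illustration of the learning-curve effect described above.
# First-unit hours and curve percentages are hypothetical, not JSF data.

def unit_hours(first_unit_hours, unit_number, learning_rate):
    """Wright's-law labor hours for the nth unit: hours fall to
    `learning_rate` of their prior level each time cumulative output doubles."""
    b = math.log2(learning_rate)  # negative exponent for learning_rate < 1
    return first_unit_hours * unit_number ** b

first = 100_000  # hypothetical hours for unit 1
for n in (1, 2, 4, 8):
    fast = unit_hours(first, n, 0.80)  # 80% curve: steeper decline, faster learning
    slow = unit_hours(first, n, 0.90)  # 90% curve: flatter decline, slower learning
    print(f"unit {n}: 80% curve {fast:,.0f} h, 90% curve {slow:,.0f} h")
```

By unit 8 the 80 percent curve has roughly halved first-unit hours while the 90 percent curve has shed only about a quarter, so a schedule rebuilt on the slower curve carries substantially more total labor hours.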
The overlap in the work schedule between manufacturing the wing and mating (connecting) it to the aircraft fuselage has been a major concern for several years because it causes inefficient out-of-station work. The contractor continues to address this concern, but the new schedule indicates that this problem will continue at least through 2009. The prime contractor has taken significant steps to improve schedule management, manufacturing efficiency, and its supplier base. Our review found that the prime contractor has good schedule management tools and integrated processes in place. The one area not meeting commercial best practices was the absence of schedule risk analysis that would provide better insight into areas of risk and uncertainty in the schedule. DOD agreed with our March 2009 recommendation and will direct the contractor to perform periodic schedule risk analyses. The prime contractor is also implementing changes designed to address the manufacturing inefficiencies and parts shortages discussed earlier. These include (1) increasing oversight of key subcontractors that are having problems, (2) securing long-term raw material purchase price agreements for both the prime and key subcontractors, and (3) implementing better manufacturing line processes. On this latter point, according to program officials, the prime contractor has taken specific steps to improve wing manufacturing performance—noted above as one of the most troublesome workstations. Defense Contract Management Agency officials noted that the contractor produced the second short takeoff and landing aircraft variant with less work performed out of station than for the first such aircraft. Also, program office and contractor officials report some alleviation of parts shortages and improvements in quality, but also believe that the effects from previous design delays, parts shortages, and labor inefficiencies will continue to persist over the near term.
Use of Cost Contracts for Production Aircraft Elevates the Government’s Financial Risk

DOD is procuring a substantial number of JSF aircraft using cost reimbursement contracts. Cost reimbursement contracts place most of the program’s financial risk on the buyer—DOD in this case—who is liable to pay more than budgeted should labor, material, or other incurred costs be more than expected when the contract was signed. Subsequent cost increases, such as the growth in manufacturing labor hours discussed above, are mostly passed on to the government. Thus far, DOD has procured the first three production lots using cost reimbursement contracts—a total of 28 aircraft and an estimated $6.7 billion to date. JSF officials expect to also procure the fourth lot using cost reimbursement and to transition to fixed-price contracts when appropriate, possibly between lots 5 and 7 (fiscal years 2011 to 2013). It is unclear exactly how and when this will happen, but the expectation is to transition to fixed pricing once the air vehicle has a mature design, has been demonstrated in flight tests, and is producible at established cost targets. Under the April 2008 program of record, DOD was planning to procure as many as 275 aircraft costing an estimated $41.6 billion through fiscal year 2013 using cost reimbursement contracts. The plan to accelerate procurement of 28 aircraft would likely add to the quantities purchased on such contracts. Cost reimbursement contracts provide for payment of allowable incurred costs, to the extent prescribed in the contract. According to the Federal Acquisition Regulation, cost reimbursement contracts are suitable for use only when uncertainties involved in contract performance do not permit costs to be estimated with sufficient accuracy to use any type of fixed-price contract.
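The allocation of financial risk between the two contract types can be shown schematically. The sketch below is a deliberately simplified model with hypothetical prices; real cost reimbursement contracts include negotiated fee structures (fixed, incentive, or award fees) that this omits.

```python
# Simplified sketch of who absorbs a cost overrun under the two contract
# types contrasted above; prices and the overrun are hypothetical, and real
# cost reimbursement contracts also pay a negotiated fee (omitted here).

def government_cost(contract_type, agreed_price, incurred_cost):
    """What the buyer pays: cost reimbursement passes allowable incurred
    costs to the government; firm fixed price holds at the agreed price."""
    if contract_type == "cost_reimbursement":
        return incurred_cost
    if contract_type == "fixed_price":
        return agreed_price
    raise ValueError(f"unknown contract type: {contract_type}")

agreed, incurred = 200.0, 240.0  # $M per aircraft, hypothetical 20 percent overrun
print(government_cost("cost_reimbursement", agreed, incurred))  # buyer absorbs the overrun
print(government_cost("fixed_price", agreed, incurred))         # contractor absorbs it
```

Under the simplified model, the same 20 percent overrun costs the government $40 million per aircraft on a cost contract and nothing on a firm-fixed-price contract, which is the risk shift the transition plan is meant to achieve.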
Cost reimbursement contracts for weapon production are considered appropriate when the program lacks sufficient knowledge about system design, manufacturing processes, and testing results to establish firm prices and delivery dates. In contrast, a fixed-price contract provides for a pre-established price, places more of the risk and responsibility for costs on the contractor, and provides more incentive for efficient and economical performance. Procuring large numbers of production aircraft using cost reimbursement contracts reflects that the JSF design, production processes, and costs for labor and material are not yet sufficiently mature and that pricing information is not exact enough for the contractor to assume the risk under a fixed-price contract. We see it as a consequence of the substantial concurrency of development, test, and production built into the JSF schedule. Significant overlap of these activities means that DOD is procuring considerable quantities of operational aircraft while development test aircraft are still on the manufacturing line and while much testing remains to prove aircraft performance and suitability. Establishing a clear and accountable path to ensure that the contractor assumes more of the risk is prudent. Accordingly, we recommended in March 2009 that DOD report to the congressional defense committees by October 2009 explaining costs and risks associated with cost reimbursement contracts for production, the strategy for managing and mitigating risks, and plans for transitioning to fixed-price contracts for production. DOD concurred. The former Assistant Secretary of the Air Force for Acquisition agreed with our concerns about significant concurrency and the need to transition to a fixed-price environment. In an April 2009 memo, she discussed her views on the concurrency of production and development testing as driving risks to the development program. 
She recommended that the JSF joint program office closely examine manufacturing processes and work to convert cost reimbursement contracts to fixed-price as soon as practical. JSF’s Test Plan Is Improved but Flight Test Program Is Still in Its Infancy After reducing test resources and activities to save money in 2007, the JSF Program Office developed a new test plan in the spring of 2008 that extended the development period by 1 year, better aligned test resources and availability dates, and lessened the overlap between development and operational testing. While improved, the new plan is still aggressive and has little room for error discovery, rework, and recovery from downtime should test assets be grounded or otherwise unavailable. The sheer complexity of the JSF program—with 22.9 million lines of software code, three variants, and multi-mission development—suggests that the aircraft will encounter many unforeseen problems during flight testing, requiring additional time in the schedule for rework. Given the complexity of the program, the joint estimating team noted that an additional 2 years beyond the recent 1-year extension may be needed to complete development. The test plan relies heavily on a series of advanced and robust simulation labs and a flying test bed to verify aircraft and subsystem performance. Figure 3 shows that 83 percent of the aircraft’s capabilities are to be verified through labs, the flying test bed, and subject-matter analysis, while only 17 percent of test points are to be verified through flight testing. Program officials argue that their heavy investment in simulation labs will allow early risk reduction, thereby reducing the need for additional flight testing, controlling costs, and meeting the key milestones of the program’s aggressive test plan. 
However, while the JSF program’s simulation labs appear more prolific, integrated, and capable than the labs used in past aircraft programs, their ability to substitute for flight testing has not yet been demonstrated. Despite an improved test plan, JSF flight testing is still in its infancy. Only about 2 percent of its development flight testing had been completed as of November 2008. Figure 4 shows the expected ramp up in flight testing, with most effort occurring in fiscal years 2010 through 2012. Past programs have shown that many problems are not discovered until flight testing. As such, the program is likely to experience considerable cost growth in the future as it steps up its flight testing, discovers new problems, and makes the necessary technical and design corrections. While the program has been able to complete key ground tests and demonstrate basic aircraft flying capabilities, it continues to experience flight testing delays. Most notably, flight testing of full short takeoff and vertical landing capabilities has been further delayed. Flight testing of the carrier variant has also been delayed. Program officials do not believe either of the delays will affect planned initial operational capability dates. In 2009 and early fiscal year 2010, the program plans to begin flight testing 6 development test aircraft, including the first 2 aircraft dedicated to mission system testing. A fully integrated, mission-capable aircraft is not expected to enter flight testing until 2012. Despite the nascency of the flight test program and subsequent flight testing delays, DOD is investing heavily in procuring JSF aircraft. Procuring aircraft before testing successfully demonstrates that the design is mature and that the weapon system will work as intended increases the likelihood of expensive design changes becoming necessary when production is underway. Also, systems already built and fielded may later require substantial modifications, further adding to costs. 
The uncertain environment as testing progresses is one reason why the prime contractor and DOD are using cost-reimbursable contracts until rather late in procurement. Table 3 depicts planned investments—in both dollars and aircraft—prior to the completion of development flight testing. DOD may procure 273 aircraft at a total estimated cost of $41.9 billion before development flight testing is completed. Table 3 also shows the expected contract types. Concluding Remarks The JSF program is entering its most challenging phase, a crossroads of sorts. Looking forward, the contractor plans to complete work expeditiously to deliver the test assets, significantly step up flight testing, begin verifying mission system capabilities, mature manufacturing processes, and quickly ramp up production of operational aircraft. Challenges are many—continuing cost and schedule pressures; complex, extensive, and unproven software requirements; and a nascent, very aggressive test program with diminished flight test assets. While the program must move forward, we continue to believe that the program’s concurrent development and production of the aircraft is extremely risky. By committing to procure large quantities of the aircraft before testing is complete and manufacturing processes are mature, DOD has significantly increased the risk of further compromising its return on investment—as well as delaying the delivery of critical capabilities to the warfighter. Furthermore, the program’s plan to procure large quantities of the aircraft using cost-reimbursement contracts—where uncertainties in contract performance do not permit costs to be estimated with sufficient accuracy to use a fixed-price contract—places additional financial risk on the government. Until the contractor demonstrates that it can produce aircraft in a timely and efficient manner, DOD cannot fully understand future funding requirements. 
DOD needs to ensure that the prime contractor can meet development and production expectations. At a minimum, the contractor needs to develop a detailed plan demonstrating how it can successfully meet program development and production goals in the near future within cost and schedule parameters. As such, in our March 2009 report, we recommended that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics to report to congressional defense committees explaining the risks associated with using cost-reimbursable contracts as compared to fixed-price contracts for JSF’s production quantities, the program’s strategy for managing those risks, and plans for transitioning to fixed-price contracts for production. DOD agreed with our recommendation. With an improved contracting framework and a more reasoned look to the future, the JSF program can more effectively meet DOD and warfighter needs in a constrained budget environment. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions you may have at this time. For further information about this statement, please contact Michael J. Sullivan at (202) 512-4841 or sullivanm@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement are Ridge Bowman, Bruce Fairbairn, Matt Lea, and Charlie Shivers. Appendix I: Changes in JSF Cost, Quantity, and Delivery Estimates [table omitted: cost estimates (then-year dollars in billions) and unit cost estimates (then-year dollars in millions) at development start and the 2004 replan]. Military construction costs have not been fully established, and the reporting basis changed over time in these DOD reports. Appendix II: F-35 Joint Strike Fighter Schedule [schedule figure omitted; covers the short takeoff and vertical landing variant. * Aircraft flown in conventional mode.] 
The first test to demonstrate full short takeoff and vertical landing capabilities is scheduled for September 2009. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The F-35 Joint Strike Fighter (JSF) program is the Department of Defense's (DOD's) most costly acquisition, seeking to simultaneously develop, produce, and field three aircraft variants for the Air Force, Navy, Marine Corps, and eight international partners. The total expected U.S. investment is now more than $300 billion to develop and procure 2,456 aircraft over the next 25 years. GAO's most recent report in March of this year discussed increased development costs and schedule estimates, plans to accelerate procurement, manufacturing performance and delays, and development test strategy. A recurring theme in GAO's work has been concern about what GAO believes is undue concurrency of development, test, and production activities and the heightened risks it poses to achieving good cost, schedule, and performance outcomes. This testimony discusses: (1) current JSF cost and schedule estimates; (2) engine development; (3) manufacturing performance; (4) contracting issues for procurement of aircraft; and (5) test plans. This statement draws from GAO's March 2009 report, updated to the extent possible with new budget data and a recently revised procurement profile directed by the Secretary of Defense. JSF development will cost more and take longer to complete than reported to the Congress in April 2008, primarily because of contract cost overruns and extended time needed to complete flight testing. DOD is also significantly increasing annual procurement rates and plans to buy some aircraft sooner than reported last year. Total development costs are projected to increase between $2.4 billion and $7.4 billion, and the schedule for completing system development extended by 1 to 3 years. The department has not asked for funding for the alternate engine program in the budgets since 2007, arguing that an alternate engine is not needed as a hedge against the failure of the main engine program and that the savings from competition would be small. 
Nonetheless, the Congress has added funding each year since then to sustain its development. Our prior analysis indicates that competitive pressures could yield enough savings to offset the costs of competition over the JSF program's life. To date, the two contractors have spent over $8 billion on engine development--over $6 billion with the main engine contractor and over $2 billion with the second source contractor. Manufacturing of development test aircraft is taking more time, money, and effort than planned, but officials believe that they can still deliver the 9 remaining test aircraft by early 2010. The contractor has not yet demonstrated mature manufacturing processes, or an ability to produce at currently planned rates. It has taken steps to improve manufacturing; however, given the manufacturing challenges, DOD's plan to increase procurement in the near term adds considerable risk and will be difficult to achieve. DOD is procuring a substantial number of JSF aircraft using cost reimbursement contracts. Cost reimbursement contracts place most of the risk on the buyer--DOD in this case--who is liable to pay more than budgeted should labor, material, or other incurred costs be more than expected when the contract was signed. JSF flight testing is still in its infancy and continues to experience flight testing delays. Nonetheless, DOD is making substantial investments before flight testing proves that the JSF will perform as expected. DOD may procure 273 aircraft costing an estimated $42 billion before completing flight testing.
Background When EESA was signed on October 3, 2008, the U.S. financial system faced a severe crisis that has rippled throughout the global economy, moving from the U.S. housing market to an array of financial assets and interbank lending. The crisis restricted access to credit and made the financing on which businesses and individuals depend increasingly difficult to obtain. Further tightening of credit exacerbated a global economic slowdown. During the crisis, Congress, the President, federal regulators, and others undertook a number of steps to facilitate financial intermediation by banks and the securities markets. In addition to Treasury’s efforts, policy interventions were led by the Board of Governors of the Federal Reserve System (Federal Reserve) and the Federal Deposit Insurance Corporation. While the banking crisis in the United States no longer presents the same level of systemic concerns as it did in 2008, the economy remains vulnerable, with unemployment higher than in the recent past. Globally, concerns about the stability of European banks and countries, especially Greece, escalated in 2011—demonstrating that problems remain in the global economy and financial markets. TARP Programs and Implementation The passage of EESA resulted in a variety of programs supported with TARP funding. (See table 1.) Some of these programs have begun to unwind; an accompanying figure provides an overview of key dates for TARP implementation and the unwinding of some programs. While Many TARP Programs Continue to Wind Down, Others Remain Active TARP programs continue to wind down, and some programs have ended. Treasury has stated its goals for the exit process for many programs, but as we and others have reported, these goals at times conflict. 
Treasury has stated that when deciding to sell assets and exit TARP programs, it will strive to: protect taxpayer investments and maximize overall investment returns; promote the stability of financial markets and the economy; bolster markets’ confidence to increase private capital investment; and dispose of the investments as soon as practicable. For example, we previously reported that deciding to unwind some of its assistance to GM by participating in an initial public offering (IPO) presented Treasury with a conflict between maximizing taxpayer returns and exiting as soon as practicable. Holding its shares longer could have meant realizing greater gains for the taxpayer, but only if the stock appreciated in value. By participating in GM’s November 2010 IPO, Treasury tried to fulfill both goals, selling almost half of its shares at an early opportunity. Treasury officials stated that they strove to balance these competing goals but have no strict formula for doing so. Rather, they ultimately relied on the best available information in deciding when to start exiting this program. Moreover, Treasury’s ability to exercise control over the timing of its exit from TARP programs is limited in some cases. For example, Treasury will likely decide when to exit AIG based on market conditions, but Treasury has less control over its exit from PPIP because the program’s exit depends on the timing of each public-private investment fund (PPIF) selling its investments. Treasury continues to face this tension in its goals with a number of TARP programs as they continue to unwind. Throughout this section we provide the status of each TARP program that remains open or still holds assets that need to be managed, including when the program will end (or stop acquiring new assets and no longer receive funding) and when Treasury will exit the program (or sell assets it acquired while the program was open). 
We also provide information on outstanding assets, as applicable—both the book value and the market value—as of September 30, 2011. Also included are the lifetime estimated costs for each program calculated by Treasury. Later in this report we discuss the reasons for recent changes in several of Treasury’s cost estimates between September 2010 and September 2011. Many Programs Continue to Wind Down, and Treasury Faces Trade-offs in Determining When to Exit Financial Strength Will Determine When Remaining CPP Institutions Exit Program While repayments and income from CPP investments have exceeded the original outlays, financial strength will determine when remaining institutions exit the program. As we have reported, Treasury disbursed $204.9 billion to 707 financial institutions nationwide from October 2008 through December 2009. As of September 30, 2011, Treasury had received $208.1 billion in repayment and income from its CPP investments, exceeding the amount originally disbursed by $3.2 billion (see fig. 2). The repayment and income amount included $182.4 billion in repayments of original CPP investments, as well as $11.2 billion in dividends, interest, and fees; $7.6 billion in warrant income; and $6.9 billion in net proceeds in excess of costs. After accounting for writeoffs and realized losses on sales totaling $2.6 billion, CPP had $17.3 billion in outstanding investments as of September 30, 2011. Treasury estimates lifetime income of $13 billion for CPP as of September 30, 2011. See Department of the Treasury, Troubled Asset Relief Program (TARP) Monthly 105(a) Report-September 2011 (Washington, D.C.: Oct. 11, 2011). Another 52 percent, or 165 institutions, exited CPP by exchanging their securities under other federal programs: 28 through CDCI and 137 through the Small Business Lending Fund (see fig. 
3). Of the remaining 8 percent of CPP recipients that exited the program, 13 went into bankruptcy or receivership, 11 had their securities sold by Treasury, and 2 merged with another institution. Also, according to data in a Treasury report, as of September 30, 2011, 390 of the original 707 institutions remained in CPP but accounted for only 8.4 percent of the original investments. Much of the $17.3 billion in outstanding investments was concentrated in a relatively small number of institutions. The largest single outstanding investment was $3.5 billion, and the top four outstanding investments totaled $6.8 billion. The top 25 remaining CPP investments accounted for $11.3 billion. The cumulative number of financial institutions that had missed at least one scheduled dividend or interest payment by the end of the month in which the payments were due rose from 164 as of November 30, 2010, to 226 as of November 30, 2011. Institutions can elect whether to pay dividends and may choose not to pay for a variety of reasons, including decisions that they or their federal and state regulators make to conserve cash and maintain (or increase) capital levels. Institutions are required to pay dividends only if they declare dividends, although unpaid cumulative dividends generally accrue and the institution must pay them before making payments to other types of shareholders, such as holders of common stock. These figures differ from the number of dividend or interest payments outstanding because some institutions made their payments after the end of the reporting month. CPP dividend and interest payments are due on February 15, May 15, August 15, and November 15 of each year, or the first business day subsequent to those dates. The reporting period ends on the last day of the calendar month in which the dividend or interest payment is due. The number of institutions missing payments has remained high relative to the number of institutions still in the program (see fig. 
4) despite reduced program participation, and the proportion of those missing scheduled payments has risen accordingly. The number of institutions missing payments stabilized in recent quarters; however, most of these institutions had repeatedly missed payments. In particular, 119 of the 158 institutions that missed payments in November 2011 had also missed payments in each of the previous three quarters. Moreover, these 158 institutions had missed an average of 4.8 additional previous payments, and only 7 had never missed a previous payment. On July 19, 2011, Treasury announced that it had, for the first time, exercised its right to elect members to the boards of directors of two of the remaining CPP institutions. In considering whether to nominate directors, Treasury said that it would proceed in two steps. First, after an institution misses five dividend or interest payments, Treasury sends OFS staff members to observe board meetings. Second, once an institution has missed six dividend payments, Treasury decides whether to nominate a board member based on a variety of considerations, including what it learns from the board meetings, the institution’s financial condition, the function of its board of directors, and the size of its investment. The financial strength of the participating institutions will largely determine the speed at which institutions repay their investments and exit and the amount of total lifetime income. Institutions will have to demonstrate that they are financially strong enough to repay the CPP investments in order to receive regulatory approval to exit the program. The institutions’ financial strength will also be a primary factor in their decisions to make dividend payments, and institutions that continue to miss payments may also have difficulty exiting CPP. 
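The CPP dollar amounts reported earlier in this section can be cross-checked with simple arithmetic. A minimal sketch, with variable names of our own choosing (all amounts in billions, taken directly from the text):

```python
# Consistency check of the CPP figures: the stated components of repayment
# and income should sum to the $208.1 billion total, which exceeds the
# $204.9 billion originally disbursed by $3.2 billion.
repayments = 182.4               # repayments of original CPP investments
dividends_interest_fees = 11.2   # dividends, interest, and fees
warrant_income = 7.6
net_proceeds = 6.9               # net proceeds in excess of costs

total_received = repayments + dividends_interest_fees + warrant_income + net_proceeds
disbursed = 204.9                # disbursed to 707 institutions, Oct. 2008-Dec. 2009

excess = total_received - disbursed   # amount received above original outlays
```

The components do sum to $208.1 billion, and the excess over outlays is $3.2 billion, matching the text.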
Moreover, dividend rates will increase for remaining institutions beginning in late 2013, up to 9 percent, which may prompt institutions to repay their investments as that dividend increase approaches. If broader interest rates are low, especially approaching the dividend reset, banks could have further incentive to redeem their preferred shares. Treasury will need to balance the goals of protecting taxpayer-supported investments while expeditiously unwinding the program. Treasury officials told us that Treasury’s practice was generally to hold, rather than sell, its CPP investments. As a result, Treasury’s ability to exit the program largely depends on the ability of institutions to repay their investments. However, Treasury officials noted that if warranted, Treasury could change its practice in the future and sell its investments. In an upcoming report, we plan to describe the financial condition of the remaining CPP institutions and compare them with institutions that already exited and those that never participated. Treasury has disbursed $570 million to its 84 CDCI participants, 28 of which had previously participated in CPP (see fig. 5). As we previously reported, CDCI is structured similarly to CPP in that it provides capital to financial institutions by purchasing equity and subordinated debt from them. No additional funds are available through the program, as CDCI’s funding authority expired in September 2010. While no CDFIs have repaid Treasury’s investment as of September 30, 2011, Treasury has thus far received $10 million in dividend payments from CDCI participants. Lastly, Treasury expects CDCI will cost approximately $182 million over its lifetime, almost a third of the $570 million obligated to the program. Officials stated that CDCI has a cost, while CPP is estimated to result in lifetime income, in part because CDCI provides a lower dividend rate that increases the financing costs. 
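The "almost a third" characterization of CDCI's expected lifetime cost above can be verified directly; a minimal sketch with our own variable names (dollars in millions):

```python
# Checking the CDCI cost figures cited in the text.
cdci_disbursed = 570       # disbursed to 84 CDCI participants
cdci_lifetime_cost = 182   # Treasury's estimated lifetime cost of the program

cost_share = cdci_lifetime_cost / cdci_disbursed
# about 0.32 of the amount obligated to the program, i.e., "almost a third"
```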
CDCI also does not require warrants of participating institutions, which would otherwise offset Treasury’s costs. As with CPP, Treasury must continue to monitor the performance of CDCI participants because their financial strength will affect their ability to repay Treasury and Treasury’s ability to exit the program. As of September 30, 2011, 5 of the 84 CDCI participants had missed at least one dividend or interest payment, according to Treasury. While the continuing weak economy could negatively affect distressed communities and the CDFIs that serve them, the program’s low dividend rates may help participants remain current on payments. When Treasury will exit CDCI is unknown but the dividend rate that program participants pay increases in 2018, which provides an incentive for some borrowers to repay before that rate change occurs. As with CPP, Treasury officials indicated that while Treasury’s current practice is to hold its CDCI investments, that strategy could change and Treasury could opt to sell its CDCI shares. Treasury has received more than $40 billion for its roughly $80 billion AIFP investment, in large part from its participation in GM’s IPO and its exit from Chrysler. In November and December 2010, Treasury received $13.5 billion from its participation in GM’s IPO and $2.1 billion for selling preferred stock in GM. Treasury’s investment in Chrysler ended with the repayment of $5.1 billion in loans in May 2011 and the $560 million in proceeds that Treasury received from the sale of its remaining equity stake to Fiat in July 2011. Treasury received $2.7 billion from its sale of Ally Financial trust preferred securities in March 2011. Treasury’s timing of its exit from GM and Ally Financial—and ultimate return on its investment—will depend on how it balances its competing goals of maximizing taxpayer returns and selling its shares as soon as practicable. 
As figure 6 shows, all of the $37.3 billion in outstanding AIFP funds is from Treasury’s investments in GM and Ally Financial, including 32 percent of GM’s common stock and 74 percent of Ally Financial’s common stock. The timing of Ally Financial’s IPO will be critical to Treasury’s exit strategy, but Ally Financial’s mortgage liabilities could hamper the company’s efforts to launch an IPO and make the timing of Treasury’s exit from Ally Financial unknown. On March 31, 2011, Ally Financial filed a registration statement with the Securities and Exchange Commission for a proposed IPO, but a date has yet to be announced for the IPO. Additionally, after six straight quarterly profits, including growing asset balances for its auto loan business, the company posted a loss of $210 million in the third quarter of 2011, dropping from a profit of about $270 million in the third quarter of 2010, primarily due to losses in its mortgage business. The company attributed these losses to the negative impact of the mortgage servicing rights valuation, resulting from a decline in interest rates and market volatility. Additionally, Ally Financial has $12 billion in debt coming due in 2012. Treasury officials told us that they continue to monitor market conditions and other factors in determining a divestment strategy for GM, but share prices would have to increase significantly from current levels to fully recoup Treasury’s investment in GM. As we previously reported (GAO-11-471), GM’s share price would have to increase by more than 60 percent from the IPO share price of $33 to an average of more than $54 for Treasury to fully recoup its investment. GM’s shares have traded far below the IPO share price—with shares closing above $33 only twice since March 2011—and as of September 30, 2011, the shares closed at $20.18 (fig. 7). 
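The GM break-even claim above is easy to confirm: a rise from the $33 IPO price to an average above $54 is indeed more than a 60 percent increase. A minimal sketch, with variable names of our own choosing:

```python
# Verifying the break-even arithmetic for Treasury's GM stake.
ipo_price = 33.0          # November 2010 IPO share price
break_even_price = 54.0   # average sale price needed to fully recoup, per the report

required_increase = break_even_price / ipo_price - 1   # fractional gain needed
# about 0.64, i.e., "more than 60 percent"
```

For comparison, the September 30, 2011, closing price of $20.18 cited above would have to rise even further, roughly 168 percent, to reach that break-even average.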
Additional reporting on AIFP appears in GAO, Troubled Asset Relief Program: Automaker Pension Funding and Multiple Federal Roles Pose Challenges for the Future, GAO-10-492 (Washington, D.C.: Apr. 6, 2010); Troubled Asset Relief Program: Continued Stewardship Needed as Treasury Develops Strategies for Monitoring and Divesting Financial Interests in Chrysler and GM, GAO-10-151 (Washington, D.C.: Nov. 2, 2009); and Auto Industry: Summary of Government Efforts and Automakers’ Restructuring to Date, GAO-09-553 (Washington, D.C.: Apr. 23, 2009). Treasury’s Plans to Sell AIG Shares Are Driven by Market Conditions In September 2008, prior to TARP, AIG received government assistance in the form of a loan from the Federal Reserve Bank of New York (FRBNY). In exchange, AIG provided shares of preferred stock to the AIG Credit Facility Trust created by FRBNY. These preferred shares were later converted to common stock and transferred to Treasury. In addition to this non-TARP support, Treasury provided TARP assistance to AIG in November 2008 by purchasing preferred shares that were also later converted to common stock. In late January 2011, following the recapitalization of AIG, Treasury owned 1.655 billion TARP and non-TARP common shares in AIG. In a May 2011 offering, Treasury sold about 132 million TARP AIG common shares on which it had a realized loss and about 68 million non-TARP AIG common shares on which it had a realized gain, leaving it with 960 million TARP and 495 million non-TARP shares. (AIG also sold 100 million shares of common stock during this offering.) The costs for underwriting, Treasury’s financial advisors, and Treasury’s legal counsel were paid by, and will continue to be paid by, AIG. Treasury, however, pays the costs for assistance it receives from FRBNY. 
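The AIG share counts above are internally consistent, though the sold amounts are approximate ("about") in the text. A minimal sketch, with variable names of our own choosing (all counts in millions of shares):

```python
# Consistency check: shares sold plus shares remaining should equal
# Treasury's 1.655 billion-share position from late January 2011.
initial_shares = 1655              # TARP plus non-TARP common shares
sold_tarp, sold_non_tarp = 132, 68
remaining_tarp, remaining_non_tarp = 960, 495

sold = sold_tarp + sold_non_tarp               # 200 million shares sold
remaining = remaining_tarp + remaining_non_tarp
```

The 200 million shares sold plus the 1,455 million remaining account for the full initial position.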
Based on the September 30, 2011, market price of AIG common stock, in selling all of its AIG common shares Treasury expects to incur a lifetime cost of $24.3 billion for its TARP shares and receive income of $12.8 billion for its non-TARP shares, giving it a lower-than-expected net estimated cost of $11.5 billion for assistance to AIG (see fig. 8). AIG originally issued $16 billion of preferred shares in a special purpose vehicle (SPV) called AIA Aurora LLC (or AIA), an SPV created by FRBNY to hold shares of certain portions of AIG’s foreign life insurance businesses. Likewise, AIG issued $9 billion of preferred shares in an SPV called American Life Insurance Company (ALICO) Holdings LLC, which was created to hold AIG’s ALICO holdings. AIG issued the shares to FRBNY in December 2009 in exchange for a $25 billion reduction in FRBNY’s revolving loan to AIG. As part of the recapitalization plan executed on January 14, 2011, AIG redeemed FRBNY’s preferred shares by drawing down the Series F equity facility and selling assets. In turn, FRBNY transferred the proceeds to Treasury, along with a cross-collateralization agreement against certain other AIG businesses held for sale. Since the recapitalization, AIG has used the additional sales proceeds to reduce the remaining liquidation preferences of Treasury’s preferred interests in the AIA and ALICO SPVs. Treasury has not announced any time frames for selling its AIG investments, but as it exits this assistance it needs to balance selling its AIG stock as soon as practicable based on market conditions with protecting taxpayers’ interests. Treasury officials said that the agency would work to avoid economic losses during this exit. 
To that end, Treasury officials said that the agency had waited to proceed with its first underwritten offering of AIG common stock until (1) it had reacquainted the investment community with AIG and (2) AIG had executed and closed other transactions, such as the March 2011 sale of MetLife equity securities and a subsequent March transaction that reduced the preferred interests in the AIA SPV by approximately $5.6 billion. The first underwritten offering of Treasury’s AIG common shares occurred in May 2011. Treasury expects to use underwritten offerings to sell most of its common stock in AIG, with assistance from AIG. While Treasury generally prefers to sell the common stock that it holds through underwritten offerings, it could also decide to sell stock through other mechanisms, including more frequent at-the-market offerings. To sell its AIG stock, officials said that the agency planned to regularly conduct analyses, consider market challenges, and rely on AIG to facilitate Treasury’s offerings. Treasury officials have said that they would continue to conduct analyses using factors such as AIG’s share price, investor interest in AIG stock, and possible future restructuring. Treasury officials also expect to face several challenges when disposing of AIG stock. First, because Treasury owns a significant amount of AIG stock—both as a percentage of total company stock and in absolute terms—the amount of shares the market can absorb may be limited. Second, continued price volatility in the domestic and global insurance markets could impede growth in those markets. Third, the continued low-interest-rate environment would likely lead to lower investment income and overall profits for AIG, which in turn could affect Treasury’s opportunities to sell its AIG shares.
According to Treasury officials, Treasury expects to rely on AIG to prepare and file certain paperwork with the Securities and Exchange Commission and provide other assistance when Treasury sells its remaining AIG shares. Given the decline in AIG’s stock price since January 2011 and the recent volatility in the stock market, when Treasury’s exit will be completed is unknown. Treasury will also need to balance the tension between its competing goals by deciding whether it should exit even if the stock value is below Treasury’s break-even amount.

Treasury purchased 31 SBA 7(a) securities between March and September 2010 in an attempt to alleviate liquidity strains in secondary markets for SBA 7(a) loans. Treasury announced in June 2011 that it intended to sell these securities and has sold nearly three-quarters of the portfolio. As of October 2011, Treasury had sold 23 securities; 8 securities remain to be sold, and Treasury projects lifetime income of $3.9 million (see fig. 9). Treasury officials took market effects into account when they considered exiting Treasury’s portfolio of SBA 7(a) securities. For example, Treasury analyzed SBA lending and securitization volumes, which had recovered to precrisis levels. According to Treasury officials, Treasury also consulted with its external advisor, EARNEST Partners, to understand the potential effect of its sales on the markets. According to Treasury officials, EARNEST Partners advised Treasury that its portfolio was small enough not to affect liquidity in the $15 billion market for SBA 7(a) securities. Moreover, the firm advised Treasury that it had received significant market interest in the securities after Treasury announced its intention to sell them. Treasury officials concluded that it was an opportune time to begin selling these securities without negatively affecting markets.
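The portfolio counts above imply the "nearly three-quarters" figure. A minimal sketch (Python, using the counts cited above):

```python
# Status of Treasury's SBA 7(a) securities portfolio as of October 2011,
# using the counts cited above.
purchased = 31
sold = 23
remaining = purchased - sold      # 8 securities left to sell

fraction_sold = sold / purchased  # about 0.74, i.e., nearly three-quarters
print(remaining, f"{fraction_sold:.0%} sold")  # prints: 8 74% sold
```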
Treasury officials stated that they considered several tradeoffs in deciding to sell the securities this year rather than holding them longer. Exiting quickly appears to be the main consideration, although Treasury officials stated that they balanced this with promoting financial stability and protecting the taxpayer. To determine what prices are reasonable to accept as it continues to sell these securities, Treasury requested market price estimates from two companies for each security it held and compared them to a break-even price and a reserve price, below which it would require additional approvals to proceed with the sale. While Treasury might have maximized taxpayer returns by holding the securities longer, according to Treasury officials, it faced prepayment risk that could have reduced the securities’ long-term earning potential.

TALF provided loans to certain institutions and business entities in return for collateral in the form of securities that are forfeited if the loans are not repaid. Securitization is a process by which similar debt instruments—such as loans, leases, or receivables—are aggregated into pools, and interest-bearing securities backed by such pools are then sold to investors. These asset-backed securities (ABS) provide a source of liquidity for consumers and small businesses because financial institutions can take assets that they would otherwise hold on their balance sheets, sell them as securities, and use the proceeds to originate new loans, among other purposes. Commercial mortgage-backed securities (CMBS) are securitizations with cash flows backed by principal and interest payments on a pool of loans on commercial properties.
For additional information about securitization and about TALF, see GAO, Federal Reserve System: Opportunities Exist to Strengthen Policies and Processes for Managing Emergency Assistance, GAO-11-696 (Washington, D.C.: July 21, 2011), and Troubled Asset Relief Program: Treasury Needs to Strengthen Its Decision-Making Process on the Term Asset-Backed Securities Loan Facility, GAO-10-25 (Washington, D.C.: Feb. 5, 2010). Treasury committed TARP funds to cover costs related to the TALF SPV, TALF LLC (see fig. 10). This SPV receives a portion of the interest income earned on TALF loans, referred to as excess interest, which can be used to purchase any borrower-surrendered collateral from FRBNY.

Treasury Continues to Address Staffing Needs While Also Relying on Financial Agents and Contractors to Support TARP Administration and Programs

OFS Staffing Declined Slightly for the First Time and Treasury Is Addressing Turnover-Related Staffing Issues

As we have identified in previous reports, Treasury still faces staffing challenges, including recent turnover stemming from the departure of term-appointed staff, but it has been addressing these challenges. Overall staffing numbers steadily increased from 2008 through 2010 but began declining for the first time in 2011 (see fig. 14). Also, as we previously reported in September 2011, OFS no longer has detailees from other federal agencies. When OFS was first organized, it relied on a significant number of staff from other agencies to start up new TARP programs. With most TARP programs winding down, OFS officials stated that OFS has begun to detail OFS staff to other Treasury programs, such as the Small Business Lending Fund (SBLF), and other federal agencies, such as the Bureau of Consumer Financial Protection. From September 2010 through September 2011, about 65 staff left OFS, according to Treasury officials. As overall staffing numbers have declined, staffing levels within individual OFS offices have fluctuated depending on staffing needs.
In some offices, staff levels have decreased. For example, in the Chief Investment Office—which includes staff working on various TARP programs, such as CPP—more than half of the staff departed from June 2010 to September 2011 (a decrease of 20 staff from 2010). Though some Chief Investment Office staff were replaced with staff from other OFS offices and staff new to Treasury, many were not replaced because their skill sets were no longer needed given the wind-down phase of the investment programs. Conversely, staff have increased in certain OFS offices where OFS management had identified specific needs. For example, the number of staff in the Office of Internal Review (OIR), which identifies risks and develops procedures for complying with EESA, increased from June 2010 to September 2011. Treasury had been seeking new staff with the skill set needed for this work, as we previously reported, and officials stated that the increase reflected a need to continue monitoring compliance among Treasury financial agents and contractors. Treasury filled these positions in part by streamlining the hiring process and better targeting its job announcements. Treasury officials anticipate that staffing levels in most OFS offices will decrease over time, though OFS will continue to seek talent for OIR, the Chief Financial Office, and the TARP housing programs that remain active. In addition to changes in staff numbers and office composition, a number of OFS’s leadership team have departed since 2010. As we previously reported, the Assistant Secretary of Financial Stability resigned on September 30, 2010. His replacement, OFS’s former Chief Counsel, was sworn in as Assistant Secretary in July 2011. An acting Chief Counsel has assumed the Assistant Secretary’s former role. Other staff in leadership positions have resigned since we last reported in January 2011.
The Chief Investment Officer and the Chief of Operations both left OFS and were replaced internally by OFS staff members. Both of these departing staff were in 3-year term senior executive service positions that were set to expire, according to Treasury officials. The Chief of Operations position is now held by a permanent staff member in an acting capacity, while the Chief Investment Officer position remains a term position. Program leadership has also changed for Treasury’s first and largest program, CPP. Its director left Treasury in 2011 and was replaced with another staff member from the Chief Investment Office. Though OFS has experienced staff turnover and still faces staffing challenges, it has been addressing these and other staffing issues. For example:

- As we previously reported, we recommended that OFS finalize its staffing plan. Treasury has implemented this recommendation, which should help OFS better ensure that it recognizes and addresses its staffing challenges, given that many staff remain in term appointments. As a result of this plan, OFS produced information on critical positions that should remain or be filled and identified successors for all of the chiefs and those in critical management positions directly below the chief level. OFS also plans to conduct succession planning for other staff below the management level.

- OFS now hires predominantly term-appointed staff for a maximum of 2 years, according to Treasury officials. Previously, it hired staff for “permanent” positions as well as term-appointed positions with a maximum of 4 years. Treasury officials noted that they made this change in recognition of the fact that most TARP programs are winding down. Additionally, limiting new hires to shorter-term appointments reduces the number of staff that Treasury will need to absorb when OFS closes.

- OFS has also filled or removed a number of vacancies, recognizing that it is in a period of winding down.
Specifically, OFS vacancies decreased from 61 in 2010 to 29 as of September 30, 2011. In addition, OFS continues to address employee morale concerns. As we previously reported, an employee survey in 2010 identified communication and staff development as two areas for improvement. According to Treasury officials, OFS took steps to address communication concerns through a monthly newsletter; “lunch and learn” sessions on a variety of topics; and briefings attended by senior Treasury officials, such as the Secretary of the Treasury. To address concerns about staff development, OFS officials said that they increased training offerings and provided the opportunity to complete professional development plans. Treasury has also been assisting term-appointed staff. For example, Treasury officials stated that they have continued to provide information sessions for staff on term appointments who are seeking permanent positions in the federal government. Officials also noted that they hold briefings to help staff in term appointments understand the terms of their appointments and find opportunities for detail positions at other agencies.

Treasury Increased Its Use of Financial Agents and Contractors

Treasury continues to rely heavily on financial agents to support TARP programs. According to OFS procedures, financial agency agreements are used for services that cannot be provided with existing Treasury or contractor resources and generally involve inherently governmental functions. Since the start of TARP, Treasury has relied on financial agents for asset management, transaction structuring, disposition services, custodial services, and administration and compliance support for the TARP housing assistance programs. Through fiscal year 2011, Treasury awarded 17 financial agency agreements, of which 14 remain active.
As shown in table 2, the total obligated value of financial agency agreements increased from about $327 million to about $547 million, or 67 percent, from the end of fiscal year 2010 to the end of fiscal year 2011. Treasury awarded two new financial agency agreements in fiscal year 2011 for transaction structuring and disposition services. As shown in table 3, five financial agency agreements accounted for 87 percent of the total obligated value through fiscal year 2011—about $476 million out of about $547 million. The vast majority of these obligations, approximately $383 million, went to Fannie Mae and Freddie Mac, which provide administrative and compliance services, respectively, for HAMP. Congress established Fannie Mae and Freddie Mac as for-profit, shareholder-owned corporations to stabilize and assist the U.S. secondary mortgage market and facilitate the flow of mortgage credit. Treasury also relies heavily on contractors to help administer TARP programs. Treasury uses TARP contracts for a variety of legal, investment consulting, accounting, and other services and supplies. Through fiscal year 2011, Treasury had awarded or used 116 contracts and blanket purchase agreements, up from 81 last year, and about half of them remain active. As shown in table 2, the total obligated value of these contracts has increased 42 percent since 2010, from $109 million to $155 million. About 75 percent of the contracts and blanket purchase agreements are relatively small (less than $1 million each). The two largest contracts are $33 million (with PricewaterhouseCoopers, LLP for internal control services) and $17 million (with Cadwalader, Wickersham & Taft, LLP for legal services). From the outset, Treasury encouraged small and minority- and women-owned businesses to pursue opportunities for TARP contracts and financial agency agreements.
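The percentage figures above can be checked directly from the dollar amounts. A minimal sketch (Python, dollar amounts in millions, using the figures cited above):

```python
# Growth in obligated values from fiscal year 2010 to fiscal year 2011,
# using the figures cited above ($ millions).
def pct_change(old, new):
    return (new - old) / old * 100

fa_growth = pct_change(327, 547)        # financial agency agreements: ~67%
contract_growth = pct_change(109, 155)  # contracts and BPAs: ~42%
top5_share = 476 / 547 * 100            # top five agreements: ~87% of total

print(f"{fa_growth:.0f}% {contract_growth:.0f}% {top5_share:.0f}%")  # prints: 67% 42% 87%
```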
The number of contracts and financial agency agreements that went to small and minority-owned businesses increased since 2010 from 16 to 31 (as shown in table 4). Also, 6 of the 17 total financial agency agreements and 25 of the 116 total contracts were with these businesses through 2011. In addition, 73 subcontracts under financial agency agreements and prime contracts went to small and/or minority- and women-owned businesses. As in previous years, the majority of these businesses participating in TARP are subcontractors. As we have reported, when Treasury began to quickly implement TARP initiatives in 2008, OFS had not finalized its procurement oversight procedures and lacked comprehensive internal controls for contractors and financial agents. Further, it did not have a comprehensive compliance system to monitor and fully address vendor-related conflicts of interest. Last year we reported that OFS had put in place an appropriate infrastructure to manage and monitor its network of financial agents and contractors. Specifically, by the end of fiscal year 2010, OFS had:

- defined organizational roles and responsibilities and established written policies and procedures for the management and oversight of TARP financial agents;

- taken action to ensure that sufficient personnel were assigned and properly trained to oversee the performance of financial agents and contractors;

- issued written procedures on measuring the performance of financial agents and installed qualitative and quantitative performance measures for several of its financial agents; and

- issued regulations on conflicts of interest; established an internal reporting system for tracking all vendor conflict-of-interest certifications, inquiries, and requests for waivers; and completed renegotiations of three contracts that predated the regulations.

In fiscal year 2011, Treasury continued to strengthen its policies and procedures for managing financial agents and contractors and conflicts of interest.
For example, contract administration personnel made improvements to OFS’s contract record system, including controls and clear deadlines for validating and certifying the completeness and accuracy of the information. According to an OFS official responsible for contracting, contract administration personnel audited most of the items in the record system by tracing the items back to source documents and found some areas that needed improvement. Fields selected for audit included date of award, contractor, potential contract value, and socioeconomic status; data fields used for informational purposes only, such as the contract specialist’s telephone number, were not selected. Contract actions were matched against data in the Federal Procurement Data System-Next Generation before deciding whether the items needed to be traced back to source documents. According to the official, new controls were established for adding new contract information to the system, and documentation procedures were developed to improve data consistency. The Office of Financial Agents (OFA) also expanded its implementation of performance assessments of financial agents by issuing performance measures and initiating assessments for five additional financial agents, including Fannie Mae. Quarterly performance assessments are now conducted for all of the active financial agents. OFA establishes qualitative and quantitative performance measures, with input from the financial agent, based on the core functions and responsibilities described in each financial agency agreement. OFA staff review financial agents’ performance against the qualitative and quantitative measures and prepare an overall performance assessment. The OFA reviews have identified areas in which a financial agent is performing above expectations or needs improvement.
According to an OFA official, the performance reviews have been an important management tool and have helped improve compliance through active communication and dialog with the financial agents. For those financial agents eligible to receive incentive payments, the performance reviews can affect the amount of payment. OFA may revise the performance measures annually to ensure continued alignment with the financial agents’ scope of work and OFS priorities. OIR took several actions to strengthen oversight of conflicts-of-interest requirements over the last year. Specifically, we found the following:

- OIR began conducting on-site compliance reviews to determine whether financial agents’ internal controls and procedures are working. According to Treasury officials, six reviews were conducted in fiscal year 2011. Treasury found that five of the financial agents reviewed had reasonable internal controls in place; there were no significant findings, although OIR made some recommendations. The review of the remaining financial agent identified significant weaknesses in its controls and in organizational management and oversight, and as a result the relationship with that financial agent was terminated. Thus far, the on-site compliance reviews have covered only financial agents, but OIR plans to begin reviewing contractors in the near future.

- In 2011, OIR began preparing a quarterly conflicts-of-interest feedback report for contractors. The report is shared with the Contracting Officer’s Technical Representatives and included in the contractor performance metrics that are incorporated into Contract and Agreement Review Board reports. OIR’s reports describe and rate contractors’ performance during the quarter in identifying, mitigating, and disclosing conflicts of interest to Treasury; submitting adequate conflicts-of-interest certifications in a timely manner; and expeditiously responding to requests for additional information, among other things.
- In 2011, according to OFS’s Compliance Officer, OIR put in place a requirement that all new contractors and financial agents, as well as Contracting Officer’s Technical Representatives and OFA personnel with similar responsibilities, receive conflict-of-interest training. The training materials used are similar to those used before 2011, but the information presented is more consistent across all the training materials than it was before the requirement was formalized.

- OIR continued to review a large number of inquiries from financial agents and contractors about potential conflicts of interest. The total reviewed as of September 30, 2011, was about 1,300, compared with about 655 through fiscal year 2010. Reasons OIR gave for the increase in inquiries in fiscal year 2011 include the addition of several new contractors and financial agents that year and the initiation of new processes, such as on-site reviews of entities’ conflicts-of-interest controls. Forty-five of the 1,300 inquiries have resulted in waivers, including 8 waivers during fiscal year 2011. According to OFS’s Compliance Officer, examples of waivers include permitting contractors and financial agents to use Office of Government Ethics Form 450 in place of Form 278 and allowing contractors and financial agents to use their own entertainment and gift policies in place of those in Treasury’s conflicts-of-interest regulation. OIR has never waived an actual or potential conflict of interest.

Staffing related to management and oversight of financial agents, contractors, and conflicts of interest has remained stable. However, contract administration positions were temporarily lost when the Procurement Services Division transitioned to the Internal Revenue Service (IRS) in fiscal year 2011 as part of a Treasury-wide consolidation to improve departmental offices’ procurement.
Treasury hopes to realize cost savings from the consolidation, improve internal controls and risk management, and enhance employee career development. According to an OFS contract administration official, several procurement positions were lost in the transition to IRS because staff did not want to move to the IRS facility in Oxon Hill, Maryland. IRS has agreed to staff a dedicated team of ten individuals to support OFS, the same level as before the move. The team is currently staffed by three federal employees and two contractors, with plans to expand to six federal employees and four contractors. According to the official, the procurement work is a partnership between OFS and IRS: OFS identifies vendors in conjunction with IRS, IRS awards the contracts, and OFS and IRS share post-award duties, such as managing vendors, invoicing, and keeping records.

Although Estimated Lifetime TARP Costs Have Decreased Significantly, Treasury Could Enhance Its Communication about the Costs of TARP

While lifetime cost estimates for TARP have decreased since the government first provided assistance in 2008, the lifetime cost and income estimates for specific TARP programs have fluctuated with changes in program activity and the market value of Treasury’s TARP investments. Although Treasury issues several reports on the costs of TARP, its communication about TARP costs in press releases is inconsistent and could be enhanced. Moreover, indirect costs such as moral hazard are also associated with TARP and remain a concern.

Estimated Direct TARP Costs Have Decreased Significantly

As of September 30, 2011, Treasury has incurred net costs of $28 billion, while recent federal lifetime cost projections for TARP—which include both realized and future cash flows—have decreased. In 2009, the Congressional Budget Office (CBO) estimated that TARP could cost $356 billion. However, CBO’s most recent estimate, using November 2011 data, is approximately $34 billion.
Treasury’s fiscal year 2011 financial statement, audited by GAO, reported that TARP would cost around $70 billion as of September 30, 2011, a decrease from the approximately $78 billion estimated as of September 2010. In general, the variation between the CBO and Treasury cost estimates is attributable to their timing—that is, market conditions and program activities differed when the estimates were developed. However, program participation assumptions for TARP-funded housing programs explain most of the large difference between the CBO and Treasury cost estimates. Treasury assumed that all of the $45.6 billion allocated to TARP housing programs would be utilized and, as a result, estimated that they would cost $45.6 billion. Conversely, CBO expected lower participation rates for the housing programs, resulting in a cost estimate of $13 billion as of November 2011. While these differences exist, CBO officials noted that as TARP continues to wind down, Treasury’s and CBO’s lifetime cost estimates should converge. This convergence is likely to occur as program costs become clearer and more recipients repay their assistance—reducing the number of outstanding TARP assets and the related uncertainty about how market risks will affect the future value of these investments. In our review of Treasury’s lifetime cost estimates for TARP’s equity investment programs, we found that the estimates for some programs changed only slightly, if at all, between September 2010 and September 2011, while others changed by a notable margin. For example, Treasury estimated that CPP would result in lifetime income of $11.2 billion as of September 2010, and its more recent estimate as of September 2011 was slightly higher, at $13 billion (see fig. 15). This increase in CPP’s estimated lifetime income was the result of proceeds in excess of costs from the sale of Citigroup common stock, offset in part by a decline in the estimated market value of Treasury’s remaining CPP investments.
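The housing-program assumptions account for most of the gap between the two lifetime estimates. A minimal sketch (Python, amounts in billions of dollars, using the approximate figures above; the residual reflects the timing and market-condition differences between the two estimates):

```python
# Decomposing the gap between Treasury's and CBO's TARP lifetime cost
# estimates, using the figures cited above ($ billions).
treasury_total, cbo_total = 70, 34
treasury_housing, cbo_housing = 45.6, 13

total_gap = treasury_total - cbo_total        # 36
housing_gap = treasury_housing - cbo_housing  # 32.6

print(f"Housing assumptions account for {housing_gap / total_gap:.0%} of the gap")
```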
Additionally, Treasury’s lifetime cost estimate of $45.6 billion for TARP-funded housing programs remained unchanged between September 2010 and September 2011 because Treasury continues to assume that all of the $45.6 billion allocated to the housing programs will be utilized. On the other hand, Treasury’s recent cost estimates for AIFP and assistance to AIG changed markedly compared with the estimates as of September 2010. Specifically, Treasury estimated a lifetime cost of $14.7 billion for AIFP as of September 2010, but that estimate increased to $23.6 billion using September 2011 data because of a decline in the value of Treasury’s equity investments in GM and Ally Financial. Additionally, Treasury’s estimate for assistance to AIG decreased from $36.9 billion to $24.3 billion between September 2010 and September 2011 as a result of improvements in AIG’s financial condition since Treasury first provided assistance and the restructuring of Treasury’s AIG investment into common stock. However, as noted earlier, the ultimate cost of the assistance to AIG could be about $11.5 billion after factoring in the estimated lifetime income of $12.8 billion from Treasury’s non-TARP assistance to AIG. As these changes show, lifetime cost estimates are likely to fluctuate, particularly for investment programs like AIFP and the AIG Investment Program, because future results rely heavily on the market price of common stock. Although Treasury regularly reports on the cost of TARP and its programs, it could improve the clarity and consistency of its communications on TARP costs, specifically in its press releases about specific programs. Treasury issues several reports—including the Agency Financial Report, Monthly 105(a) Reports, and Transaction Reports—that provide updates on the funds obligated and disbursed, repayments and income, and gains and losses.
Compared with Treasury’s past reporting practices, recent versions of the Agency Financial Report and the Monthly 105(a) Reports clearly present Treasury’s lifetime cost estimates for TARP and its programs. However, Treasury’s press releases do not consistently include these cost estimates. Rather, Treasury’s press releases on specific TARP programs typically include only transaction-oriented updates, such as disbursements and returns on Treasury’s investments from repayments, dividends, and the sale of its assets. While these transaction-oriented updates are important, they do not provide the general public with the greater context—the lifetime cost associated with individual programs. Furthermore, it appears that over the last 2 years Treasury has included lifetime cost estimates in some of its program-specific press releases for programs expected to result in lifetime income, while excluding these estimates for programs expected to result in a cost to taxpayers. For instance, a press release from April 2011 indicated that Treasury’s bank programs were expected to result in a lifetime positive return of approximately $20 billion. Other press releases for TARP banking programs also include this reference to expected lifetime income. However, during the same period Treasury did not include lifetime cost estimates in its press releases for TARP programs that projected a cost to the government, such as SBA 7(a), AIG, and AIFP. For example, Treasury issued a press release in June 2011 that described its sale of several SBA 7(a) securities and stated that the sale resulted in overall gains and income. The content of this press release implied that the program had earned a significant amount of money but did not provide the more comprehensive lifetime cost estimate for the program, which was $1 million at that time.
In addition, over the last 2 years none of Treasury’s press releases for AIG and AIFP (programs expected to cost approximately $24.3 billion and $23.6 billion, respectively, as of September 30, 2011) have included the lifetime cost estimates associated with the programs; instead, these releases have described Treasury’s investment in the programs and the revenues received. This inconsistent disclosure of lifetime cost estimates raises concerns about the consistency and transparency of Treasury’s press releases and suggests a selective approach that focuses on reporting program lifetime income but not lifetime costs. As noted earlier in this report, the $24.3 billion is associated with TARP-related assistance; factoring in income of $12.8 billion for Treasury’s non-TARP shares could result in a net estimated cost of $11.5 billion. As we have previously reported, transparency is important in the context of TARP and the unprecedented government assistance it provided to the financial sector. In discussing our questions about the press releases with Treasury officials, they noted that Treasury provides cost information in other public reports. However, by improving the clarity of its communication on the costs of TARP through consistently incorporating lifetime cost estimates into its program press releases, Treasury could reduce potential confusion and misunderstanding of TARP’s results. Treasury would also be setting a precedent for cost reporting associated with any future government interventions.

Despite Estimated Decreases in TARP Costs, Government Interventions Such as TARP Can Exacerbate Moral Hazard

Though direct costs for TARP—including potential lifetime income—can be estimated and quantified, certain indirect costs connected to the government’s assistance are less easily measured. For example, as we have previously reported, when the government provides assistance to the private sector, it may increase moral hazard that would then need to be mitigated.
That is, in the face of government assistance, private firms are motivated to take risks they might not take in the absence of such assistance, or creditors may not price into their extensions of credit the full risk assumed by the firm, believing that the government would provide assistance should the firm become distressed. EESA and the amendments made by the American Recovery and Reinvestment Act of 2009 established a number of measures to mitigate the moral hazard of TARP by imposing certain requirements on participating institutions. These include providing Treasury with warrants in exchange for TARP funds, which allow taxpayers to benefit from any appreciation of the company’s stock, and limiting certain bonuses and golden parachute payments for certain highly compensated employees and senior executive officers, as such payments can encourage excessive risk-taking. Even with such requirements in place, however, government intervention in the private sector can encourage market participants to expect similar emergency actions in the future. This expectation diminishes market discipline because it can weaken private or market-based incentives to properly manage risks and, in particular, can contribute to the perception that some firms are “too big to fail.” Government interventions can also have consequences for the banking industry as a whole, including institutions that do not receive bailout funds. For instance, investors may perceive the debt of institutions that received government assistance as less risky because of the potential for future government bailouts. This perception could lead them to invest in such assisted institutions instead of those that did not receive assistance. However, such effects may be temporary, as evidenced by the recent downgrade by Moody’s Investors Service, Inc. (Moody’s) of the long-term credit ratings of Bank of America Corp. and Wells Fargo & Co.
after the Dodd-Frank Act’s new regulatory provisions, which aim to avoid or at least limit future government bailouts of financial institutions, were enacted into law. Moody’s stated that it downgraded these credit ratings because it believes the government is less likely to rescue these financial institutions now than it was during the financial crisis. This rating change could make it harder for these institutions to access financing on favorable terms. The Dodd-Frank Act included a number of provisions intended to address the problem of “too big to fail” by strengthening oversight of financial institutions. For example, the act required the Federal Reserve to implement enhanced prudential standards for bank holding companies that are deemed systemically important and increased oversight of certain nonbank financial companies. Specifically, the Federal Reserve has been given supervisory authority over any nonbank financial company that the Financial Stability Oversight Council determines could pose a threat to the financial stability of the country. The act also provided new reporting and resolution authorities to the Federal Deposit Insurance Corporation for certain large, systemic financial institutions and required those institutions to write plans for their unwinding. However, if these new provisions fail to address the too-big-to-fail phenomenon, future financial crises could emerge that are similar to or worse than the financial meltdown that escalated with the failures of Bear Stearns and Lehman Brothers in 2008. That is, some firms may see the government assistance that was provided during the last crisis as a promise of similar aid in the future and therefore have an incentive to continue engaging in risky activities. Ultimately, any moral hazard effects of the Dodd-Frank Act changes will not be known until financial institutions face another period of financial stress.
Conclusions

As Treasury continues to unwind most TARP programs, the estimated costs of TARP have decreased significantly from when Treasury first announced the program. Treasury’s latest estimate of approximately $70 billion as of September 30, 2011, includes a large projection of lifetime income from CPP, and the cost estimates for assistance to AIG and the auto companies continue to fluctuate, demonstrating that such estimates are subject to price movements in the market, among other factors, and could change in the future. We found that Treasury enhanced some of its cost reporting in the past year, although its press releases could be improved. Its communications about specific programs include information about estimated lifetime costs and income only when programs are expected to result in lifetime income, not when they are expected to result in a lifetime cost. This practice does not represent a consistent approach to reporting to the public through press releases on the costs of individual programs. As we have indicated in many past reports on TARP, transparency remains a critical element of the government’s unprecedented assistance to the financial sector. Such transparency helps the public understand the costs of TARP assistance and how the government intervened in various markets. Enhancing the transparency and clarity of these press releases will also set a precedent for any future government interventions, should they ever be needed.

Recommendation for Executive Action

To enhance transparency about the costs of TARP programs as Treasury unwinds its involvement, we recommend that the Secretary of the Treasury enhance Treasury’s communications with the public, in particular its press releases, about TARP programs and costs by consistently including information on estimated lifetime costs, especially when reporting on program results.
For example, Treasury should consider including lifetime cost estimates, or references to Treasury reports that include such information, in its press releases about specific programs.

Agency Comments and Our Evaluation

We provided a draft of this report to Treasury for its review and comment. Treasury provided written comments that we have reprinted in appendix III. Treasury also provided technical comments that we have incorporated as appropriate. In its written comments, Treasury agreed with our recommendation that it could further enhance its communications about the costs of TARP programs in its program-specific press releases, also noting that it has established comprehensive accountability and transparency regarding TARP. Treasury stated that it will implement our recommendation by including a link to its Monthly 105(a) Report, which contains cost estimates for each TARP program, in its future program-specific press releases. Implementation of our recommendation through this practice would provide a good opportunity for Treasury to clearly and fully communicate TARP program costs to the public.

We are sending copies of this report to the Financial Stability Oversight Board, the Special Inspector General for TARP, interested congressional committees and members, and Treasury. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Orice Williams Brown at (202) 512-8678 or williamso@gao.gov, A. Nicole Clowers at (202) 512-8678 or clowersa@gao.gov, or Thomas J. McCool at (202) 512-2642 or mccoolt@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.
Appendix I: Scope and Methodology

To assess the condition and status of all programs initiated under the Troubled Asset Relief Program (TARP), we collected and analyzed data about program utilization and assets held, as applicable, focusing primarily on financial information that we had audited in the Office of Financial Stability’s (OFS) financial statements, as of September 30, 2011. As noted in the report, in some instances we provided more recent, unaudited financial information. The financial information includes the types of assets held in the program, obligations that represent the highest amount ever obligated for a program (to provide historical information on total obligations), disbursements, and income. We also provide information on program start dates, defining them based on the start of the first activity under a program, and we provide program end dates, based on official announcements or program terms from the Department of the Treasury (Treasury). Finally, we provide approximate program exit dates—either estimated by Treasury or actual if the exit already occurred—that reflect the time when a program will no longer hold assets that need to be managed. We also used OFS cost estimates for TARP that we audited as part of the financial statement audit and reviewed Congressional Budget Office (CBO) cost estimates from publicly available CBO reports. In addition, we tested OFS’s internal controls over financial reporting as they relate to our annual audit of OFS’s financial statements. The financial information used in this report is sufficiently reliable to assess the condition and status of TARP programs based on the results of our audits of fiscal years 2009, 2010, and 2011 financial statements for TARP. We also examined Treasury documentation such as program terms, decision memos, press releases, and reports on TARP programs and costs.
Also, we interviewed OFS program officials to determine the current status of each TARP program and the role of TARP staff while most programs continue to unwind, and to update what is known about exit considerations for TARP programs. Other TARP officials we interviewed included those responsible for financial reporting. Additionally, in reporting on these programs and their exit considerations we leveraged our previous TARP reports and publications from the Special Inspector General for TARP and the Congressional Oversight Panel, as appropriate. In addition:

For the Capital Purchase Program, we used OFS’s reports to describe the status of the program, including the amount of investments outstanding, the number of institutions that had repaid their investments, and the amount of dividends paid, among other things. In addition, we reviewed Treasury’s press releases on the program. We also relied on information that we have collected as part of our ongoing review of the financial condition of Capital Purchase Program institutions.

For the Community Development Capital Initiative, we interviewed program officials to determine how the program is managed and what repayment or exit concerns Treasury has for the program.

To update the status of the Automotive Industry Financing Program (AIFP) and Treasury’s plans for managing its investment in the companies, we leveraged our past work; reviewed information on Treasury’s exit from Chrysler, including Chrysler and Treasury press releases; and reviewed information on Treasury’s plans for overseeing its remaining financial interests in General Motors (GM) and Ally Financial, including Administration and Treasury reports. To obtain information on the current financial condition of the companies, we reviewed information on GM’s and Ally Financial’s finances and operations, including financial statements and industry analysts’ reports.

To update the status of the American International Group, Inc.
(AIG) Investment Program (formerly the Systemically Significant Failing Institutions Program), we reviewed relevant documents from Treasury and other parties. These documents included 105(a) reports provided periodically to Congress by Treasury, as well as reports produced by the Board of Governors of the Federal Reserve System and the Federal Reserve Bank of New York, and other relevant documentation such as AIG’s financial disclosures and Treasury’s press releases. We also interviewed officials from each of these agencies and AIG.

For the Small Business Administration (SBA) 7(a) Securities Purchase Program, we analyzed data on Treasury purchases and dispositions of SBA 7(a) securities collected during our financial audit. We also reviewed decision memos on the disposition of the SBA 7(a) portfolio. In addition, we reviewed press releases about the program’s sales activity and income. We reviewed SBA 7(a) loan volume data provided by Treasury and compared the data to trends in our past reports related to SBA 7(a) lending, and we also interviewed program staff about the status of the program and plans for future sales.

For the Term Asset-Backed Securities Loan Facility (TALF), we reviewed program terms and requested data from Treasury about loan prepayments and TALF LLC activity. We also researched trends in the values of commercial mortgage-backed securities. Additionally, we interviewed OFS officials about their role in the program as it continues to unwind.

To update the status of the Public-Private Investment Program, we analyzed program quarterly reports, term sheets, and other documentation related to the public-private investment funds. We also interviewed OFS staff responsible for the program to determine its status while it remains in active investment status.
To determine the status of Treasury’s TARP-funded housing programs, we obtained and reviewed Treasury’s published reports on the programs and servicer performance, documentation on projected cost estimates and disbursements for each of the programs, and guidelines and related updates issued by Treasury for each of the programs. In addition, we obtained information from and interviewed Treasury officials about the status of the TARP-funded housing programs, including numbers of borrowers helped and the actions Treasury had taken to address our prior recommendations.

To obtain the final status for three programs that Treasury exited and for which Treasury no longer holds assets that it must manage—the Asset Guarantee Program, Capital Assistance Program, and Targeted Investment Program—we reviewed Treasury’s recent reports and leveraged our past work.

To determine the proportion of permanent, term, and detailee staff in OFS, we reviewed program data showing changes in the number of staff over time and in each OFS office. We assessed this staffing data for reliability by comparing it to organizational directories to ensure that the changes were generally equivalent. We determined that the staffing data was sufficiently reliable to show trends in OFS staffing. We also interviewed agency officials to gain insight into the trends. Additionally, we obtained program-specific staffing information from agency officials during interviews to inform our discussion of the staffing needs of each TARP program and any succession planning undertaken by OFS. Also, we reviewed OFS documentation, such as the organizational directories, to analyze any changes in leadership positions in OFS. To assess the staffing challenges of OFS as TARP continues to wind down, we reviewed past GAO reports and recommendations and the OFS staffing and development plan, and we interviewed agency officials.
To assess OFS’s use of financial agents and contractors since TARP was established in October 2008, we reviewed information on financial agents and contractors from OFS’s contract record system and interviewed Treasury contract officials about financial agency agreements, contracts, and blanket purchase agreements as of September 30, 2011, that support TARP administration and programs. We analyzed information from the contract record system to update key details on the status of TARP financial agents and contractors, such as total number of agreements and contracts, type of services being performed, obligated values, periods of performance, and share of work by small businesses. Through discussions with Treasury officials responsible for the contract record system and inquiries we made about selected data items, as well as matching OFS’s contract list against data we obtained from the Federal Procurement Data System-Next Generation, we determined that data in the record system were sufficiently reliable for our purposes. To assess OFS’s progress in strengthening its infrastructure for managing and overseeing the performance of TARP financial agents and contractors and addressing conflicts of interest that could arise with the use of private sector firms, we reviewed various documents and interviewed OFS officials about changes in fiscal year 2011 to its policies and procedures regarding (1) management and oversight of TARP financial agents and contractors and (2) monitoring and oversight activities by the OFS team responsible for financial agent and contractor compliance with TARP conflicts-of-interest requirements. We did not review financial agents’ performance assessments or incentive payments. To ascertain what is known about TARP costs, we reviewed the cost reporting of CBO, the Office of Management and Budget (OMB), and Treasury, including the credit reform accounting methods used to develop cost estimates for TARP programs. 
For our analysis we focused on Treasury’s cost estimates for the following reasons: (1) Treasury’s recent financial statements and cost projections have been audited by GAO and (2) estimates reported by OMB are based on numbers provided by Treasury. We interviewed officials from CBO and Treasury on the methods used to calculate TARP costs and the reasons for any significant differences among the cost estimates calculated by each agency. We utilized data from our financial audit and leveraged other internal resources related to credit reform accounting and the modeling of TARP costs. We also reviewed Treasury’s press releases on the costs of TARP. For our review of the moral hazards of TARP, we reviewed pertinent legislation such as the Emergency Economic Stabilization Act and the Dodd-Frank Wall Street Reform and Consumer Protection Act and utilized previous GAO reports and Congressional Oversight Panel publications.

We conducted this performance audit from June 2011 to January 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Information on Programs Treasury Has Exited

This appendix includes information about TARP programs that Treasury has exited and for which Treasury no longer holds assets to manage. We provide an overview of the purpose of these programs, when they started and ended, the status of funding, and the final lifetime costs or income of the programs, as applicable.
Asset Guarantee Program

The Asset Guarantee Program was Treasury’s insurance program, which provided federal government assurances for assets held by financial institutions that were deemed critical to the functioning of the U.S. financial system. Citigroup and Bank of America were the only two institutions that participated in this program before it was terminated. As previously reported, Bank of America paid Treasury and others a fee for terminating the term sheet before any assets were segregated. Treasury sold the remaining assets that it held related to this program in January 2011 with the sale of Citigroup warrants, though it could receive future monies from trust preferred stock held by the Federal Deposit Insurance Corporation. Treasury reports that lifetime income from terminating the Bank of America agreement and exiting Citigroup-related assets is $3.7 billion (see fig. 16).

Targeted Investment Program

The Targeted Investment Program was designed to foster market stability and thereby strengthen the economy by investing, on a case-by-case basis, in institutions that Treasury deemed critical to the functioning of the financial system. Only two institutions—Bank of America and Citigroup—participated in this program, and each received $20 billion in capital investment, which both repaid in December 2009. Treasury auctioned the Bank of America warrant that it received under the Targeted Investment Program in March 2010 and auctioned the Citigroup warrant in January 2011. Treasury reports that lifetime income for this program totals $4 billion (see fig. 17).

Capital Assistance Program

The Capital Assistance Program was designed to further improve confidence in the banking system by helping ensure that the largest 19 U.S.
bank holding companies had sufficient capital to cushion themselves against larger than expected future losses, as determined by the Supervisory Capital Assessment Program—or “stress test”—conducted by the federal banking regulators. The Capital Assistance Program was announced in February 2009 and ended in November 2009. It was never utilized.

Appendix III: Comments from the Department of the Treasury

Appendix IV: GAO Contacts and Staff Acknowledgments

GAO Contacts

Staff Acknowledgments

In addition to the contacts named above, Gary Engel, Mathew J. Scirè, and William T. Woods (lead Directors); Marcia Carlsen, Lawrance Evans, Jr., Dan Garcia-Diaz, Lynda Downing, Kay Kuhlman, Harry Medina, Joseph O’Neill, John Oppenheim, Raymond Sendejas, and Karen Tremba (lead Assistant Directors); Emily Chalmers; Rachel DeMarcus; John Forrester; Christopher Forys; Heather Krause; Robert Lee; Aaron Livernois; Dragan Matic; Emily Owens; Erin Schoening; and Mel Thomas have made significant contributions to this report.

Related GAO Products

Financial Audit: Office of Financial Stability (Troubled Asset Relief Program) Fiscal Years 2011 and 2010 Financial Statements. GAO-12-169. Washington, D.C.: November 10, 2011.

Troubled Asset Relief Program: Status of GAO Recommendations to Treasury. GAO-11-906R. Washington, D.C.: September 16, 2011.

Troubled Asset Relief Program: The Government’s Exposure to AIG Following the Company’s Recapitalization. GAO-11-716. Washington, D.C.: July 28, 2011.

Troubled Asset Relief Program: Results of Housing Counselors Survey on Borrowers’ Experiences with the Home Affordable Modification Program. GAO-11-367R. Washington, D.C.: May 26, 2011.

Troubled Asset Relief Program: Survey of Housing Counselors about the Home Affordable Modification Program, an E-supplement to GAO-11-367R. GAO-11-368SP. Washington, D.C.: May 26, 2011.

TARP: Treasury’s Exit from GM and Chrysler Highlights Competing Goals, and Results of Support to Auto Communities Are Unclear.
GAO-11-471. Washington, D.C.: May 10, 2011.

Management Report: Improvements Are Needed in Internal Control Over Financial Reporting for the Troubled Asset Relief Program. GAO-11-434R. Washington, D.C.: April 18, 2011.

Troubled Asset Relief Program: Status of Programs and Implementation of GAO Recommendations. GAO-11-476T. Washington, D.C.: March 17, 2011.

Troubled Asset Relief Program: Treasury Continues to Face Implementation Challenges and Data Weaknesses in Its Making Home Affordable Program. GAO-11-288. Washington, D.C.: March 17, 2011.

Troubled Asset Relief Program: Actions Needed by Treasury to Address Challenges in Implementing Making Home Affordable Programs. GAO-11-338T. Washington, D.C.: March 2, 2011.

Troubled Asset Relief Program: Third Quarter 2010 Update of Government Assistance Provided to AIG and Description of Recent Execution of Recapitalization Plan. GAO-11-46. Washington, D.C.: January 20, 2011.

Troubled Asset Relief Program: Status of Programs and Implementation of GAO Recommendations. GAO-11-74. Washington, D.C.: January 12, 2011.

Financial Audit: Office of Financial Stability (Troubled Asset Relief Program) Fiscal Years 2010 and 2009 Financial Statements. GAO-11-174. Washington, D.C.: November 15, 2010.

Troubled Asset Relief Program: Opportunities Exist to Apply Lessons Learned from the Capital Purchase Program to Similarly Designed Programs and to Improve the Repayment Process. GAO-11-47. Washington, D.C.: October 4, 2010.

Troubled Asset Relief Program: Bank Stress Test Offers Lessons as Regulators Take Further Actions to Strengthen Supervisory Oversight. GAO-10-861. Washington, D.C.: September 29, 2010.

Financial Assistance: Ongoing Challenges and Guiding Principles Related to Government Assistance for Private Sector Companies. GAO-10-719. Washington, D.C.: August 3, 2010.

Troubled Asset Relief Program: Continued Attention Needed to Ensure the Transparency and Accountability of Ongoing Programs. GAO-10-933T. Washington, D.C.: July 21, 2010.
Management Report: Improvements Are Needed in Internal Control Over Financial Reporting for the Troubled Asset Relief Program. GAO-10-743R. Washington, D.C.: June 30, 2010.

Troubled Asset Relief Program: Treasury’s Framework for Deciding to Extend TARP Was Sufficient, but Could Be Strengthened for Future Decisions. GAO-10-531. Washington, D.C.: June 30, 2010.

Troubled Asset Relief Program: Further Actions Needed to Fully and Equitably Implement Foreclosure Mitigation Programs. GAO-10-634. Washington, D.C.: June 24, 2010.

Debt Management: Treasury Was Able to Fund Economic Stabilization and Recovery Expenditures in a Short Period of Time, but Debt Management Challenges Remain. GAO-10-498. Washington, D.C.: May 18, 2010.

Troubled Asset Relief Program: Update of Government Assistance Provided to AIG. GAO-10-475. Washington, D.C.: April 27, 2010.

Troubled Asset Relief Program: Automaker Pension Funding and Multiple Federal Roles Pose Challenges for the Future. GAO-10-492. Washington, D.C.: April 6, 2010.

Troubled Asset Relief Program: Home Affordable Modification Program Continues to Face Implementation Challenges. GAO-10-556T. Washington, D.C.: March 25, 2010.

Troubled Asset Relief Program: Treasury Needs to Strengthen Its Decision-Making Process on the Term Asset-Backed Securities Loan Facility. GAO-10-25. Washington, D.C.: February 5, 2010.

Troubled Asset Relief Program: The U.S. Government Role as Shareholder in AIG, Citigroup, Chrysler, and General Motors and Preliminary Views on its Investment Management Activities. GAO-10-325T. Washington, D.C.: December 16, 2009.

Financial Audit: Office of Financial Stability (Troubled Asset Relief Program) Fiscal Year 2009 Financial Statements. GAO-10-301. Washington, D.C.: December 9, 2009.

Troubled Asset Relief Program: Continued Stewardship Needed as Treasury Develops Strategies for Monitoring and Divesting Financial Interests in Chrysler and GM. GAO-10-151. Washington, D.C.: November 2, 2009.
Troubled Asset Relief Program: One Year Later, Actions Are Needed to Address Remaining Transparency and Accountability Challenges. GAO-10-16. Washington, D.C.: October 8, 2009.

Troubled Asset Relief Program: Capital Purchase Program Transactions for October 28, 2008, through September 25, 2009, and Information on Financial Agency Agreements, Contracts, Blanket Purchase Agreements, and Interagency Agreements Awarded as of September 18, 2009. GAO-10-24SP. Washington, D.C.: October 8, 2009.

Debt Management: Treasury Inflation Protected Securities Should Play a Heightened Role in Addressing Debt Management Challenges. GAO-09-932. Washington, D.C.: September 29, 2009.

Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-1048T. Washington, D.C.: September 24, 2009.

Troubled Asset Relief Program: Status of Government Assistance Provided to AIG. GAO-09-975. Washington, D.C.: September 21, 2009.

Troubled Asset Relief Program: Treasury Actions Needed to Make the Home Affordable Modification Program More Transparent and Accountable. GAO-09-837. Washington, D.C.: July 23, 2009.

Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-920T. Washington, D.C.: July 22, 2009.

Troubled Asset Relief Program: Status of Participants’ Dividend Payments and Repurchases of Preferred Stock and Warrants. GAO-09-889T. Washington, D.C.: July 9, 2009.

Troubled Asset Relief Program: June 2009 Status of Efforts to Address Transparency and Accountability Issues. GAO-09-658. Washington, D.C.: June 17, 2009.

Troubled Asset Relief Program: Capital Purchase Program Transactions for October 28, 2008, through May 29, 2009, and Information on Financial Agency Agreements, Contracts, Blanket Purchase Agreements, and Interagency Agreements Awarded as of June 1, 2009. GAO-09-707SP. Washington, D.C.: June 17, 2009.

Auto Industry: Summary of Government Efforts and Automakers’ Restructuring to Date.
GAO-09-553. Washington, D.C.: April 23, 2009.

Troubled Asset Relief Program: March 2009 Status of Efforts to Address Transparency and Accountability Issues. GAO-09-504. Washington, D.C.: March 31, 2009.

Troubled Asset Relief Program: Capital Purchase Program Transactions for the Period October 28, 2008 through March 20, 2009 and Information on Financial Agency Agreements, Contracts, and Blanket Purchase Agreements Awarded as of March 13, 2009. GAO-09-522SP. Washington, D.C.: March 31, 2009.

Troubled Asset Relief Program: March 2009 Status of Efforts to Address Transparency and Accountability Issues. GAO-09-539T. Washington, D.C.: March 31, 2009.

Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-484T. Washington, D.C.: March 19, 2009.

Federal Financial Assistance: Preliminary Observations on Assistance Provided to AIG. GAO-09-490T. Washington, D.C.: March 18, 2009.

Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-474T. Washington, D.C.: March 11, 2009.

Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-417T. Washington, D.C.: February 24, 2009.

Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-359T. Washington, D.C.: February 5, 2009.

Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-296. Washington, D.C.: January 30, 2009.

Troubled Asset Relief Program: Additional Actions Needed to Better Ensure Integrity, Accountability, and Transparency. GAO-09-266T. Washington, D.C.: December 10, 2008.

Auto Industry: A Framework for Considering Federal Financial Assistance. GAO-09-247T. Washington, D.C.: December 5, 2008.

Auto Industry: A Framework for Considering Federal Financial Assistance. GAO-09-242T. Washington, D.C.: December 4, 2008.
Troubled Asset Relief Program: Status of Efforts to Address Defaults and Foreclosures on Home Mortgages. GAO-09-231T. Washington, D.C.: December 4, 2008.

Troubled Asset Relief Program: Additional Actions Needed to Better Ensure Integrity, Accountability, and Transparency. GAO-09-161. Washington, D.C.: December 2, 2008.
The Emergency Economic Stabilization Act of 2008 authorized the Department of the Treasury (Treasury) to create the Troubled Asset Relief Program (TARP), a $700 billion program designed to restore the liquidity and stability of the financial system. The act also requires that GAO report every 60 days on TARP activities. This report examines (1) the condition and status of TARP programs; (2) Treasury’s management of TARP operations, including staffing for the Office of Financial Stability (OFS) and oversight of contractors and financial agents; and (3) what is known about the direct and indirect costs of TARP. To do this work, GAO analyzed audited financial data for various TARP programs; reviewed documentation such as program terms and internal decision memos; analyzed TARP cost estimates from the Congressional Budget Office (CBO), the Office of Management and Budget, and Treasury; and interviewed CBO and OFS officials. Many TARP programs continue to be in various stages of unwinding and some programs, notably those that focus on the foreclosure crisis, remain active. The figure provides an overview of selected programs and the amount disbursed and outstanding, as applicable. Treasury has articulated broad principles for exiting TARP, including exiting TARP programs as soon as practicable and seeking to maximize taxpayer returns, goals that at times conflict. Some of the programs that Treasury continues to unwind, such as investments in American International Group, Inc. (AIG), require Treasury to actively manage the timing of its exit as it balances its competing goals. For other programs, such as the Capital Purchase Program (CPP)—which was created to provide capital to financial institutions—Treasury’s exit will be driven primarily by the financial condition of the participating institutions. Consequently, the timing of Treasury’s exit from TARP remains uncertain. 
Treasury continues to manage the various TARP programs using OFS staff, financial agents, and contractors. Overall OFS staffing has declined slightly for the first time as staff responsible for managing TARP investment programs and those in term-appointed leadership positions have departed. However, staff in some offices within OFS have increased—for example, in the Office of Internal Review, which helps to ensure that financial agents and contractors comply with laws and regulations. Through September 30, 2011, about half of Treasury’s 116 contracts remained active, along with 14 of the 17 financial agency agreements. Treasury has continued to strengthen its management and oversight of contractors and financial agents, as well as its conflict-of-interest requirements. In response to a GAO recommendation, OFS has finalized a plan to address staffing levels and expertise that includes identifying critical positions and conducting succession planning, in light of the temporary nature of its work. Treasury and CBO project that TARP costs will be much lower than the amount authorized when the program was initially announced. Treasury’s fiscal year 2011 financial statement, audited by GAO, estimated that the lifetime cost of TARP would be about $70 billion—with CPP expected to generate the most lifetime income, or net income in excess of costs. OFS also reported that from inception through September 30, 2011, the incurred cost of TARP transactions was $28 billion. Although Treasury regularly reports on the cost of TARP programs and has enhanced such reporting over time, GAO’s analysis of Treasury press releases about specific programs indicates that information about estimated lifetime costs and income is included only when programs are expected to result in lifetime income. For example, Treasury issued a press release for its bank investment programs, including CPP, and noted that the programs would result in lifetime income, or profit.
However, press releases for investments in AIG, a program that is anticipated to result in a lifetime cost to Treasury, did not include program-specific cost information. Although press releases for programs expected to result in a cost to Treasury provide useful transaction information, they exclude lifetime, program-specific cost estimates. Consistently providing greater transparency about cost information for specific TARP programs could help reduce potential misunderstanding of TARP’s results. While Treasury can measure and report direct costs, indirect costs associated with the moral hazard created by the government’s intervention in the private sector are more difficult to measure and assess.
Background Information technology should enable government to better serve the American people. However, despite spending hundreds of billions on IT since 2000, the federal government has experienced failed IT projects and has achieved few of the productivity improvements that private industry has realized from IT. Too often, federal IT projects run over budget, fall behind schedule, or fail to deliver promised results. In combating this problem, proper oversight is critical. Both OMB and federal agencies have key roles and responsibilities for overseeing IT investment management, and OMB is responsible for working with agencies to ensure investments are appropriately planned and justified. However, as we have described in numerous reports, although a variety of best practices exist to guide successful IT acquisition, federal IT projects too frequently incur cost overruns and schedule slippages while contributing little to mission-related outcomes. Agencies have reported that poor-performing projects have often used a “big bang” approach—that is, projects that are broadly scoped and aim to deliver capability several years after initiation. For example, in 2009 the Defense Science Board reported that the Department of Defense’s (Defense) acquisition process for IT systems was too long, ineffective, and did not accommodate the rapid evolution of IT. The board reported that the average time to deliver an initial program capability for a major IT system acquisition at Defense was over 7 years. Each year, OMB and federal agencies work together to determine how much the government plans to spend on IT projects and how these funds are to be allocated. As reported to OMB, federal agencies plan to spend more than $82 billion on IT investments in fiscal year 2014, an amount that covers not only acquiring such investments, but also operating and maintaining them.
Of the reported amount, 27 federal agencies plan to spend about $75 billion: $17 billion on development and acquisition and $58 billion on operations and maintenance (O&M). Figure 1 shows the percentages of the $75 billion in total planned 2014 spending devoted to development and to O&M. However, this $75 billion does not reflect the spending of the entire federal government. We have previously reported that OMB’s figure understates the total amount spent on IT investments. Specifically, it does not include IT investments by 58 independent executive branch agencies, including the Central Intelligence Agency, or by the legislative or judicial branches. Further, agencies differed on what they considered an IT investment; for example, some have considered research and development systems as IT investments, while others have not. As a result, not all IT investments are included in the federal government’s estimate of annual IT spending. OMB provided guidance to agencies on how to report on their IT investments, but this guidance did not ensure complete reporting or facilitate the identification of duplicative investments. Consequently, we recommended, among other things, that OMB improve its guidance to agencies on identifying and categorizing IT investments. In September 2011, we reported that the results of OMB initiatives to identify potentially duplicative investments were mixed and that several federal agencies did not routinely assess their entire IT portfolios to identify and remove or consolidate duplicative systems. In particular, we said that most of OMB’s recent initiatives had not yet demonstrated results, and several agencies did not routinely assess legacy systems to determine if they were duplicative. As a result, we recommended that OMB require federal agencies to report the steps they take to ensure that their IT investments are not duplicative as part of their annual budget and IT investment submissions.
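The fiscal year 2014 spending figures above can be cross-checked with a short, purely illustrative sketch (the dollar amounts are the rounded billions cited in this statement; the script itself is not part of any GAO methodology):

```python
# Illustrative cross-check of the fiscal year 2014 IT spending figures
# cited in this statement (amounts in billions of dollars, rounded).
development_and_acquisition = 17
operations_and_maintenance = 58
reported_total_27_agencies = 75

assert development_and_acquisition + operations_and_maintenance == reported_total_27_agencies

# O&M accounts for the bulk of the 27-agency portfolio.
om_share = operations_and_maintenance / reported_total_27_agencies
print(f"O&M share of the $75 billion: {om_share:.0%}")
```

As the share calculation makes plain, roughly three-quarters of the 27-agency total goes to running existing systems rather than building new capability.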
OMB generally agreed with this recommendation and has since taken action to implement it. Specifically, in March 2012, OMB issued a memorandum to federal agencies regarding its PortfolioStat initiative, which is discussed in more detail in the following section. Further, over the past several years, we have reported that overlap and fragmentation among government programs or activities could be harbingers of unnecessary duplication. Thus, the reduction or elimination of duplication, overlap, or fragmentation could potentially save billions of tax dollars annually and help agencies provide more efficient and effective services. OMB Has Launched Major Initiatives for Overseeing Investments OMB has implemented a series of initiatives to improve the oversight of underperforming investments, more effectively manage IT, and address duplicative investments. These efforts include the following: IT Dashboard. Given the importance of transparency, oversight, and management of the government’s IT investments, in June 2009, OMB established a public website, referred to as the IT Dashboard, that provides detailed information on 759 major IT investments at 27 federal agencies, including ratings of their performance against cost and schedule targets. The public dissemination of this information is intended to allow OMB; other oversight bodies, including Congress; and the general public to hold agencies accountable for results and performance. Among other things, agencies are to submit Chief Information Officer (CIO) ratings, which, according to OMB’s instructions, should reflect the level of risk facing an investment on a scale from 1 (high risk) to 5 (low risk) relative to that investment’s ability to accomplish its goals. Ultimately, CIO ratings are assigned colors for presentation on the Dashboard, according to the five-point rating scale, as illustrated in table 1. 
As of June 2014, according to the IT Dashboard, 183 of the federal government’s 759 major IT investments—totaling $10 billion—were in need of management attention (rated “yellow” to indicate the need for attention or “red” to indicate significant concerns). (See fig. 2.) TechStat reviews. In January 2010, the Federal CIO began leading TechStat sessions—face-to-face meetings to terminate or turn around IT investments that are failing or are not producing results. These meetings involve OMB and agency leadership and are intended to increase accountability and transparency and improve performance. Subsequently, OMB empowered agency CIOs to hold their own TechStat sessions within their respective agencies. According to the former Federal CIO, the efforts of OMB and federal agencies to improve management and oversight of IT investments have resulted in almost $4 billion in savings. Federal Data Center Consolidation Initiative. Concerned about the growing number of federal data centers, in February 2010 the Federal CIO established the Federal Data Center Consolidation Initiative. This initiative’s four high-level goals are to promote the use of “green IT” by reducing the overall energy and real estate footprint of government data centers; reduce the cost of data center hardware, software, and operations; increase the overall IT security posture of the government; and shift IT investments to more efficient computing platforms and technologies. OMB believes that this initiative has the potential to provide about $3 billion in savings by the end of 2015. IT Reform Plan. In December 2010, OMB released its 25-point plan to reform federal IT. This document established an ambitious plan for achieving operational efficiencies and effectively managing large-scale IT programs.
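The Dashboard counts quoted in this statement are internally consistent: the 147 medium-risk and 36 moderately high- or high-risk investments reported as of June 2014 account exactly for the 183 flagged for management attention. A minimal tally (illustrative only; the bucket labels paraphrase the testimony) confirms the arithmetic:

```python
# CIO risk-rating counts on the IT Dashboard as of June 2014,
# as cited in this statement (bucket labels are paraphrased).
dashboard_counts = {
    "low or moderately low risk": 576,
    "medium risk (yellow)": 147,
    "moderately high or high risk (red)": 36,
}

assert sum(dashboard_counts.values()) == 759  # all major IT investments

# "In need of management attention" covers the yellow and red buckets.
needing_attention = (dashboard_counts["medium risk (yellow)"]
                     + dashboard_counts["moderately high or high risk (red)"])
assert needing_attention == 183
print(f"{needing_attention} of 759 major investments rated yellow or red")
```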
In particular, as part of an effort to reduce the risk associated with IT acquisitions, the plan calls for federal IT programs to deploy capabilities or functionality in release cycles no longer than 12 months, and ideally, less than 6 months. The plan also identifies key actions that can help agencies implement this incremental development guidance, such as working with Congress to develop IT budget models that align with incremental development and issuing contracting guidance and templates to support incremental development. PortfolioStat. In order to eliminate duplication, move to shared services, and improve portfolio management processes, in March 2012, OMB launched the PortfolioStat initiative. Specifically, PortfolioStat requires agencies to conduct an annual agency-wide IT portfolio review to, among other things, reduce commodity IT spending and demonstrate how their IT investments align with the agency’s mission and business functions. PortfolioStat is designed to assist agencies in assessing the current maturity of their IT investment management process, making decisions on eliminating duplicative investments, and moving to shared solutions in order to maximize the return on IT investments across the portfolio. OMB believes that the PortfolioStat effort has the potential to save the government $2.5 billion over the next 3 years by, for example, consolidating duplicative systems. Opportunities Exist to Improve Acquisition and Management of IT Investments Given the magnitude of the federal government’s annual IT budget, which is expected to be more than $82 billion in fiscal year 2014, it is important that agencies leverage all available opportunities to ensure that their IT investments are acquired in the most effective manner possible. To do so, agencies can rely on IT acquisition best practices, incremental development, and initiatives such as OMB’s IT Dashboard and OMB-mandated TechStat sessions.
Additionally, agencies can save billions of dollars by continuing to consolidate federal data centers and by eliminating duplicative investments through OMB’s PortfolioStat initiative. Best Practices Are Intended to Help Ensure Successful Major Acquisitions In 2011, we identified seven successful acquisitions and nine common factors critical to their success and noted that (1) the factors support OMB’s objective of improving the management of large-scale IT acquisitions across the federal government and (2) wide dissemination of these factors could complement OMB’s efforts. Specifically, we reported that federal agency officials identified seven successful acquisitions, in that they best achieved their respective cost, schedule, scope, and performance goals. Notably, all of these were smaller increments, phases, or releases of larger projects. The common factors critical to the success of three or more of the seven acquisitions are generally consistent with those developed by private industry and are identified in table 2. IT Dashboard Can Improve the Transparency into and Oversight of Major IT Investments The IT Dashboard serves an important role in allowing OMB and other oversight bodies to hold agencies accountable for results and performance. However, we have issued a series of reports highlighting deficiencies with the accuracy and reliability of the data reported on the Dashboard. For example, we reported in October 2012 that Defense had not rated any of its investments as either high or moderately high risk and that, in selected cases, these ratings did not appropriately reflect significant cost, schedule, and performance issues reported by us and others.
We recommended that Defense ensure that its CIO ratings reflect available investment performance assessments and its risk management guidance. Defense concurred and has revised its process to address these concerns. Further, while we reported in 2011 that the accuracy of Dashboard cost and schedule data had improved over time, more recently, in December 2013, we found that agencies had removed investments from the Dashboard by reclassifying their investments—representing a troubling trend toward decreased transparency and accountability. Specifically, the Department of Energy reclassified several of its supercomputer investments from IT to facilities, and the Department of Commerce decided to reclassify its satellite ground system investments. Additionally, as of December 2013, the public version of the Dashboard was not updated for 15 of the previous 24 months because OMB does not revise it while the President’s budget request is being prepared. We also found that, while agencies experienced several issues with reporting the risk of their investments, such as technical problems and delayed updates to the Dashboard, the CIO ratings were mostly or completely consistent with investment risk at seven of the eight selected agencies. Additionally, the agencies had already addressed several of the discrepancies that we identified. The final agency, the Department of Veterans Affairs (VA), had not updated 7 of its 10 selected investments because it elected to build, rather than buy, the capability to update the Dashboard automatically; it has since resumed updating all investments. To their credit, agencies’ continued attention to reporting the risk of their major IT investments supports the Dashboard’s goal of providing transparency and oversight of federal IT investments.
Nevertheless, the rating issues that we identified with performance reporting and annual baselining, some of which are now corrected, serve to highlight the need for agencies’ continued attention to the timeliness and accuracy of submitted information in order to allow the Dashboard to continue to fulfill its stated purpose. We recommended that agencies appropriately categorize IT investments and that OMB make Dashboard information available independent of the budget process. OMB neither agreed nor disagreed with these recommendations. Six agencies generally agreed with the report or had no comments and two others did not agree, believing their categorizations were appropriate. We continue to believe that our recommendations are valid. Agencies Need to Establish and Implement Incremental Development Policies to Better Achieve Cost, Schedule, and Performance Goals for IT Investments Incremental development can help agencies to effectively manage IT acquisitions and, as such, OMB has recently placed a renewed emphasis on it. In particular, in 2010 OMB called for IT investments to deliver functionality every 12 months, and since 2012 has required investments to deliver functionality every 6 months. However, as discussed in our recent report, most selected agencies had not effectively established and implemented incremental development approaches. Specifically, although all five agencies in our review—the Departments of Defense, Health and Human Services (HHS), Homeland Security (DHS), Transportation (Transportation), and VA—had established policies that address incremental development, the policies usually did not fully address three key components we identified for implementing OMB’s guidance. Table 3 provides an assessment of each agency’s policies against the three key components of an incremental development policy. 
Among other things, agencies cited the following reasons that contributed to these weaknesses: (1) OMB’s guidance was not feasible because not all types of investments should deliver functionality in 6 months and (2) the guidance did not identify what agencies’ policies are to include or time frames for completion. We agreed that these concerns have merit. Additionally, the weaknesses in agency policies enabled inconsistent implementation of incremental development approaches. Specifically, almost three-quarters of the selected investments we reviewed did not plan to deliver functionality every 6 months and less than half planned to deliver functionality in 12-month cycles. Table 4 shows how many of the selected investments at each agency planned on delivering functionality every 6 and 12 months during fiscal years 2013 and 2014. Considering agencies’ concerns about delivering functionality every 6 months and given that so few are planning to deliver functionality in that time frame, our report noted that delivering functionality every 12 months, consistent with OMB’s IT Reform Plan, would be an appropriate starting point and a substantial improvement. Until OMB issues realistic and clear guidance and agencies update their policies to reflect this guidance, agencies may not consistently adopt incremental development approaches, and IT expenditures will continue to produce disappointing results—including sizable cost overruns and schedule slippages and questionable progress in meeting mission goals and outcomes. We recommended that OMB develop and issue realistic and clear guidance on incremental development, and that Defense, HHS, DHS, and Transportation update and implement their incremental development policies, once OMB’s guidance is made available. OMB stated that it agreed with our recommendation to update and issue incremental development guidance, but did not agree that its current guidance is not realistic. 
However, slightly more than one-fourth of selected investments planned to deliver functionality every 6 months—and less than one-half planned to do so every 12 months. Additionally, there were three types of investments for which it may not always be practical or necessary to expect functionality to be delivered in 6-month cycles. Thus, we continued to believe that delivering functionality every 6 months is not an appropriate requirement for all agencies and that requiring the delivery of functionality every 12 months, consistent with OMB’s IT Reform Plan, is a more appropriate starting point. We therefore maintained that OMB should require projects associated with major IT investments to deliver functionality at least every 12 months. Four agencies—Defense, HHS, DHS, and VA—generally agreed with the report or had no comments, and one agency—Transportation—did not agree that its recommendation should be dependent on OMB first taking action. Specifically, the department explained that relying on another agency to concur with one of our recommendations before Transportation can take action leaves the department with the potential challenge of a recommendation that cannot be implemented. However, as previously stated, OMB agreed with our recommendation to update and issue incremental guidance, meaning that OMB committed to taking the actions necessary to enable Transportation to begin addressing our recommendation. Accordingly, we continued to believe that our recommendations were warranted and can be implemented. TechStat Reviews Can Help Highlight and Evaluate Poorly Performing Investments TechStat reviews were initiated by OMB to enable the federal government to turn around, halt, or terminate IT projects that are failing or are not producing results.
In 2013, we reported that OMB and selected agencies had held multiple TechStats, but that additional OMB oversight was needed to ensure that these meetings were having the appropriate impact on underperforming projects and that resulting cost savings were valid. Specifically, we determined that, as of April 2013, OMB reported conducting 79 TechStats, which focused on 55 investments at 23 federal agencies. Further, four selected agencies—the Departments of Agriculture, Commerce, HHS, and DHS—conducted 37 TechStats covering 28 investments. About 70 percent of the OMB-led and 76 percent of the agency-led TechStats were held on major investments considered medium to high risk at the time of the session. However, the number of TechStats held on at-risk investments was relatively small compared to the current number of medium- and high-risk major IT investments. Specifically, the OMB-led TechStats represented roughly 18.5 percent of the investments across the government that had a medium- or high-risk CIO rating. For the four selected agencies, the number of TechStats represented about 33 percent of the investments that have a medium- or high-risk CIO rating. We concluded that, until OMB and agencies develop plans to address these weaknesses, the investments would likely remain at risk. In addition, we reported that OMB and selected agencies had tracked and reported positive results from TechStats, with most resulting in improved governance. Agencies also reported projects with accelerated delivery, reduced scope, or termination. We also found that OMB reported in 2011 that federal agencies achieved almost $4 billion in life-cycle cost savings as a result of TechStat sessions. However, we were unable to validate OMB’s reported results because OMB did not provide artifacts showing that it ensured the results were valid. Among other things, we recommended that OMB require agencies to report on how they validated the outcomes. OMB generally agreed with this recommendation.
Continued Oversight Needed to Consolidate Federal Data Centers and Achieve Cost Savings In an effort to consolidate the growing number of federal data centers, in 2010, OMB launched a consolidation initiative intended to close 40 percent of government data centers by 2015, and, in doing so, save $3 billion. Since 2011, we have issued a series of reports on the efforts of agencies to consolidate their data centers. For example, in July 2011 and July 2012, we reported that agencies had developed plans to consolidate data centers; however, these plans were incomplete and did not include best practices. In addition, although we reported that agencies had made progress on their data center closures, OMB had not determined initiative-wide cost savings, and oversight of the initiative was not being performed in all key areas. Among other things, we recommended that OMB track and report on key performance measures, such as cost savings to date, and improve the execution of important oversight responsibilities. We also recommended that agencies complete inventories and plans. OMB agreed with these two recommendations, and most agencies agreed with our recommendations to them. Additionally, as part of ongoing follow-up work, we have determined that while agencies had closed data centers, the number of federal data centers was significantly higher than previously estimated by OMB. Specifically, as of May 2013, agencies had reported closing 484 data centers by the end of April 2013 and were planning to close an additional 571 data centers—for a total of 1,055—by September 2014. However, as of July 2013, 22 of the 24 agencies participating in the initiative had collectively reported 6,836 data centers in their inventories—approximately 3,700 data centers more than OMB’s previous estimate from December 2011. This dramatic increase in the count of data centers highlights the need for continued oversight of agencies’ consolidation efforts.
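The closure and inventory counts above can be tallied in a brief sketch (figures are those cited in this statement; the "approximately 3,700" increase is treated as exact here only for illustration):

```python
# Data center consolidation counts cited in this statement.
closed_by_april_2013 = 484
planned_additional_closures = 571
# Total planned closures by September 2014.
assert closed_by_april_2013 + planned_additional_closures == 1055

# The July 2013 inventory of 6,836 centers exceeded OMB's December 2011
# estimate by roughly 3,700, implying a prior estimate near 3,100.
reported_inventory_july_2013 = 6836
approximate_increase = 3700  # "approximately," per the testimony
implied_prior_estimate = reported_inventory_july_2013 - approximate_increase
print(f"Implied December 2011 estimate: about {implied_prior_estimate:,} data centers")
```

In other words, even after more than a thousand planned closures, the known inventory is roughly double what OMB believed the entire federal footprint to be in 2011.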
We have ongoing work looking at OMB’s data center consolidation initiative, including evaluating the extent to which agencies have achieved planned cost savings through their consolidation efforts, identifying agencies’ notable consolidation successes and challenges in achieving cost savings, and evaluating the extent to which data center optimization metrics have been established. Agencies’ PortfolioStat Efforts Have the Potential to Save Billions of Dollars OMB launched the PortfolioStat initiative in March 2012, which required 26 executive agencies to, among other things, reduce commodity IT spending and demonstrate how their IT investments align with the agencies’ mission and business functions. In March 2013, OMB issued a memorandum commencing the second iteration of its PortfolioStat initiative and strengthening IT portfolio management. In November 2013, we reported on agencies’ efforts to complete key required PortfolioStat actions and make portfolio improvements. We noted that all 26 agencies that were required to implement the PortfolioStat initiative took actions to address OMB’s requirements. However, there were shortcomings in their implementation of selected requirements, such as addressing all required elements of an action plan to consolidate commodity IT and migrating two commodity areas to a shared service by the end of 2012. Further, we found that several agencies had weaknesses in selected areas, such as the CIO’s authority to review and approve the entire portfolio. While OMB had issued guidance and required agencies to report on actions taken to implement CIO authorities, it was not sufficient to address the issue. For example, although HHS reported having a formal memo in place outlining the CIO’s authority and ability to review the entire IT portfolio, it also noted that the CIO had limited influence and ability to recommend changes to it. 
Similarly, the Office of Personnel Management reported that the CIO advises the Director, who approves the IT portfolio, but this role was not explicitly defined. As a result of OMB’s insufficient guidance, agencies were hindered in addressing certain responsibilities set out in the Clinger-Cohen Act of 1996, which established the position of CIO to advise and assist agency heads in managing IT investments. We also observed that OMB’s estimate of about 100 consolidation opportunities and a potential $2.5 billion in savings from the PortfolioStat initiative was understated because, among other things, it did not include estimates from Defense and the Department of Justice. Our analysis, which included these estimates, showed that collectively the 26 agencies reported about 200 opportunities and at least $5.8 billion in potential savings through fiscal year 2015—at least $3.3 billion more than the amount initially reported by OMB. We made more than 50 recommendations to improve agencies’ implementation of PortfolioStat requirements. We also recommended that OMB require agencies to fully disclose limitations with respect to CIO authority. OMB partially agreed with our recommendations, and responses from 20 of the agencies commenting on the report varied. Last month, we also reported on OMB’s and agencies’ policies and management of software licenses—one PortfolioStat focus area. We found that OMB’s PortfolioStat policy did not guide agencies in developing comprehensive license management policies: of the 24 major federal agencies, 2 had comprehensive policies for managing enterprise software license agreements, 18 had policies that were not comprehensive, and 4 had not developed any.
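The gap between the two PortfolioStat savings estimates discussed above reduces to simple arithmetic, sketched here for illustration (all figures in billions, as cited in this statement; GAO's $5.8 billion is a floor, so the difference is "at least" $3.3 billion):

```python
# PortfolioStat savings estimates, in billions of dollars.
omb_initial_estimate = 2.5   # OMB's projection, excluding Defense and Justice
gao_analysis_floor = 5.8     # GAO's analysis across all 26 agencies

understatement = gao_analysis_floor - omb_initial_estimate
assert round(understatement, 1) == 3.3
print(f"GAO's total exceeds OMB's estimate by at least ${understatement:.1f} billion")
```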
The weaknesses in agencies’ policies were due, in part, to the lack of priority given to establishing software license management practices—such as whether agencies employed a centralized approach to software license management and established a comprehensive inventory of their software licenses—and a lack of direction from OMB. Table 5 lists the leading practices and the number of agencies that had fully, partially, or not implemented them. Additionally, the inadequate implementation of leading practices in software license management, such as centralized management and a comprehensive inventory, was partially due to weaknesses in agencies’ policies. As a result, we noted that agencies’ oversight of software license spending was limited or lacking, and they may miss out on savings. The potential savings could be significant considering that, in fiscal year 2012, DHS reported saving approximately $181 million by consolidating its enterprise license agreements. We also stated that agencies lacked comprehensive software license inventories that were regularly tracked and maintained. Of the 24 agencies, 2 had a comprehensive inventory of software licenses; 20 had some form of an inventory; and 2 did not have any inventory of purchased software licenses. We recommended that OMB issue a directive to help guide agencies in managing licenses and made more than 130 recommendations to the 24 agencies to improve their policies and practices for managing licenses. OMB disagreed with the need for a directive. However, until this gap in guidance is addressed, agencies will likely continue to lack visibility into what needs to be managed and be unable to take full advantage of OMB’s tools to drive license efficiency and utilization. Most agencies generally agreed with the recommendations or had no comments.
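The policy and inventory distributions reported above both cover the same 24 major agencies; a quick tally (illustrative only; category labels paraphrase the testimony) confirms that the counts sum correctly:

```python
# Software license management status across the 24 major agencies,
# per the counts cited in this statement (labels paraphrased).
license_policies = {"comprehensive": 2, "not comprehensive": 18, "none": 4}
license_inventories = {"comprehensive": 2, "some form": 20, "none": 2}

# Each distribution must account for all 24 agencies.
assert sum(license_policies.values()) == 24
assert sum(license_inventories.values()) == 24

# Comprehensive coverage is rare on both dimensions.
print("Agencies with a comprehensive policy:", license_policies["comprehensive"])
print("Agencies with a comprehensive inventory:", license_inventories["comprehensive"])
```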
We have ongoing work looking at the second iteration of OMB’s PortfolioStat initiative, including identifying action items and associated time frames from joint OMB-agency PortfolioStat meetings, determining agencies’ progress in addressing these action items, and evaluating the extent to which agencies have realized planned savings. In summary, OMB’s and agencies’ recent efforts have resulted in greater transparency and oversight of federal spending, but continued leadership and attention are necessary to build on the progress that has been made. The expanded use of the common factors critical to the successful management of large-scale IT acquisitions should result in more effective delivery of mission-critical systems. Additionally, federal agencies need to continue to improve the accuracy and availability of information on the Dashboard to provide greater transparency and draw even more attention to the billions of dollars invested in troubled projects. Further, agencies need to implement incremental development approaches in order to increase the likelihood that major IT investments meet their cost, schedule, and performance goals. Additionally, agencies should conduct additional TechStat reviews to focus management attention on troubled projects and establish clear action items to turn the projects around or terminate them. The federal government can also build on agencies’ progress in closing data centers and eliminating duplicative IT investments. With the possibility of over $5.8 billion in savings from the data center consolidation and PortfolioStat initiatives, agencies should continue to identify consolidation opportunities in both data centers and commodity IT. In addition, better support for the estimates of cost savings associated with the opportunities identified would increase the likelihood that these savings will be achieved.
Finally, until OMB and the agencies focus on improving policies and processes governing software licenses, they will likely miss opportunities to reduce costs. Chairman Tester, Ranking Member Portman, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. GAO Contact and Staff Acknowledgments If you or your staffs have any questions about this testimony, please contact me at (202) 512-9286 or at pownerd@gao.gov. Individuals who made key contributions to this testimony are Dave Hinchman (Assistant Director), Rebecca Eyler, and Kevin Walsh. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The federal government reportedly plans to spend at least $82 billion on IT in fiscal year 2014. Given the scale of such planned outlays and the criticality of many of these systems to the health, economy, and security of the nation, it is important that OMB and federal agencies provide appropriate oversight of and transparency into these programs and avoid duplicative investments whenever possible, to ensure the most efficient use of resources. GAO has previously reported and testified that federal IT projects too frequently fail and incur cost overruns and schedule slippages while contributing little to mission-related outcomes. Numerous best practices and administration initiatives are available that can help agencies improve their oversight and management of IT investments. GAO is testifying today on the results and recommendations from selected reports that focused on how federal IT reform efforts could be improved by more effective IT acquisition and more efficient management of existing IT systems. GAO has issued a number of reports on the federal government's efforts to efficiently acquire and manage information technology (IT). While the Office of Management and Budget (OMB) and agencies have taken steps to improve federal IT through a number of initiatives, additional actions are needed. For example, OMB's IT Dashboard provides information, including ratings of risk, on 759 major investments at 27 federal agencies. As of June 2014, according to the Dashboard, 576 investments were low or moderately low risk, 147 were medium risk, and 36 were moderately high or high risk. GAO has issued a series of reports on Dashboard accuracy and identified issues with the accuracy and reliability of cost and schedule data. Furthermore, a recent GAO report found that agencies had removed major investments from the Dashboard, representing a troubling trend toward decreased transparency. 
GAO also reported that, as of December 2013, the public version of the Dashboard was not updated for 15 of the previous 24 months. GAO made recommendations to ensure that the Dashboard includes all major IT investments and to increase its availability. Agencies generally agreed with the report or had no comments. An additional key reform initiated by OMB emphasizes incremental development in order to reduce investment risk. In 2010, OMB called for agency investments to deliver functionality every 12 months, and since 2012 it has required investments to deliver functionality every 6 months. However, GAO recently reported that almost three-quarters of the investments reviewed did not plan to deliver capabilities every 6 months and fewer than half planned to deliver capabilities in 12-month cycles. GAO recommended that OMB develop and issue clearer guidance on incremental development and that selected agencies update and implement their associated policies. Most agencies agreed with GAO's recommendations or had no comment; GAO continues to believe that its recommendations are valid. To better manage existing IT systems, OMB launched the PortfolioStat initiative, which, among other things, requires agencies to conduct annual reviews of their IT portfolios and make decisions on eliminating duplication. GAO reported that agencies continued to identify duplicative spending as part of PortfolioStat and that this initiative had the potential to save at least $5.8 billion through fiscal year 2015, but that weaknesses existed in agencies' implementation of the initiative, such as limitations in Chief Information Officers' authority. Among other things, GAO made several recommendations to improve agencies' implementation of PortfolioStat requirements. OMB partially agreed with GAO's recommendations, and responses from 20 of the agencies varied. 
GAO also recently reported on software license management—one PortfolioStat focus area—and determined that better management was needed to achieve significant savings government-wide. In particular, 22 of the 24 major federal agencies did not have comprehensive license policies. GAO recommended that OMB issue needed guidance to agencies and made more than 130 recommendations to the agencies to improve their policies and practices for managing licenses. OMB disagreed with the need for guidance; however, without it, agencies' management of software licenses will likely remain weakened. Most agencies generally agreed with the recommendations or had no comments.
Background Insurance Industry Overview Insurers offer several lines, or types, of insurance to consumers and others, including life, health, annuity, and P/C products. The U.S. life/health and P/C industries reported approximately $583 billion and $481 billion of aggregate net written premiums in 2013, respectively. These insurers have two primary sources of revenue: premiums (from selling insurance products) and investment income. Both life and P/C insurers earn income from premiums they collect but, because of differences in potential claims, their investment strategies generally differ. For instance, life insurance companies typically have longer-term liabilities than P/C insurers, so life insurance companies invest more heavily in longer-term assets, such as high-grade corporate bonds with 30-year maturities. P/C insurers, however, tend to have shorter-term liabilities and tend to invest in a mix of lower-risk, conservative investments such as government and municipal bonds, higher-grade corporate bonds, short-term securities, and cash. The United States is the world’s largest insurance market by premium volume. In 2014, the United States had a total of roughly $1.16 trillion in premium volume. FSB has designated three U.S. insurance groups as G-SIIs and, according to our analysis, nine additional U.S.-based insurance groups generally meet the criteria for becoming IAIGs. Table 1 shows the relative sizes of these companies, based on total assets, compared with other U.S. companies with insurance subsidiaries. Insurance Regulation in the United States Insurers in the United States are regulated primarily by state insurance regulators, but FIO, the Federal Reserve, and FSOC also play roles. State insurance regulators: State insurance regulators are responsible for enforcing state insurance laws and regulations. State regulators license agents, review insurance products and premium rates, and examine insurers’ financial solvency and market conduct. 
NAIC is the voluntary association of the heads of insurance departments from the 50 states, the District of Columbia, and five U.S. territories. While NAIC does not regulate insurers, according to NAIC officials, it does provide services designed to make certain interactions between insurers and regulators more efficient. According to NAIC, these services include providing detailed insurance data to help regulators analyze insurance sales and practices; maintaining a range of databases useful to regulators; and coordinating regulatory efforts by providing guidance, model laws and regulations, and information-sharing tools. Generally, a model act or law is meant as a guide for subsequent legislation by states. State legislatures may adopt model acts in whole or in part, they may modify them to fit their needs, or they may opt not to adopt them. FIO: FIO was established by the Dodd-Frank Act. Although FIO is not a regulator or supervisor, it has the statutory authority to represent the United States at IAIS, as appropriate, and to coordinate federal efforts and develop federal policy on prudential aspects of international insurance matters. FIO also monitors certain aspects of the insurance industry, including identifying issues or gaps in the regulation of insurers that could contribute to a systemic crisis in the insurance industry or the U.S. financial system. FSOC: FSOC is authorized to determine that a nonbank financial company shall be subject to Federal Reserve supervision and enhanced prudential standards if FSOC determines that the company’s material financial distress—or the nature, scope, size, scale, concentration, interconnectedness, or mix of its activities— could pose a threat to U.S. financial stability. As of March 2015, FSOC had designated American International Group, Inc. (AIG), General Electric Capital Corporation, Inc., MetLife, Inc., and Prudential Financial, Inc. for Federal Reserve supervision and enhanced prudential standards. 
The Federal Reserve: The Federal Reserve supervises holding companies that may own insurance companies on a consolidated basis if the holding companies are either savings and loan holding companies or nonbank financial companies designated by FSOC for Federal Reserve supervision and enhanced prudential standards. Insurance supervision in the United States is generally at the legal entity level, rather than the holding company or group level, in cases where a company owns one or more insurance companies. That is, state insurance regulators are authorized to supervise individual insurance companies, but lack the legal authority to directly supervise a company that might own an insurer, or to supervise a noninsurance affiliate or any affiliate domiciled and operating outside of the state. However, NAIC officials noted that states that have adopted NAIC’s Insurance Holding Company System Regulatory Act can serve as the group-wide supervisor for an insurance firm that includes noninsurance affiliates, obtain reports and information directly from a noninsurer holding company or affiliate of an insurer, and approve or reject intercompany transactions between the holding company and insurers. State regulators also require insurance companies to maintain specific levels of capital. NAIC’s Risk-Based Capital for Insurers Model Act applies to life and P/C insurance companies. Most U.S. insurance jurisdictions have adopted statutes, regulations, or bulletins that are substantially similar to this model law, according to NAIC, as enactment of this model law is required for a state to be accredited by NAIC. Under this model law, state insurance regulators determine the minimum amount of capital appropriate for reporting entities (i.e., insurers) to support their overall business operations, taking into consideration their size and risk profile. The model law also provides the thresholds for regulatory intervention when an insurer is financially troubled. 
Risk-based capital standards aim to require a company with a higher amount of risk to hold a higher amount of capital. Generally, the risk-based capital formulas focus on risk related to (1) assets held by an insurer, (2) insurance policies written by the insurer, and (3) other factors affecting the insurer. A separate risk-based capital formula exists for each of the primary insurance types, focusing on the material risks common to each type. For example, risk-based capital for life insurers includes interest rate risk, because of the material risk of losses from changes in interest rate levels on the long-term investments that these insurers generally hold. In addition to capital requirements, regulators use other tools to supervise insurers. For example, supervisory colleges facilitate oversight of IAIGs. U.S. state insurance regulators both participate in and convene supervisory colleges. State insurance commissioners may participate in a supervisory college with other regulators charged with supervision of such insurers or their affiliates, including other state, federal, and international regulatory agencies. Insurers operating in multiple jurisdictions may maintain multiple accounting records in order to satisfy domestic regulatory reporting requirements. U.S. insurers that issue publicly traded securities report financial holdings information to the Securities and Exchange Commission using U.S. Generally Accepted Accounting Principles (GAAP). Additionally, U.S. insurers are required to report their financial holdings on an individual legal entity basis to their state of domicile regulators using Statutory Accounting Principles. Some jurisdictions in the European Union may require insurers to report regulatory requirements using International Financial Reporting Standards, or the valuation system used in Europe’s insurance regulatory rules under Solvency II following its implementation. 
Finally, some foreign jurisdictions may require reporting with valuation standards in alignment with those developed by the International Accounting Standards Board. International Bodies with Roles in the Development of International Standards or Insurance Regulation In response to the 2007-2009 financial crisis, the Group of Twenty (G20) forum—representing 19 countries (including the United States) and the European Union—positioned itself as the main international forum for reforming financial regulations. In 2008, the G20 leaders committed to implementing a broad range of reforms designed to strengthen financial markets and regulatory regimes. To implement their reforms, the G20 leaders generally have called on their national authorities—finance ministries, central banks, and regulators—and international bodies, including FSB and standard-setting bodies such as IAIS. Established by the G20 in 2009, FSB is the international body that coordinates the work of national financial authorities and international standard-setting bodies in the interest of global financial stability. According to FSB, it seeks to support the multilateral agenda for strengthening financial systems and the stability of international financial markets. Its mandate includes reviewing the policy development work of the international standard-setting bodies that jurisdictions use to establish rules or policies through, for example, legislation or regulation. FSB has developed a framework intended to reduce the probability and impact of failure of systemically important financial institutions (SIFI). In 2013 and 2014, FSB also worked with IAIS and national authorities to identify and designate nine insurers as G-SIIs, which will be subject to a set of G-SII policy measures developed by IAIS consistent with FSB’s general SIFI framework. 
FSB member institutions include national finance ministries, financial regulatory authorities, and central banks, as well as international standard-setting bodies, such as IAIS. U.S. FSB members include the Federal Reserve, the Securities and Exchange Commission, and the Department of the Treasury (Treasury)—which serve on FSB’s Steering Committee and Plenary, FSB’s decision-making body—and other banking regulators (see fig. 1). IAIS is the international standard-setting body responsible for developing and assisting in the implementation of principles, standards, and other supporting material for the supervision of the insurance sector. Established in 1994, IAIS’s mission is to promote effective and globally consistent supervision of the insurance industry in order to develop and maintain fair, safe, and stable insurance markets, and to contribute to global financial stability. It operates by consensus, and its members include insurance supervisors and regulators from more than 200 jurisdictions in approximately 140 countries. According to NAIC, these members account for 97 percent of the world’s insurance premiums. As noted above, FIO has statutory authority to represent the United States at IAIS. In addition, the Federal Reserve and NAIC are also members. IAIS does not have regulatory power or legal authority over its members, but it influences national and regional regulators by publishing supervisory principles, offering training and support, and advancing the latest developments in international regulation. There are four key IAIS bodies involved in the development of international capital standards—the General Meeting, the Executive Committee, the Technical Committee, and the Financial Stability Committee—and related subcommittees (see fig. 2). 
The General Meeting: The General Meeting comprises all IAIS members (approximately 160 of its roughly 190 members are voting members), who elect members of the Executive Committee and can adopt supervisory and supporting materials. To decide upon an issue, the General Meeting requires either a simple or a two-thirds majority of votes, depending on the issue. For example, with a simple majority the General Meeting can elect members of the Executive Committee, and with a two-thirds majority it can vote to adopt supervisory and supporting material not already adopted by the Executive Committee. The Executive Committee: The Executive Committee has 9 to 24 voting members elected by the General Meeting and up to 4 additional nonvoting members (the chairs of certain committees if they are not already voting members). The Executive Committee is responsible for ensuring that supervisory and supporting material to be adopted by IAIS has been adequately vetted by IAIS members and stakeholders; adopting supervisory and supporting material; appointing all other committee chairs and vice chairs; and ensuring that working structures fulfill the IAIS mission. The Technical Committee: The Technical Committee develops international principles, standards, guidance, and other documents related to insurance supervision. Specifically, the Technical Committee is responsible for setting standards in response to developments in industry structures, financial markets, business practices, and policyholders’ needs; completing, reviewing, and updating the comprehensive set of high-level principles-based supervisory and supporting material; and establishing the Common Framework for the Supervision of Internationally Active Insurance Groups (ComFrame), including the Insurance Capital Standard (ICS). 
Relevant groups reporting to the Technical Committee include the Accounting and Auditing Working Group, the Governance Working Group, the Insurance Groups Working Group, the Resolution Working Group, and the Field Testing Working Group. The Financial Stability Committee: The Financial Stability Committee works on issues related to financial stability, systemic risk, and supervision and surveillance of industry-wide solvency (macroprudential supervision and surveillance). Specifically, the Financial Stability Committee is responsible for developing and refining an assessment methodology to identify G-SIIs; performing an annual assessment of the G-SII status of insurers and reinsurers; developing, in cooperation with the Technical Committee, policy measures related to heightened prudential standards for G-SIIs; providing guidance to supervisors and firms on other components of the package of enhanced policy measures that apply to G-SIIs; coordinating IAIS activities with FSB and the G20; preparing papers on issues related to financial stability, systemic risk, and macroprudential surveillance as they relate to insurance; and developing tools to enhance macroprudential surveillance and supervision. Relevant groups reporting to the Financial Stability Committee include the G-SII Analysts Working Group, the G-SII Policy Measures Task Force, and the Capital Development Working Group. International Capital Standards Are Not Yet Finalized, and U.S. Implementation Could Be Challenging International Capital Standards Are in Early Stages of Development and Implementation IAIS is developing three international capital standards for insurers. The standards are the Basic Capital Requirements (BCR), the Higher Loss Absorbency (HLA), and the ICS. The three capital standards serve different purposes and are in different stages of development. 1. 
BCR: The BCR is a straightforward, basic, risk-based capital requirement that would apply only to G-SIIs and is intended to be used as a globally comparable foundation for the calculation of the HLA requirement. It has three basic components: an insurance component; a banking component that applies the Basel III leverage ratio to regulated banking entities; and a component for noninsurance activities (financial and material nonfinancial) that are not currently subject to regulatory capital requirements. To set required capital levels, it uses a factor-based approach that applies 15 risk factors to defined segments of traditional life insurance, traditional nonlife insurance, nontraditional insurance, noninsurance, and assets. All holding companies, insurance legal entities, banking legal entities, and any other companies in the group are included in the consolidated capital requirement. In October 2014, IAIS issued and the Financial Stability Board endorsed the finalized BCR. According to IAIS, during 2014 it completed its first round of quantitative field testing, which incorporated the BCR, and beginning in 2015 the BCR was to be reported on a confidential basis to group-wide supervisors and shared with IAIS for purposes of additional refining. Once the HLA is complete, the BCR will serve as a foundation for the HLA. The development of the ICS will be informed by the work on the BCR, and once finalized, the ICS will replace the BCR as the foundation for the HLA. 2. HLA: The HLA requirement is a capital add-on that would apply only to G-SIIs to account for their nontraditional, noninsurance activity, as well as other factors that led the Financial Stability Board to designate them as G-SIIs. The sum of the BCR and HLA would form a consolidated group-wide minimum capital requirement that would be higher than the requirement for firms that are not G-SIIs. In September 2014, IAIS issued a set of principles to guide the development of the HLA. 
For example, one principle states that outcomes should be comparable across jurisdictions. IAIS plans to issue the first HLA consultation document for comment in mid-2015, and plans for implementation to begin in 2019. 3. ICS: The ICS is a risk-based, group-wide capital standard that is intended to apply to all IAIGs and G-SIIs as part of IAIS’s ComFrame. ComFrame is a set of proposed international supervisory requirements focusing on the effective group-wide supervision of IAIGs, as well as related international capital standards. According to IAIS, the ICS is being developed to promote effective and globally consistent supervision of the insurance industry, and should help develop and maintain fair, safe, and stable insurance markets. In March 2015, IAIS announced that the ultimate goal of the ICS is a common methodology that achieves comparable outcomes across jurisdictions. Once finalized and agreed upon, the ICS would be the minimum standard that IAIS members would be encouraged to implement. According to IAIS, supervisory colleges would identify IAIGs by applying ComFrame criteria. IAIS issued a set of ICS principles in September 2014 and the first of three ICS consultation documents in December 2014. As of April 2015, IAIS is incorporating feedback received on the first consultation document and has said that 2015 priorities include developing an example of a standard method for determining the ICS, further considering approaches to valuation, and defining qualifying capital resources. IAIS is scheduled to conduct its second round of quantitative field testing from April through August 2015, which will be the first time that IAIS will field test the proposed ICS. IAIS is staggering the implementation of the three capital standards, with the full package scheduled for implementation beginning in 2019 (see fig. 3). 
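The factor-based approach behind the BCR can be sketched in simplified form: each defined segment's exposure is multiplied by a risk factor, and the products are summed to give required capital. The factors and exposure amounts below are invented for illustration only; they are not IAIS's actual BCR factors, and the real BCR applies 15 factors across more finely defined segments.

```python
# Simplified sketch of a factor-based capital requirement.
# Each segment's exposure is multiplied by a risk factor and the
# products are summed. Factors and exposures are hypothetical,
# not the actual BCR values.

def required_capital(exposures, factors):
    """Sum of risk factor times exposure across all segments."""
    return sum(factors[segment] * amount for segment, amount in exposures.items())

factors = {  # hypothetical risk factors per segment
    "traditional_life": 0.02,
    "traditional_nonlife": 0.05,
    "nontraditional": 0.08,
    "noninsurance": 0.10,
    "assets": 0.03,
}

exposures = {  # hypothetical exposure amounts ($ millions)
    "traditional_life": 10_000,
    "traditional_nonlife": 4_000,
    "nontraditional": 1_000,
    "noninsurance": 500,
    "assets": 20_000,
}

capital = required_capital(exposures, factors)
# 200 + 200 + 80 + 50 + 600 = 1,130 ($ millions) required capital
```

Note how the structure mirrors the concerns quoted later in this section: the heavier factors assigned to nontraditional and noninsurance segments drive the capital add-on logic, and the granularity of the segment definitions determines whether products with different risk profiles are charged appropriately.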
Several Key Aspects of the International Capital Standards Are Largely Unknown Because the capital standards are still in the relatively early stages of development and adoption, several important aspects of the design of the standards are still unknown, as the following examples illustrate: Quantity and quality of capital: Whether the ICS would require U.S. insurers to hold or raise additional capital is largely unknown. Some stakeholders believe that it would be unlikely that the ICS would be higher than what U.S. insurers currently hold. They stated that to maintain desired credit ratings, some U.S. insurers hold more capital than the state risk-based capital standard requires. For example, one credit rating agency said that most insurers sometimes hold approximately 11 times more capital than regulatory requirements mandate. However, it is unknown how the ICS capital requirements would compare with the risk-based capital standard required by state insurance regulators. IAIS has sought stakeholder feedback on the appropriate approach for determining capital standards, which could include a factor-based approach, stress testing approach, or modeling approach. Additionally, it is not yet known whether insurers would be able to include senior debt as qualifying capital, something insurers can currently do in the United States. Some insurers have stated that not being able to include senior debt as capital could potentially result in insurers being considered undercapitalized, requiring them to raise more capital through public offerings or private investment. Finally, some insurers we spoke with raised concerns about group-level capital requirements resulting in a need to hold duplicative capital. For example, one insurer said that foreign jurisdictions would not likely count capital being held elsewhere within a holding company. 
As a result, insurance holding companies might then have to hold capital at the local entity level, as they do now, and also at the group level for the same risks. Valuation approaches: IAIS has yet to determine which methodologies the ICS will use to assess the value of insurers’ assets and liabilities. As of April 2015, IAIS was testing two valuation approaches: a market-adjusted valuation approach in which the values of assets and liabilities are adjusted to current market rates (not the approach used by most U.S. insurers) and a GAAP approach with adjustments. A few U.S. stakeholders stated that using a market-adjusted approach could potentially conflict with the way U.S. GAAP values insurers’ assets and liabilities. According to one insurance association, differences in valuation methods are so substantial that one method may make an insurer appear financially strong while another may make the same insurer appear financially insolvent. Therefore, depending on the valuation methods used, U.S. insurers could be required to raise and hold additional capital. One credit rating agency also said that different countries measure capital and solvency in different ways. This may make it difficult for IAIS to develop a standard that would be accepted internationally. Additionally, one insurance association noted that if the capital standards’ valuation methodologies do not align with U.S. practices, U.S.-based insurers may need to keep an additional set of regulatory accounting records to demonstrate that they are in compliance with both sets of standards. Risk assessment: Some details about how the ICS will assess risk are still unknown. As previously stated, the BCR uses 15 factors associated with insurance activity and assigns capital charges according to their estimated risks. Some insurers said that the BCR is not granular enough in its classification, or segmentation, of insurance products and business activities. 
For example, one insurer noted that some of its products that carry different risk characteristics fit into the same category in the BCR, resulting in the possibility of either under- or over-assessing risk charges. In commenting on the draft ICS, which is intended to replace the BCR, some stakeholders said the categories used in the draft ICS were adequate, but others expressed concern about the categories potentially needing to become more granular. Furthermore, one insurer stated that products can carry different risks in different jurisdictions, exacerbating the difficulty of correctly assessing risk with the international capital standards without becoming excessively granular. Other areas of uncertainty include questions of the appropriate time horizon for the development and implementation of the standards and tiering of capital resources, as well as other technical issues. Several stakeholders said that given the significant uncertainties around the design of the standards, IAIS is not allowing sufficient time to develop the capital standards. Federal Reserve officials noted that IAIS aims to identify and address questions and concerns about the standards through its field testing process. Implementation of International Capital Standards in the United States Could Pose Challenges Questions also remain about how the standards would be implemented in the United States, a process that could pose challenges. For example, U.S.-based G-SIIs would be subject to the BCR and HLA. They have also been designated by FSOC for enhanced supervision, meaning that they would be subject to enhanced prudential standards from the Federal Reserve as well. However, because neither the Federal Reserve’s capital requirements for SIFIs nor the HLA had been finalized as of April 2015, it is difficult to know what, if any, challenges the Federal Reserve might face in concurrently implementing these two sets of standards. 
In addition, one state commissioner noted that it is unclear which regulator would have responsibility for implementing the ICS for the IAIGs. Currently, state regulators supervise individual insurance entities domiciled in their state, but all three of the proposed international capital standards would establish group-level capital requirements. According to NAIC, state regulators would still implement these requirements for insurance groups not supervised by the Federal Reserve. However, a few stakeholders stated that it is unclear who the group-wide regulator would be for these insurers. Furthermore, some stakeholders remarked that having group-level capital standards for U.S. insurers also raises questions about fungibility—the extent to which capital could be moved between entities. Insurers in the United States are not required to move capital among regulated insurance entities, and some stakeholders noted that the extent to which the international capital standards would require group-wide capital to be fungible among legal entities is uncertain. Making capital fungible would raise questions about where group capital would reside (i.e., within the legal entities or in a holding company), and when regulators could move it among entities or across jurisdictions from financially strong entities to aid failing ones. Without a single regulator for the holding company, questions remain about how such decisions would be made. Furthermore, there is some uncertainty about the legal mechanisms that would be used to implement the standards in the United States. NAIC officials said implementation of new capital standards in the United States would likely require individual states to pass legislation that incorporates the standards. They also noted that NAIC could encourage states to adopt such legislation by developing a model law and including it in NAIC’s accreditation requirements. 
However, one state regulator noted that getting all states to adopt and implement it might take time. Representatives of state legislators said that they would work with insurance regulators to determine which requirements were appropriate for their states. Alternatively, NAIC noted the possibility of a federal mandate through Congress to implement the standards. Despite the uncertainties about how the standards would be implemented, several stakeholders we spoke with said that the United States would likely implement the standards once IAIS has completed them. Several stakeholders noted the potential negative consequences of not implementing the standards, such as international market access issues, including regulatory barriers to conducting business abroad. Two stakeholders also noted the possibility of the United States receiving a poor review from the International Monetary Fund’s Financial Sector Assessment Program or political pressure that could have trade implications between the United States and foreign jurisdictions. Effect of International Group-Level Capital Standards Is Unknown Because the international capital standards are still largely in the early stages of development, it is difficult to determine their potential effects on U.S. insurers. IAIS plans to engage in field testing before finalizing the capital standards but has not yet completed any prospective quantitative or cost-benefit analyses to understand the potential effects on insurers, consumers, or the broader economy. Many stakeholders we spoke with thought it was too early to discuss the likely effects of the standards, but others identified possible effects, including improved comparability, increased costs, and competitive disadvantages. IAIS Intends Capital Standards to Improve Comparability but Some Stakeholders Expressed Concerns IAIS has stated that having a common means to measure capital adequacy on a group-wide consolidated basis could benefit both regulators and insurers.
A single international standard would allow regulators to compare IAIGs across jurisdictions. It would also increase mutual understanding among regulators who regulate at the holding company level as well as regulators of nondomestic companies operating in their country, and give them more confidence in cross-border analyses of these companies. Similarly, Federal Reserve officials we spoke with said that international capital standards would limit the possibility for regulatory arbitrage, or “jurisdiction shopping,” where companies choose where they are domiciled based on more favorable standards. One large, internationally active company we spoke with also said that an international capital standard that was comparable across jurisdictions would help regulators address failing companies. International regulators need to agree on a level of capital adequacy and to have comparable means of measuring capital in order to move it across borders to help a failing company. A common standard could make this process easier. For insurers conducting business internationally, a single capital standard could reduce the complexities of complying with many different standards. However, some industry representatives we spoke with said that while a single capital standard could be beneficial to companies that are operating in multiple jurisdictions, they are skeptical that the IAIS standards would create uniformity. Specifically, one insurer noted that IAIS standards would likely not replace local standards, but instead be an additional layer of standards. An industry stakeholder also noted that while it is important for regulators to be able to assess risks faced by IAIGs, it was unclear whether a single capital standard could result in true comparability across national boundaries or different products. 
As an example, the risk associated with auto insurance in a nonlitigious country that has national health care, where policyholders are less likely to be sued for medical damages, is different from the risk associated with auto insurance in a litigious country that does not have national health care, where policyholders are more likely to be sued for such costs. A U.S. insurance industry group commented that applying the same capital standard to companies from different regulatory environments with different economic and political goals would not produce comparable conclusions about capital or solvency. It noted, for example, that the U.S. regulatory system was based on an economic and political model that supported relatively easy entry into and exit from the market, with policyholder protection as the primary goal. In contrast, it said that capital standards being developed by the European Union may place more emphasis on protecting creditors and investors. Some stakeholders have suggested that IAIS should focus on comparability of outcomes rather than developing a single capital formula. For example, several stakeholders who commented on the draft ICS suggested a stress testing approach that could help identify common risks and better ensure that all IAIGs could survive certain prescribed stress scenarios without prescribing a specific capital requirement. Stakeholders Expressed Concern That Standards Could Result in Increased Costs and Competitive Disadvantages In the face of limited empirical work conducted to date and the lack of details on final capital requirements, there is considerable uncertainty about the effect on insurance companies, the industry, consumers, and the larger macroeconomy. However, if insurance companies were forced to raise capital or hold more capital, it could carry costs.
Consistent with this, several stakeholders we spoke with said that if the capital standards required insurers to hold more capital, they could result in higher costs for insurers, and therefore higher prices for consumers. In addition, requiring insurers to hold more capital could be costly for all affected insurers but could also disproportionately affect some types of insurers. For example, two insurers we spoke with said that mutual insurance companies cannot raise equity easily because they do not issue stock, are owned by policyholders rather than shareholders, and use senior debt as part of their qualifying capital. If the ICS does not recognize such unique features of mutual insurance companies, then these companies could be significantly disadvantaged in the marketplace by lower qualifying capital under the ICS. These companies would likely have to increase capital by taking actions such as selling assets, increasing retained earnings, and shifting their portfolio toward lower-risk assets. Some insurers we spoke with also said that increased capital requirements could result in opportunity costs. That is, having to hold more capital could result in these resources not being used to generate higher financial returns, invest in product innovation, or expand into other markets. Many stakeholders we spoke with also said that the standards could result in increased compliance costs. In particular, depending on the accounting method that is used, the standards could result in insurers having to maintain an additional set of accounting records for recordkeeping purposes. Many insurers and state regulators we spoke with said that any increase in costs for insurers could translate into higher prices and fewer product offerings for consumers.
Specifically, several insurers and state regulators noted that if the IAIS standards imposed higher costs on IAIGs through higher capital requirements or compliance costs, IAIGs would have to raise product prices to offset these additional costs. As a result, the companies would be less competitive with both large domestic insurers and smaller insurers, neither of which would be affected by the international capital standards. However, the extent to which costs might be passed on to consumers would depend on the degree of competition. NAIC suggests that U.S. markets are quite competitive, which may limit the degree to which insurance companies can raise prices on insurance products. A few stakeholders said that as an alternative or in addition to raising prices, IAIGs may choose to discontinue some products. As an example, some stakeholders said this would most likely affect longer-term insurance products such as annuities. Because long-term liabilities could be considered riskier than short-term liabilities, they can carry higher capital charges. Consumers could have fewer product choices and the markets for some products could become less competitive. One insurer we spoke with stated that the uncertainty regarding the international capital standards was already affecting its decisions on products offered and pricing. Further, two stakeholders told us that if insurers raise prices or discontinue products in response to international capital standards, they may also change the types of assets they hold to match their liabilities. Specifically, if the capital standards result in disincentives to offer certain long-term products, insurers may no longer purchase the same amount or type of long-term assets to match the liabilities of these products. This, in turn, could affect the markets for those long-term assets.
For example, insurers are significant investors in some long-term assets, such as corporate and municipal bonds, and a reduction in insurers’ purchases of these bonds could potentially result in disruptions to these markets. Another important factor is how shareholders and other investors may react to additional capital holdings and the possibility of lower return on equity. Some stakeholders said that holding additional capital could benefit insurers if investors viewed the more stringent regulations as a sign of increased financial resilience and reliability. Credit rating agencies we spoke to said that while being subject to the standards would not necessarily increase insurers’ ratings, it could be a positive signal to investors. However, some stakeholders expressed concern that increased costs resulting from higher capital requirements could reduce shareholder returns, making insurers a less attractive investment opportunity. Companies can also adjust to heightened capital requirements by shifting to lower-risk assets; shrinking the volume of business; merging with other insurance companies to diversify business lines; or relying more on reinsurance. Views Differ on the Need for the International Group-Level Standards Stakeholders Disagreed on the Systemic Risk Posed by Insurance Activities In its support for the development of an international capital standard for insurers, the Financial Stability Board has cited supporting financial stability as a reason the standards are needed. However, views differed on whether and to what extent insurance companies pose a risk to financial stability—and therefore whether the standards were actually needed. According to many of the stakeholders we spoke with, traditional insurance activities would likely not pose systemic risk or threaten financial stability, but engaging in nontraditional, noninsurance activities could create such risks.
These views were supported by a number of the studies we reviewed. For example, IAIS has noted that the bulk of traditional insurance risks are idiosyncratic—that is, they tend not to be correlated with each other or with the economic business cycle and financial market developments, which decreases their likelihood of contributing to systemic risk. However, IAIS also stated that substantial nontraditional and noninsurance activities have the potential to make insurers more prone to posing systemic risk, and may contribute to making insurance groups systemically important. While there is no single agreed-upon definition of nontraditional, noninsurance activities, some research has described these activities as more bank-like in nature and provided examples that included third-party asset management, investment banking, and hedge fund and credit default swap activities. Insurance activities can also have nontraditional features that may increase their systemic risk. For example, if variable annuities contain guaranteed returns, attempting to pay guaranteed amounts could result in increased asset sales by an insurer and exacerbate already distressed market conditions. One study we reviewed noted that capital infusions were needed for several large insurers during the last financial crisis because some insurers’ investment-oriented life insurance policies had minimum guarantees and other contract features. We found that the three U.S.-based G-SIIs all offered variable annuities with guaranteed benefits and guaranteed investment contracts. Also, four of the seven additional U.S.-based companies that generally met the criteria for being an IAIG offered variable annuities with guaranteed benefits, and two offered guaranteed investment contracts. Nontraditional, noninsurance activities may also increase the likelihood that an insurance company will contribute to systemic risk because they can increase the company’s interconnectedness with the broader financial sector.
Most of the insurance industry representatives we spoke with, as well as some of the literature we reviewed, noted that insurers generally did not have interconnections with broader financial markets that would pose systemic risk. However, some stakeholders said that nontraditional, noninsurance activities could increase a company’s interconnections. IAIS has stated that the systemic importance of such activities increases as the activities expand the company’s interconnectedness with noninsurance financial sectors. For example, IAIS noted that through its credit default swaps, securities lending, and other noninsurance activities, AIG was connected with many large commercial banks, investment banks, and other financial institutions. As a result, the U.S. government concluded that without assistance, AIG’s failure could have caused cascading losses throughout the financial system. We have also pointed out the impact of AIG’s activities in prior work. Literature we reviewed also noted that increasing involvement of insurance companies in nontraditional, noninsurance activities was increasing the interactions between insurance and banking, a development that has implications for financial stability. For example, one study concluded that in recent years, insurance companies had increased their involvement in alternative risk transfer instruments, such as insurance-linked securities, and that this activity had increased interconnectedness between insurers and banks. Another study found that interconnectedness between insurers and the financial system has increased over time, largely because of life insurers and insurers specializing in financial guarantees. The extent of systemic risk posed by nontraditional, noninsurance activities likely depends on how an insurance company is managing the risks associated with the activities.
For example, guaranteed investment contracts may create a systemic problem if an insurer is unable to manage liquidity demands created by higher than expected policyholder withdrawals. This could happen if, for example, interest rates rise sharply above what policyholders are receiving on their accounts, motivating them to move their money to the higher yields. However, maximum contribution and withdrawal rates, as well as penalties, can be written into the contracts to mitigate this risk. In addition, insurers may engage in hedging to mitigate this risk. They could, for example, enter into interest rate derivatives contracts, whose values rise when interest rates rise. In its most recent Financial Sector Assessment Program report on the U.S. insurance sector, IMF noted that the insurance industry has made improvements in the management of exposures created by guarantees, including using certain hedging strategies. However, the report also noted that the effectiveness of these strategies during times of market turmoil remains uncertain. In addition to nontraditional, noninsurance activities, some stakeholders have noted that even traditional features of insurance products can lead to systemic risk. For example, while some industry representatives we spoke with said that insurance products were not subject to bank-like runs that could lead to systemic failure, others have said that life insurers could be subject to such runs. In its justification for designating one insurance company for Federal Reserve supervision and enhanced prudential standards, FSOC wrote that the company could experience significant asset liquidations as the result of surrender and withdrawal requests, and that such liquidations could cause significant disruption to key markets. 
Dissenting views on FSOC’s designation of this company noted that there are protections and disincentives that mitigate the risk of these runs on insurance companies, such as the ability of an insurance company to delay payment of early withdrawals and charge surrender fees, as well as the ability of state insurance regulators to manage significant policyholder surrender activity. FSOC wrote that although the company has the contractual ability to defer payouts on withdrawable liabilities and thus to reduce the need for asset liquidations, if this action were to be taken at a time when the company was experiencing material financial distress, it could spread concern about the company. Such concern could exacerbate the company’s material financial distress and result in negative effects for counterparties, policyholders, and the broader industry. In addition, customers could become concerned about access to funds at other insurance companies with similar assets or product profiles, especially in the context of a period of overall stress in the financial services industry and in a weak macroeconomic environment. Other factors that could contribute to systemic risk include the size of the company and the nature of its assets. Many experts and industry representatives we spoke with said that insurers do not pose systemic risk based on size alone, but some have said that size is an important factor. For example, a study analyzing insurers’ contribution to systemic risk during the economic crisis showed that insurers’ size, as measured by total assets and net revenues, was a significant factor. The study noted that failures of large insurers could lead to doubts about the health of other insurers, potentially destabilizing the financial system. While insurers are connected to other counterparties through their assets, a few insurance industry representatives noted that their connections were not sufficiently large to pose systemic risk. 
However, one study we reviewed found that insurers’ assets were distributed across a wide range of financial sectors, including corporate bonds, stocks, government bonds, and commercial mortgages. The study asserts that the failure of an insurance company and the subsequent unwinding of its assets could trigger asset fire sales and pose a threat to the financial system. IAIS has stated that it considers its capital standards to be essential for supporting international financial stability. Specifically, the HLA is designed to address notable risk posed by G-SIIs’ nontraditional, noninsurance activity and interconnectedness. As previously noted, the HLA is a capital add-on applied to the BCR, and while the standard is still in the early stages of development, it is intended to be a capital charge specifically for G-SIIs. In addition, according to IAIS, the ICS will reflect all material risks of IAIGs, including noninsurance risks. While IAIS has not yet specified how noninsurance risk will be accounted for in the ICS, the first ICS consultation draft states that the capital treatment of noninsurance financial activities will be expanded upon in future consultation processes. Also, group-wide capital standards could be used to help noninsurance affiliates of insurance groups in times of financial distress by moving capital from insurance entities to the affiliates. However, as previously noted, uncertainties remain about when and how capital could be moved from one entity to another and how such transfers would be affected by state insurance laws in the United States. IAIS has also stated that the capital standards for G-SIIs would reduce the probability and impact of any failure of these companies and thus reduce the expected systemic impacts of disorderly failure. In addition, according to IAIS, these standards should be disincentives for other insurers to become systemically important and therefore designated as G-SIIs.
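The mechanics described here—a factor-based BCR with a separate HLA add-on for G-SIIs—can be sketched in simplified form. The segment names, risk factors, scalar, HLA percentage, and dollar amounts below are purely illustrative assumptions for exposition; they are not the actual IAIS values, which had not been finalized at the time of this report.

```python
# Hypothetical sketch of a factor-based group capital requirement with a
# scaling factor and a G-SII add-on (HLA). All segments, factors, the scalar,
# and the HLA percentage are illustrative assumptions, NOT actual IAIS values.

def required_capital(exposures, factors, scalar):
    """Factor-based requirement: scalar times the sum of per-segment charges."""
    return scalar * sum(factors[seg] * amount for seg, amount in exposures.items())

def capital_ratio(qualifying_capital, bcr_required, hla_addon=0.0):
    """Qualifying capital divided by required capital (BCR plus any HLA)."""
    return qualifying_capital / (bcr_required + hla_addon)

# Illustrative group with three exposure segments (amounts in $ billions).
exposures = {"traditional_life": 100.0, "non_life": 40.0, "non_insurance": 20.0}
factors = {"traditional_life": 0.04, "non_life": 0.08, "non_insurance": 0.12}

bcr = required_capital(exposures, factors, scalar=1.0)  # 4.0 + 3.2 + 2.4 = 9.6
# A G-SII would carry an extra HLA charge on top of the BCR; assume 25% here.
hla = 0.25 * bcr
ratio = capital_ratio(qualifying_capital=15.0, bcr_required=bcr, hla_addon=hla)
print(round(bcr, 2), round(hla, 2), round(ratio, 2))  # 9.6 2.4 1.25
```

The sketch also illustrates the scaling-factor point raised later in this section: because the scalar multiplies the entire sum of charges, adjusting it moves the overall BCR result up or down without changing any individual risk factor.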
While, according to IAIS, the main objective of the ICS is policyholder protection and financial stability, some industry representatives and state regulators we spoke with said that the current U.S. regulatory system was sufficient for its purpose—protecting policyholders—and that additional international capital standards were not needed. Specifically, once state insurance regulators determine an insurer is having solvency issues, they work with the insurer to resolve the issue, and if necessary can appoint a receiver and attempt to rehabilitate the company. If that fails, the state guaranty funds, which pay for covered claims in the event of an insurance company insolvency, help ensure that policyholders are protected. The funds do not prevent an insurance company from failing, but according to NAIC, they generally help policyholders to receive their claims quickly and help to ensure the stability of the insurance market. Some stakeholders have noted that this focus on protecting policyholders made the U.S. regulatory approach different from other international systems that may be more focused on preventing insurance company failures. These stakeholders said this system is appropriate for the United States and should not be changed by the introduction of international capital standards. For example, in commenting on the ICS consultation draft, officials from one internationally active insurance company noted that the ICS should be focused on requiring that insurance companies have the capital necessary to meet policyholder obligations and not on protecting other creditors. One researcher we spoke with, however, has expressed concern that state guaranty funds would not be sufficient to protect all policyholders if a large insurance company were to fail. In addition, in its most recent Financial Sector Assessment Program report, IMF noted that the overall soundness of the U.S. insurance sector cannot be adequately assessed without group-level capital requirements.
In addition, some insurance industry representatives and state regulators we spoke with said that insurance companies generally fared well during the last economic crisis, suggesting that the current regulatory structure and current capital standards were sufficient for helping insurance companies withstand downturns. Although insurance companies received various sources of assistance during the recent financial crisis, including direct capital support and liquidity support, IAIS has reported that, in general, the insurance business model enabled the majority of insurers to withstand the last financial crisis better than other financial institutions. In our June 2013 report on this issue, we also found that the effects of the financial crisis on insurers and policyholders were generally limited. However, we found that some life insurers that offered variable annuities with guaranteed living benefits, as well as financial and mortgage guaranty insurers, were more affected by their exposures to the distressed equity and mortgage markets. We and others have also pointed to AIG as an example of an insurance group that suffered large losses and threatened broader financial stability, but some state regulators and industry representatives we spoke with noted that AIG’s distress could not have been avoided even by having higher capital requirements in place. They also noted that the supervisory colleges that are now in place would have detected risky activities involving AIG’s insurance companies. Some Stakeholders Said That Tools Other Than Capital Requirements Also Help Reduce Risk Many industry representatives and U.S. federal and state regulators we spoke with discussed the importance of tools other than capital requirements in monitoring and mitigating potential risk posed by insurance companies. FIO, the Federal Reserve, and NAIC described capital requirements as one of many tools available to regulators. 
Other tools that both regulators and industry representatives cited included the following: Supervisory colleges: These joint meetings of all regulators involved in supervising a company can include detailed discussions about a company’s financial data, corporate governance, and enterprise risk management functions. According to NAIC, supervisory colleges, which generally started after the last financial crisis, are intended to facilitate oversight of internationally active insurance companies at the group level. Regulators and industry representatives we spoke with said that while supervisory colleges were a newer practice, they had shown promise in helping regulators detect and manage risk at the group level. One state regulator that had hosted supervisory colleges for two large insurance groups said the colleges were taken very seriously and served as an opportunity for open and candid conversations about a company and its strategic objectives. Officials from another state regulator said the colleges had enhanced communications among international regulators and had proven to be effective in identifying group-wide risks. In addition, they said that the colleges had allowed regulators from various countries to better understand each other’s practices and processes. Own Risk and Solvency Assessments: An Own Risk and Solvency Assessment is an internal process undertaken by an insurer or insurance group to assess the adequacy of its risk management and current and prospective solvency positions under normal and severe stress scenarios. Large- and medium-size U.S. insurance groups were required to begin regularly conducting Own Risk and Solvency Assessments starting in 2015. One insurance company representative said that Own Risk and Solvency Assessments had helped the company maintain an awareness of all of the risks it had undertaken.
State insurance regulators we spoke with also told us that Own Risk and Solvency Assessments would be an important regulatory tool for them. Enterprise Risk Management tools: Insurance companies engage in enterprise risk management practices to obtain an enterprise-wide view of their risks and help management engage in risk-based decision making. Enterprise risk management generally has two goals: (1) to identify, evaluate, and quantify risks, and (2) to ensure that the organization actively implements risk treatment strategies and manages appropriate risk levels. Examples of specific enterprise risk management practices include identification and categorization of risks, well-defined risk tolerances, risk mitigation with cost-benefit analyses, and stress tests and other modeling approaches. FIO representatives and some state regulators we spoke with said that enterprise risk management practices were important to companies’ risk management. Supervisory colleges, Own Risk and Solvency Assessments, and enterprise risk management are practices that work together to reduce risk. For example, enterprise risk management practices can be used to conduct Own Risk and Solvency Assessments, and one insurer noted that supervisory colleges can help ensure that a company’s enterprise risk management practices are properly identifying and mitigating risks. In its comments to IAIS on the ICS consultation draft, one insurance company suggested that the ICS should be made compatible with other regulatory tools such as Own Risk and Solvency Assessments and supervisory colleges. The comments stated that a supervisory college would likely voluntarily take action if a consolidated assessment of an insurance group revealed solvency concerns. The comments also noted that IAIS should develop standards and processes for supervisors to use in conducting such an assessment and not simply create a prescriptive formula for capital requirements. Collaboration among U.S. 
IAIS Members Has Improved, but Opportunities to Enhance Collaboration Exist U.S. IAIS members actively participated in the development of the new international capital standards for insurers at IAIS and in related U.S. collaborative efforts, which incorporated some leading practices for collaboration but not others. They led and participated in key IAIS committees and voted in the General Meeting. Collaboration among U.S. IAIS members has improved, and U.S. IAIS members and industry stakeholders were generally optimistic. We found that while the U.S. collaborative efforts were consistent with certain leading practices that we have identified, the U.S. IAIS members have opportunities to take additional steps in line with leading practices to enhance and sustain those efforts. U.S. industry stakeholder participation in the development of international capital standards has evolved, and occurs through IAIS, the U.S. collaborative efforts, and through individual agency efforts. U.S. IAIS Members Participate in Key IAIS Committees and Vote in the IAIS General Meeting, and Other U.S. Stakeholders Have Limited Influence As of March 2015, the U.S. IAIS members are NAIC, FIO, and the Federal Reserve. NAIC was a founding member of IAIS in 1994. FIO became a member in 2011, after the Dodd-Frank Act created the office and gave it a range of authorities, including coordinating on international insurance matters, and representing the United States in IAIS, as appropriate. The Federal Reserve became a member in 2013, after FSOC designated some insurers for enhanced supervision by the Federal Reserve. The U.S. IAIS members have played an active role in the development of the international capital standards by participating in the IAIS General Meeting, voting in the Executive Committee, leading and participating in other relevant IAIS committees and subcommittees, and providing comments on IAIS consultation drafts (see fig. 
4), as illustrated by the following: General Meeting: The U.S. IAIS members have 17 of approximately 160 votes in the General Meeting. Through this mechanism, U.S. IAIS members can help elect members of the Executive Committee, and adopt supervisory material developed by IAIS that has not already been adopted by the Executive Committee. Executive Committee: As of March 2015, the U.S. IAIS members had 3 of 24 votes in the Executive Committee: one NAIC member as the co-vice chair, another NAIC member as a voting member, and FIO. Through this mechanism, the U.S. IAIS members (1) help ensure that any supervisory and supporting materials to be adopted by IAIS have been subject to an adequate consultation process among IAIS members and stakeholders, (2) adopt supervisory and supporting material developed by IAIS unless such a decision is deferred to the General Meeting, and (3) appoint Chairs and Vice Chairs for other committees. Other relevant committees: The U.S. IAIS members have leadership roles on six relevant IAIS committees, subcommittees, working groups, and task forces and membership in four additional relevant groups. For example, one NAIC member is the vice chair of the Financial Stability Committee and FIO is the chair of the Technical Committee. Through these mechanisms, U.S. IAIS members can contribute to reaching international consensus through discussions on issues related to financial stability, systemic risk, and macroprudential supervision and surveillance and development of international principles, standards, guidance, and other documents related to insurance supervision. These discussions generally are led by committee chairs and influence the development of consultation documents before the documents are made public. 
Comments on IAIS consultation documents: In addition to participating in discussions on issues related to international capital standards through work in relevant committees, the NAIC provides additional input to the development of the standards by submitting comments on IAIS consultation documents. For example, the NAIC has submitted comments on the BCR and ICS consultation documents. The U.S. IAIS members told us that their work through these mechanisms contributed significantly to the development of the IAIS international capital standards. U.S. IAIS members who chaired committees and subcommittees had the authority to set timelines for reaching decisions on issues and then decide when to bring up an issue for formal decision, and U.S. IAIS members who were members of committees and subcommittees also contributed to consensus building. According to the U.S. IAIS members and several industry stakeholders, their efforts also had a significant effect on specific topical areas. For example, they told us that the U.S. IAIS members were largely responsible for developing the international consensus that led IAIS to include a GAAP-adjusted valuation approach in the ICS consultation document and 2015 field testing. IAIS originally intended for the ICS to use the market-adjusted valuation approach, which many U.S. stakeholders have said is incompatible with U.S. accounting standards, but the U.S. IAIS members said that they had made a successful coordinated effort for the ICS to include the GAAP-adjusted valuation approach as a second approach. FIO also said that setting the final scaling factor in the BCR formula, which can increase or decrease the overall BCR results, was another area where U.S. IAIS members had worked with international counterparts to shape an appropriate consensus.

Collaborative Efforts among U.S. IAIS Members Have Improved, but Are Not in Line with Some Leading Practices

U.S.
IAIS members and many industry stakeholders we interviewed indicated that initially, after FIO and the Federal Reserve joined NAIC as U.S. IAIS members, U.S. IAIS members did not collaborate effectively or speak with a unified voice on international capital standards for insurers. Given their different authorities, the U.S. IAIS members each had different focuses and perspectives related to international capital standards. For example, in 2015, the International Monetary Fund reported that state insurance regulators and the Federal Reserve had different focuses and potential conflicts between their mandates regarding group-wide supervision. The International Monetary Fund found that the state insurance regulators focused on policyholder protection, while the Federal Reserve focused on depositor protection. Furthermore, FIO and the Federal Reserve were new to IAIS and said that they did not have official policies guiding their work in IAIS or in collaboration with other U.S. IAIS members on international capital standards. U.S. IAIS members and stakeholders pointed to areas of public disagreement between FIO and NAIC on issues such as FIO’s potential role in supervisory colleges, the general need for the ICS, and which insurance products would count as nontraditional, noninsurance—a major factor in the formula used to determine which insurers would be designated as G-SIIs. Also, one industry association said that FIO and NAIC running against each other for the chair of the IAIS Technical Committee in the fall of 2012 detracted from the U.S.’s ability to speak with a united voice. Most insurers and an industry association we interviewed indicated that the lack of a unified U.S. view initially reduced U.S. influence in the IAIS, and some insurers indicated that this was one factor that enabled foreign regulators to strongly influence initial IAIS work on capital standards. Early on, the U.S. 
IAIS members took some steps to coordinate their positions on various substantive and procedural issues at IAIS through information sharing. FIO officials said that, as authorized in the Dodd-Frank Act, FIO began to coordinate the U.S. IAIS members by organizing regular phone calls among high-level U.S. officials to discuss key items for IAIS meetings and among staff on technical issues. The Federal Reserve and the NAIC also took steps that helped coordinate their work with the other U.S. IAIS members. For example, Federal Reserve officials told us that they continued sharing information with state regulators on SIFIs that were also designated as G-SIIs, and NAIC officials said that FIO and the Federal Reserve responded to some NAIC invitations to participate in NAIC meetings that addressed international capital standards, such as NAIC conference calls leading up to IAIS meetings, relevant parts of NAIC national meetings, and NAIC ComFrame Development and Analysis Working Group calls. Additionally, all three U.S. IAIS members said that their staff members had long been in regular, informal communication regarding international capital standards. U.S. IAIS members strengthened their focus on collaboration as IAIS activity related to international capital standards increased, and our analysis of information they provided shows that their collaborative efforts are in line with some but not all leading practices for implementing and sustaining interagency collaborative efforts. In our September 2012 report on interagency collaboration, we found that it is difficult to sustain collaborative efforts on issues that touch upon the responsibilities of multiple agencies and identified leading practices for implementing and sustaining interagency collaborative efforts.
Although collaborative mechanisms differ in complexity and scope, they all benefit from certain key features that agencies should consider when implementing and sustaining these mechanisms, such as leadership, outcomes and accountability, and participants, among other things. Appendix II contains more information on the key features we have identified in past reports.

Leadership

The U.S. IAIS members have taken some steps that are in line with leading practices related to leadership and collaborative efforts. However, they have not yet taken steps to help ensure that leadership will be sustained over the long term, consistent with leading practices that we have previously identified. Officials from FIO, the Federal Reserve, and NAIC cited the following examples: The members developed an informal leadership structure and decision-making process for their collaborative efforts, with FIO coordinating the efforts. They established an unofficial steering committee of high-level officials from FIO and the Federal Reserve, and three state insurance commissioners appointed by NAIC. The steering committee provides general leadership, organizes monthly teleconferences to discuss agenda items for upcoming IAIS meetings, and coordinates issues and strategies across work streams. Members also established an informal decision-making process in which they aim for consensus while accepting that they might not always reach it. We have previously said that agreeing on roles and responsibilities helps agencies organize joint and individual efforts, and facilitates decision making. The members identified and agreed on four key technical areas of the ICS to work on—segmentation, valuation, capital requirements, and capital resources—and created work streams around them. Although the work streams do not have official chairs, leadership has been shared among the U.S. IAIS members.
FIO has taken the lead in the segmentation work stream and the valuation work stream, with the Federal Reserve leading technical work on a data template and instructions. The Federal Reserve has taken the lead in the capital requirements work stream, with FIO and NAIC leading work in specific risk categories. Leadership of the capital resources work stream is more evenly distributed among members, with FIO serving as the overall coordinator. We have previously found that distributing leadership responsibility for certain group activities among members can help keep members engaged. High-level staff from each member provided strong leadership to the efforts by actively participating in regular meetings, often in person. We have previously said that committed leadership at all levels of an organization is needed to overcome the many barriers to working across agency boundaries. In addition, we have previously found that interagency groups benefit from involving high-level leaders who could help recruit key participants and make policy-related decisions requiring a high level of authority. The U.S. IAIS members have not yet taken steps to help ensure that leadership will be sustained over the long term. The tenure of high-level officials who participate in the U.S. collaborative efforts—especially the state insurance commissioners who are political appointees—is not guaranteed through the scheduled start of implementation of the ICS and ComFrame in 2019. Federal Reserve and FIO officials said that personal commitment from high-level officials contributed to the effort's success to date, and said that a change in leadership would be a setback for the effort. Federal Reserve officials also said that there was a need to strategically consider how to sustain the collaborative efforts, and said that they were in the process of filling five related positions and were beginning to consider succession planning.
FIO officials said that they have not yet considered succession planning, but they have been trying to establish a precedent for collaboration. NAIC officials agreed that it was important to sustain leadership in the collaborative efforts, noting that they tried to select commissioners who were better able to make long-term commitments for participation in the collaborative effort. NAIC officials also said that because there was external pressure to remain involved in IAIS and interest in the process was not likely to subside, signing a memorandum of understanding or similar document would likely not significantly increase agency commitment. As we have previously said, given the importance of leadership to any collaborative effort, transitions and inconsistent leadership can weaken its effectiveness. Consequently, it is important for participating agencies to consider how leadership will be sustained over the long term.

Outcomes and Accountability

The U.S. IAIS members have taken steps in line with leading practices related to outcomes and accountability for collaborative efforts, but could take additional steps to improve organizational accountability, such as addressing the work done in U.S. collaborative efforts in agency annual reports, as shown in the following examples: To establish shared goals that resonate with all participants, the members took steps such as starting the collaborative effort with the most directly affected participants and then broadening it to include other stakeholders, as well as identifying shared interests early. The members agreed to aim to establish consensus on each of the four technical issues mentioned earlier, and then build on work in these areas to establish a more unified U.S. view on the ICS. We have previously identified that these approaches are effective ways to help define outcomes that represent the collective interests of all participants and gain support in achieving the objectives of the collaboration.
We have also identified that establishing such goals provides agencies with a reason to continue participating in the process. The Federal Reserve and FIO, through Treasury, have goals in their strategic plans that are compatible with those of the collaborative efforts. For example, the Federal Reserve's 2012-2015 Strategic Framework has a strategic objective related to strengthening the stability of the financial sector through the development of policies, tools, and standards. Also, Treasury's 2014-2017 Strategic Plan has strategic objectives related to implementing financial regulatory reform initiatives, addressing threats to financial stability, and advancing U.S. interests through multilateral mechanisms. We have previously identified that federal agencies can use their strategic plans to reinforce accountability for the collaboration by aligning agency goals and strategies with those of the collaborative efforts. Federal Reserve and FIO annual reports have not yet addressed the work done in U.S. collaborative efforts. While both agencies mentioned IAIS in their most recent annual reports, the reports do not discuss the agencies' actions in IAIS or in the collaborative efforts. NAIC's most recent annual report discusses NAIC's actions in IAIS and mentions ongoing discussions with FIO and the Federal Reserve regarding group capital. One high-level FIO official said FIO would consider including discussions of progress made in the collaborative efforts in future annual reports, but as of 2014, the reports had generally focused on the status and progress of broader efforts. Federal Reserve officials also said that there could be an opportunity to report on the collaborative efforts in future annual reports. NAIC officials said that details of collaborative efforts were discussed in NAIC national meetings.
We have previously identified that publicly reporting on collaborative efforts can strengthen participating agencies' commitment to working collaboratively by reinforcing accountability, allowing results to be tracked and monitored. There are several reasons why the collaborative efforts do not include some leading practices that we have identified as key to implementing and sustaining interagency collaborative efforts. Primarily, U.S. IAIS members and industry stakeholders mentioned the following: this was the first time that the federal agencies and state regulators had worked together on international insurance matters in IAIS, and the United States had never before had a supervisory standard for group capital for insurers; U.S. activity surrounding the capital standards was still in its early stages and had increased only recently; U.S. IAIS members are not statutorily required to collaborate with each other, and are sorting through ideological challenges related to the integration of federal authorities in U.S. insurance regulation; and the responsibility for the implementation and enforcement of the proposed standards would be split among many regulators. While U.S. IAIS members and most U.S. insurers and insurance associations we interviewed were optimistic about the recent collaborative efforts, some industry stakeholders said that it was too soon to tell whether the efforts would be effective. U.S. IAIS members and most industry stakeholders we interviewed generally agreed that recent collaborative efforts improved upon past coordination and helped create a more unified U.S. view on the ICS, and also improved engagement with U.S. industry stakeholders. Additionally, U.S. IAIS members and some industry stakeholders said that the collaborative efforts had generated new ideas, such as the GAAP-adjusted valuation approach, which NAIC officials said met the needs of all U.S. parties.
FIO also said that collaboration had improved over time as the participants learned through experience how to best coordinate and share analysis, information, and views. However, some stakeholders said that the effectiveness of the collaborative effort remained unproven because it had yet to achieve its long-term goal of establishing a more unified U.S. view on the ICS, and that it was unclear whether the U.S. IAIS members would be able to agree on related issues, such as insurance group capital standards. Another stakeholder noted that they would like the collaborative efforts to increase interaction with U.S. G-SIIs because such interaction would encourage greater transparency and generate specific, technical feedback from G-SIIs that was necessary to develop and implement the international capital standards. Following additional leading practices related to leadership and outcomes and accountability could help U.S. IAIS members enhance and sustain their collaborative efforts. Although U.S. IAIS members have different authorities and are not required to collaborate in IAIS, they have said that establishing a more unified U.S. view on the ICS is important because doing so would allow the U.S. IAIS members to better contribute to IAIS discussions on capital standards. Because the U.S. IAIS members have yet to meet this goal and will need to collaborate until at least 2019, the scheduled date for IAIS to pass and ask countries to begin to implement the ICS, it is important to ensure that collaborative efforts are effective and can be sustained. Additional steps taken now to enhance and sustain collaboration, while the development of international capital standards is in the relatively early stages, could help U.S. IAIS members better advocate for standards that reflect the interests of U.S. insurance regulators, industry, and consumers over the long term.

U.S. Industry Stakeholder Participation in Development of International Capital Standards Has Evolved

U.S.
industry stakeholders provided some direct input to IAIS on the development of international capital standards. According to FIO, eight insurance companies served as field testers for proposed standards. Also, most of the insurers and insurance industry associations we interviewed submitted comments on consultation documents to IAIS, and two had also submitted relevant research for IAIS consideration. While IAIS received comments representing a diversity of views, some stakeholders noted that IAIS did not always incorporate the comments they submitted via these mechanisms. Additionally, although attendance at IAIS committee meetings was often open only to IAIS members, under earlier IAIS policy many industry stakeholders held observer status, which allowed them to pay annual membership fees in order to participate in select IAIS meetings but not to vote. For example, we observed that IAIS meetings preceding the 2014 IAIS annual conference included a session where five sets of observers gave presentations on the proposed structure and nature of the ICS, and a dialogue between observers and members of the IAIS Technical and Financial Stability Committees subsequently occurred. However, IAIS recently changed its policies for stakeholder consultation and meeting attendance. IAIS issued related consultation documents, solicited stakeholder comments, and voted to pass and implement the policies between July 2014 and January 2015. The new stakeholder consultation policy eliminated observer status but established public consultation sessions with stakeholders on the development of all supervisory and supporting material, public sessions with the Executive Committee, public dialogues and/or hearings, and timely public information on IAIS activities. IAIS has taken steps that demonstrate how it may implement the new stakeholder consultation policy. For example, beginning in February 2015, IAIS started a series of six meetings to discuss ComFrame and capital standard development.
We observed that the first meeting offered stakeholders the opportunity to provide comments on the ICS consultation document and ask related questions of members of the IAIS Capital Development and Field Testing Working Groups, who said that feedback would be incorporated into the 2015 round of field testing, as appropriate. Under the new policy for meeting attendance, committee or subcommittee chairs could invite guests to closed meetings when there was a specifically identifiable need for additional perspective or input into matters being developed at the committee or subcommittee level, to help ensure that all relevant substantive views are considered. While not enough time has passed to assess the effects of changes to IAIS policies for stakeholder consultation and meeting attendance, IAIS, U.S. IAIS members, and U.S. industry stakeholders we interviewed had mixed views on the changes. IAIS said that these changes would make the process of obtaining stakeholder input more effective, efficient, consistent, transparent, and predictable. The Federal Reserve said that the new policies would make the IAIS rulemaking process more transparent and help IAIS be fully independent of the entities it regulates. FIO said that the new policies would promote IAIS efficiency, independence, and transparency. NAIC voted against the new policies, and said that they would decrease IAIS transparency and make it more difficult for IAIS to achieve optimum regulatory outcomes or reach broad consensus on the standards. NAIC noted that those most affected by the standards—the industry and consumers—would not be able to provide as much input as before. U.S. industry stakeholders we interviewed generally expressed negative opinions on the new policies.
Specifically, they were often concerned that the new policies could decrease the transparency of the IAIS capital standard development process and that by the time IAIS allowed them to provide input, it would be too late to make a difference because the decisions would have effectively been made. However, one G-SII said that the new policy for stakeholder engagement was appropriate, reflected key alterations sought by both U.S. industry stakeholders and U.S. IAIS members, and encouraged significant interaction with regulatory standard setters. U.S. industry stakeholders have also been involved in the U.S. collaborative efforts, providing input that informs U.S. IAIS members’ efforts related to the development of international capital standards for insurers in IAIS. For example, agency officials told us the following: U.S. IAIS members worked with eight U.S. insurers who were IAIS field testers. Specifically, they communicated with the field testers and their primary regulators on their experience testing the proposed international capital standards and reviewed the data the field testers planned to submit to IAIS. Officials said that this effort helped them verify that the data were of good quality and understand U.S. data before holding related discussions with foreign regulators in IAIS committees. U.S. IAIS members have involved industry stakeholders with technical expertise in the four work streams mentioned earlier, and incorporated some of the industry stakeholder feedback in their work. U.S. IAIS members held four meetings to discuss the results of field testing and technical issues related to the ICS consultation draft with IAIS field testers and other insurers in August 2014, October 2014, January 2015, and February 2015. The last three meetings included additional industry stakeholders, such as large domestic-only insurers and insurers with foreign parent companies. 
Federal Reserve officials said that they were reviewing comments and considering them as they developed their own position on the ICS in areas such as field testing specifications and potential changes in approach. Additionally, according to Federal Reserve officials, the U.S. IAIS members are planning to hold meetings on similar topics in the near future. Although U.S. IAIS members disagreed on whether the FSOC independent member with insurance expertise would be a relevant participant in U.S. collaborative efforts, U.S. IAIS members agreed that FIO had generally involved the right industry stakeholders. We have previously identified that it is important to ensure that the relevant participants, including organizations from the private sector, have been included in the collaborative effort. In addition to providing direct input to IAIS and being involved in the U.S. collaborative efforts, U.S. industry stakeholders have also discussed the development of international capital standards with U.S. IAIS members through other mechanisms, described in the following examples: FIO discussed international capital standards with the Federal Advisory Committee on Insurance (FACI), a committee that Treasury created to provide advice and recommendations that assist FIO in carrying out its statutory authority and whose members include a range of industry representatives. For example, according to FACI documents from September 2013 and August 2014, FIO provided FACI with high-level explanations of objectives for international capital standards and issues related to implementation, and responded to questions from FACI members. In November 2014, FIO gave a presentation to FACI that provided additional information on topics including U.S.-specific activities related to the international capital standards, such as field testing and collaborative efforts to develop a unified U.S. view.
Federal Reserve officials told us that they had accepted numerous requests for informal meetings with insurers on how international policies could potentially affect them. NAIC committees that address international capital standards for insurers—such as the International Insurance Relations Committee and ComFrame Development and Analysis Working Group—held open meetings, through which industry stakeholders could learn about NAIC's work at IAIS and provide both conceptual and technical input. For example, from 2013 through 2015, the NAIC groups held open conference calls and meetings on issues such as IAIS observer and stakeholder meetings; designation of G-SIIs and development of G-SII policy measures; the role of capital in ComFrame; IAIS work related to the development of the BCR, HLA, and ICS; and NAIC comments for submission to IAIS on the BCR and ICS consultation documents.

Conclusions

IAIS is in the early stages of developing international capital standards for insurers, and key decisions still need to be made. The development process will continue until at least 2019 and could affect large, internationally active U.S. insurers. Effective long-term collaboration among U.S. interests in this process is essential to ensuring a reasonable outcome for the U.S. insurance industry and its regulators. The international standards are being developed in a large multilateral forum in which many national regulators advocate for standards that will align with their national interests. In this multilateral setting, the U.S. members could better advance U.S. interests and concerns with a more unified voice. Given that U.S. IAIS members have different authorities and areas of focus, they may not be likely to reach similar positions without effective coordination. Further, because the development process will span at least 4 more years, a unified U.S. presence with sustained leadership is essential. Recently, the U.S.
IAIS members have increased their focus on collaborating with each other and with U.S. stakeholders, and are aiming to establish a more unified U.S. view on the ICS. Engaging in leading collaboration practices, such as sustaining long-term leadership and developing better public reporting of their efforts, would help U.S. IAIS members enhance their efforts and better advocate for the interests of U.S. insurance regulators, industry, and consumers.

Recommendation

To enhance and sustain future U.S. participation in the development of international capital standards for insurers, the Secretary of the Treasury should direct the Director of FIO, in consultation with the Federal Reserve and NAIC, to enhance future collaborative interagency efforts by following additional leading practices for collaboration, such as taking steps to sustain leadership over the long term and publicly reporting on their efforts, for example in annual reports.

Agency Comments and Our Response

We provided a draft of this report to FIO, the Federal Reserve, FSOC, the Office of the U.S. Trade Representative, and NAIC for review and comment. FIO concurred with our recommendation, and its written comments are reprinted in appendix III. The Federal Reserve, FSOC, the U.S. Trade Representative, and NAIC provided us with technical comments, which we incorporated as appropriate. In concurring with our recommendation that FIO enhance future collaborative efforts by following additional leading practices, FIO said that the agency would build on its existing collaboration process by following leading collaboration practices discussed in the report. Further, FIO said that it would discuss U.S. IAIS members' collaboration in FIO's annual report. Finally, it noted that the office would take steps to sustain U.S. leadership at IAIS over the long term.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Director of the Federal Insurance Office, the Chair of the Board of Governors of the Federal Reserve System, the Secretary of the Treasury as the Chairperson of FSOC, the U.S. Trade Representative, and the President of NAIC. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or evansl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

To examine the development and potential effects of the international capital standards for U.S. insurers, we reviewed (1) the status of the development and implementation of the international standards; (2) what is known about the potential effects of applying international capital standards to U.S. insurers; (3) industry and other stakeholder views on the need for an international group-level capital standard for insurance companies; and (4) the extent to which U.S. regulators are collaborating with each other, and considering the views of industry and other stakeholders, in developing a U.S. position on international capital standards. To address all of these objectives, we interviewed insurance industry stakeholders, reviewed IAIS documentation, and attended relevant meetings and conferences.
Specifically, we interviewed federal agencies—the Federal Insurance Office, the Board of Governors of the Federal Reserve System, and the Financial Stability Oversight Council (FSOC)—as well as the National Association of Insurance Commissioners (NAIC) and several former and current state insurance regulators that will likely supervise internationally active insurance groups (IAIG). We spoke with the offices of the former insurance commissioner from Connecticut, as well as the offices of the current insurance commissioners from New Jersey, New York, Nebraska, and Pennsylvania. Additionally, we spoke with the current insurance commissioner from Missouri because he was a member of FSOC. We also interviewed representatives of the International Association of Insurance Supervisors (IAIS), credit rating agencies, the American Academy of Actuaries, and the National Conference of Insurance Legislators. In addition, we interviewed representatives of all three U.S.-based insurance groups that have been designated as global systemically important insurers (G-SII). Using the IAIS criteria for identifying IAIGs and SNL Financial data, which we determined to be reliable for these purposes by reviewing related documentation and conducting electronic testing of the data, we identified the U.S.-based companies that would likely meet the criteria and interviewed three of these companies. We also interviewed two non-U.S.-based companies that would likely be IAIGs; two large, internationally active U.S.-based insurers that would not likely meet the criteria for being IAIGs; as well as a large U.S.-based company that is not internationally active but had participated in the U.S. collaborative efforts. We selected companies to include both property/casualty and life insurers, as well as those that were participating as field testers for the international capital standards. 
We also interviewed two insurance industry associations—the Property Casualty Insurers Association of America and the American Council of Life Insurers. Additionally, to obtain their views on the international capital standards, we interviewed regulators from two other countries and insurance industry associations from three other countries that had recently implemented similar types of capital standards for insurers, had a large presence of U.S.-based insurers, and had either a G-SII or potential IAIG domiciled in the country. We also reviewed relevant documentation related to standards, including consultation drafts of the standards, stakeholder comments on the draft standards, and IAIS documentation, such as on financial stability and identifying G-SIIs. Finally, we attended the 2014 IAIS annual meeting in Amsterdam, an IAIS stakeholder meeting in Los Angeles, as well as three NAIC meetings related to the development of the standards. To examine the need for and potential effects of the international capital standards, we conducted a literature review of 38 studies that reviewed systemic risk or international capital standards, identified through online databases such as ProQuest and EconLit. We created a standardized template to capture information from each study. We used the studies as testimonial evidence regarding differing viewpoints on the need for and the potential effect of enhanced capital standards, and we reviewed their methodologies to ensure that they were sufficiently reliable for these purposes. We also interviewed two academics who have studied these issues. In addition, we analyzed data from SNL Financial to determine the number of G-SIIs and IAIGs that were offering variable annuities with guaranteed benefits, and guaranteed investment contracts. We determined the data to be reliable for these purposes by reviewing related documentation and conducting electronic testing of the data. To assess the extent to which U.S. 
regulators are collaborating with each other and industry stakeholders in developing a U.S. position on the standard, we reviewed past GAO reports that establish criteria for effective collaboration. We also reviewed agency documentation, such as strategic plans and annual reports, to better understand the extent to which the agencies were meeting these criteria. We also spoke with officials at the U.S. Trade Representative about its potential involvement in the implementation of the standards. The officials clarified that the U.S. Trade Representative would not be involved in the implementation of international capital standards for insurers. We conducted this performance audit from July 2014 to June 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Key Features and Issues to Consider When Implementing Collaborative Mechanisms Given agencies’ long-standing challenges working across organizational lines, in 2005 we identified the following practices that can help enhance and sustain collaboration among federal agencies: define and articulate a common outcome; establish mutually reinforcing or joint strategies; identify and address needs by leveraging resources; agree on roles and responsibilities; establish compatible policies, procedures, and other means to operate across agency boundaries; develop mechanisms to monitor, evaluate, and report on results; reinforce agency accountability for collaborative efforts through agency plans and reports; and reinforce individual accountability for collaborative efforts through performance management systems. 
In 2012, we built on our past work and developed key issues for Congress and others to consider when implementing interagency mechanisms that the federal government uses to collaborate. These key issues and features are listed in table 1, below. Appendix III: Comments from the Federal Insurance Office Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Lawrance L. Evans, Jr., (202) 512-8678, evansl@gao.gov. Staff Acknowledgments In addition to the contact named above, Patrick A. Ward (Assistant Director), Winnie Tsen (Analyst-in-Charge), Jordan Anderson, Nancy Barry, Bethany Benitez, Chloe Brown, Emily Chalmers, David Dornisch, Janet Eakloff, Courtney LaFountain, Scott McNulty, Joseph Silvestri, Jena Y. Sinkfeld, Sarah Veale, and Jack Wang made significant contributions to this report.
Large, internationally active insurance companies accounted for 28 percent of the aggregate insurance premiums underwritten in the United States in 2014. IAIS is developing international group-level capital standards for these insurers. Although these standards are not yet complete and U.S. regulators have not yet determined how they might be implemented, some regulators and insurers have expressed concerns. GAO was asked to review the potential effects of the standards, the need for them, and U.S. involvement in their development. This report examines (1) the status of the development and implementation of the international standards; (2) what is known about their potential effects; (3) views on the need for the standards; and (4) the extent to which U.S. regulators are collaborating in developing a U.S. position on the standards. To address these questions, GAO reviewed IAIS and U.S. agency documentation and relevant literature; assessed the extent of collaboration compared to leading practices; and interviewed regulators, IAIS officials, insurers, academics, and other stakeholders that would be affected by or have commented on the standards. International capital standards establishing the amounts of capital that large, internationally active insurers could be required to maintain are in the early stages of development, and much about them remains uncertain. For example, the International Association of Insurance Supervisors (IAIS) has not finalized the methodologies that will be used to determine the required capital levels. Further, implementing the standards at the group level in the United States could be challenging since states, the primary regulators, focus on individual insurance entities rather than on group-level entities or holding companies. At this time, it is unclear which U.S. regulator would implement and enforce the standards or how they would compare with current U.S. capital standards. 
With so many unknowns, some stakeholders agreed that it was too early to determine the effects of the proposed standards. However, some stakeholders said that any effects could be minimal, since U.S. insurers generally hold high levels of capital. Other stakeholders said that potential positive effects could include the promotion of comparable standards across jurisdictions and the removal of incentives for companies to select locations based on regulatory differences. Some stakeholders also mentioned potential negative effects, including higher costs for insurers required to hold additional capital that could create incentives to stop offering some products or to raise prices. Stakeholders expressed mixed views on the need for international capital standards to address systemic risk. Many stakeholders said that traditional insurance activities were not likely to pose systemic risk, which has been described as a key reason for pursuing the standards. But other stakeholders said that nontraditional noninsurance activities, such as credit default swaps and guaranteed investment contracts, could increase insurers' interconnectedness with other financial market participants and cause systemic effects should an insurer fail. These types of activities contributed to financial problems for the American International Group, Inc. during the 2007-2009 financial crisis. IAIS officials and others said that international capital standards could help address risks from these activities. But some state regulators and industry representatives noted that current U.S. risk-based capital standards and other regulatory tools adequately protected U.S. policyholders and that regulators were coordinating to address potential group-wide risks. The U.S. 
members of IAIS—including the Federal Insurance Office (FIO), the Federal Reserve, and the National Association of Insurance Commissioners (NAIC)— have improved coordination among themselves as a group but could do more to incorporate leading practices for collaboration. GAO found that the collaborative efforts members had made were consistent with some leading practices, such as establishing shared goals. But U.S. IAIS members have not followed other leading practices, such as ensuring that leadership will be sustained in the long term and publicly reporting on their collaborative efforts. The members said that their efforts were still in the early stages. Adopting these practices would allow U.S. IAIS members to better advocate for standards that reflect the interests of U.S. insurance regulators, industry, and consumers.
Background The CRAF program was created in 1951 and its importance was reaffirmed by the National Airlift Policy in 1987. The National Airlift Policy states that the military will rely on the commercial air carrier industry to provide the airlift capability required beyond that available in the military airlift fleet. Additionally, the policy includes nine guidelines to meet airlift requirements in peacetime and wartime. These guidelines direct that policies be designed to increase participation in CRAF; enhance the mobilization base of the U.S. commercial air carrier industry; provide a framework for dialogue and cooperation with commercial air carriers; and promote the development of technologically advanced transport aircraft and related equipment. According to DOD officials, these guidelines serve as the objectives of the CRAF program. CRAF commitments are divided into three levels or stages—Stages I, II, and III—depending on the size of the operations or contingency in which DOD is involved. As defined in the CRAF contract between DOD and its commercial partners, Stage I activation supports expanded operations beyond DOD’s routine daily operations and provides the equivalent of 30 passenger and 30 cargo aircraft; Stage II activation is used in the event of a major regional contingency and supporting mobilization and provides the equivalent of 87 passenger aircraft and 75 cargo aircraft; and Stage III activation supports two major regional contingencies and provides the equivalent of 136 passenger aircraft and 120 cargo aircraft. When CRAF is activated, carriers have a specified time frame in which to provide aircraft, with pilots and crews, to DOD. Once activated, air carriers continue to operate and maintain the aircraft with their resources; however, AMC controls the aircraft missions. The majority of DOD passenger flights require carrier flexibility, as many DOD missions are not routine in their locations or timing. 
Charter passenger carriers fly the majority of DOD peacetime, contingency, and Stage I business because charter passenger carriers’ businesses are designed with the flexibility to provide airlift based on the customer’s (DOD’s) schedule. Scheduled passenger carriers operate commercial flights on regular routes and can ill afford unplanned disruptions to their airline networks. However, because of their large fleet sizes, the scheduled carriers are a critical component of the CRAF fleet. As an incentive to encourage participation in CRAF, DOD contracts exclusively with CRAF participants to fly its daily, peacetime passenger and cargo airlift business and any surge for contingencies. As articulated in an August 2008 DOD-sponsored CRAF study, carriers earn the entitlement to DOD’s peacetime business through points awarded based on their aircraft commitments to each CRAF stage. According to the study, the greater the commitment by the carrier, the greater the amount of peacetime business to which a CRAF participant is entitled. These points become the basis of a carrier’s entitlement to compete for the procurement of peacetime passenger and cargo airlift business. To maximize the value of these entitlements, CRAF participants have formed into three teaming arrangements, which are created and managed by the participants themselves. These teams comprise a mix of passenger and cargo carriers that join together to pool their entitlements to DOD business; that is, the entitlement directly associated with a carrier’s individual commitment is combined with the entitlements earned by other carriers on their team. DOD assigns peacetime business to the team based on the team’s total entitlement and availability, not to the individual carrier. Once that business is assigned to the team, the team leader, or administrator, is responsible for accepting and distributing the business to the carriers at its discretion. 
DOD’s 2008 National Defense Strategy requires the military to assess, mitigate, and respond to risk that could potentially damage national security. Identifying and managing risk is also an important goal of all successful internal control programs. Internal controls include the organization, policies, and procedures used by agencies to reasonably ensure that, among other things, critical programs like CRAF achieve their intended results effectively and efficiently. Internal control standards require that management provide for an assessment of the risks the agency faces from both external and internal sources. These standards also require that there be control activities—that is, the policies, procedures, techniques, and mechanisms that enforce management’s directives—in place to help ensure that actions are taken to address risk. DOD Has Not Assessed the Risks That Changes in Charter Passenger Capabilities and DOD’s Outsized Cargo Needs Might Have on the CRAF Program Charter Passenger Capability Has Declined Although DOD depends heavily on CRAF charter passenger capability, this capability has declined substantially over the past 5 years, and DOD has not assessed the risk that this decline may pose to the CRAF program. DOD depends on the charter passenger industry to move more than 90 percent of its peacetime requirements, as well as all contingency surges. While the charter passenger capability has, historically, satisfied DOD’s requirements, there has been a 60 percent decline in this capability since 2003 due mainly to a declining demand for charter airlines in the commercial sector. Figure 1 shows this decline in CRAF participants’ charter passenger aircraft from a high of 66 aircraft in 2004 to 29 in 2008. 
Additionally, the figure shows that, even as commercial passenger carriers’ revenues from DOD peacetime business increased to historic levels after 2001, and the amount of business available to charter passenger carriers was higher than it had ever been, the charter aircraft capacity continued to decline. This decline in charter passenger capability led to a finding in the August 2008 DOD-sponsored CRAF viability study that this capability may become marginal for unexpected peacetime and contingency requirements. However, the study did not reflect that, in April 2008, CRAF’s largest charter passenger carrier ceased operations due to bankruptcy. The sudden loss of 16 charter passenger airplanes from the CRAF program left DOD with only 3 charter passenger carriers and 19 total charter passenger aircraft until May 2008, when another passenger carrier dropped its scheduled services and committed 20 charter aircraft to CRAF. However, according to industry officials and confirmed by DOD, the sudden reduction in charter aircraft after the April 2008 bankruptcy delayed by about a week the return home of some Maine National Guard troops redeploying from Iraq. Because of a limited charter aircraft capability, the Commander, U.S. Transportation Command, personally called CRAF scheduled carriers and asked them to free up aircraft to transport these troops back to the United States. The unannounced bankruptcy of DOD’s largest charter passenger carrier demonstrates the volatility of the charter passenger industry and raises questions about the industry’s ability to continue to meet DOD requirements without a CRAF activation involving the larger, scheduled carriers to satisfy the requirements the charter passenger industry was filling. There is little or no excess capacity among scheduled carriers. 
The five scheduled passenger carriers we spoke with told us that, due to market conditions and shrinking fleets that have been tailored to meet their commercial demands more efficiently, scheduled carriers are reluctant to commit aircraft to peacetime operations, contingencies, and CRAF Stage I beyond a small, required contribution. According to airline and industry officials, pulling a single aircraft out of a scheduled passenger carrier’s daily planned service can cause major disruptions to its routes; therefore, to support any stage of CRAF activation, scheduled air carriers depend on a decrease in their commercial demands, similar to the reductions seen after September 11, 2001, that would make aircraft available. If the charter passenger industry business continues to decline, DOD will likely be forced to turn to scheduled air carriers to fulfill daily and Stage I requirements currently met by the charter carriers. However, given the scheduled carriers’ smaller fleets, DOD has not quantified the number of charter passenger aircraft it may need on a daily basis and in contingencies and Stage I, or the risk of having a smaller charter passenger capability to handle these requirements. Nevertheless, DOD officials told us they are confident that sufficient CRAF participants will respond to a call for airlift, whether during peacetime or in an activation. DOD’s Need to Move Outsized Cargo Has Increased Since 2005, DOD’s need to move outsized cargo to support peacetime and contingency operations has increased with the acquisition of more than 15,000 MRAP vehicles. Because there are no U.S. commercial cargo aircraft capable of moving outsized cargo such as MRAP vehicles into Iraq and Afghanistan, DOD is using foreign-owned carriers to support such movements to supplement its military airlift capability. As of April 2009, DOD had moved a total of 3,890 MRAPs by air, of which almost 80 percent were moved using foreign-owned carriers flying large Antonov-124 aircraft. 
We found, and DOD confirmed, that in the 2005 Mobility Capabilities Study DOD planned for U.S. commercial cargo carriers participating in the CRAF program to move only bulk cargo and did not identify a need for these carriers to move outsized cargo. However, without some supplemental capability—such as the use of foreign-owned carriers—DOD may be unable to move outsized cargo into areas of crisis in a timely manner, which could limit its ability to meet future airlift requirements. According to DOD analysts involved in the ongoing Mobility Capabilities and Requirements Study—2016, DOD will again plan for CRAF cargo participants to carry only bulk cargo. As DOD moves additional troops and equipment into land-locked Afghanistan and in similar scenarios in the future, the need to airlift MRAPs and other large equipment, like helicopters, may continue to require commercial carriers to supplement military airlift. However, it is not clear whether foreign-owned carriers would be able or willing to fly in certain scenarios. For example, DOD officials acknowledged that foreign-owned carriers, for security reasons, would not likely be used during a CRAF activation. Moreover, we believe the use of foreign-owned companies in support of U.S. military operations could be problematic if or when foreign-owned carriers find supporting a U.S. contingency to be inconsistent with their national interests. For example, in 2008, when the U.S. Transportation Command was using Russian-based carriers to fly outsized cargo to Iraq, Afghanistan, and other locations, U.S. military aircraft ferried Georgian troops from Iraq back to Georgia in anticipation of a potential confrontation with Russian troops. We believe that risk may be increased in such scenarios in the future. 
Without further analysis of DOD’s options for meeting its outsized cargo needs, including the potential role of commercial carriers, DOD may be unable to move outsized cargo into areas of crisis and have that cargo arrive in a timely manner, which could increase risk for DOD operations. However, DOD officials told us that DOD is using the foreign-owned aircraft only to ease the high stress on military aircraft and because such use is less expensive than military aircraft, not because there is an insufficient number of military aircraft available to fly this outsized cargo. DOD Is Not Fully Aware of How Changes in Its Charter Passenger Airlift Capabilities and DOD’s Outsized Cargo Needs Affected CRAF Because It Has Not Conducted Risk Assessments DOD is not fully aware of the extent to which these changes may have affected the CRAF program’s ability to meet DOD’s future transportation requirements because DOD has not conducted risk assessments as described in the 2008 National Defense Strategy. In this strategy, DOD defines risk to the national defense in terms of the potential for damage to national security combined with the probability of occurrence and a measurement of the consequences should the underlying risk remain unaddressed. This strategy also states that DOD must account for future challenges and their associated risks to meet the objective of winning our nation’s wars, and it describes the need to assess, mitigate, and respond to risk in the execution of defense programs critical to national security. In the case of the CRAF program, risk assessments can be used to determine if there are any gaps, shortfalls, or redundancies in the charter passenger or outsized cargo segments that could prevent DOD from meeting future airlift requirements. 
The most recent DOD-sponsored CRAF study, issued in August 2008, predicted that passenger charter capability may become marginal, but the capabilities reviewed in the study did not include the further declines in this capability that occurred in 2008. The study also did not quantify the risk associated with the passenger charter capability decline that has already occurred. In accordance with both GAO and DOD management internal controls, a risk assessment could inform program managers by establishing the maximum and minimum acceptable risk for the CRAF program. For example, it could identify the numbers of charter passenger aircraft necessary to meet DOD requirements. Without a risk assessment, DOD will continue to be uncertain what level of CRAF charter participation is necessary to fulfill requirements, and DOD and industry decision makers will not be able to begin to take steps to address the risks. Furthermore, according to DOD officials, DOD has not conducted a risk assessment that examines outsized cargo movement, including the use of commercial air carriers to supplement its military fleet, and identifies any consequences of relying on foreign-owned carriers to meet peacetime and contingency needs. As previously stated, DOD is using foreign-owned carriers to move MRAPs and other outsized equipment to Afghanistan and Iraq. However, the 2005 Mobility Capabilities Study predates the acquisition of more than 15,000 outsized MRAPs. Additionally, the August 2008 CRAF study did not assess any CRAF outsized cargo movement. A risk assessment could determine whether a gap, shortfall, or redundancy exists in relation to the U.S. commercial and military outsized cargo capability. In addition, a risk assessment could provide information to decision makers regarding the possibility of potential damage to national security from the reliance on foreign-owned carriers and the probability of such damage in future contingencies. 
Without such a risk assessment, DOD may not know the most effective method for transporting outsized cargo, or whether any methods present potential risk to national security. DOD Has Not Issued Policies That Would Strengthen Management of the CRAF Program DOD’s management of the CRAF program has not provided CRAF air carrier participants with a clear understanding of some critical areas of the program; such an understanding could strengthen the program’s effectiveness and its ability to support program objectives. Although management internal controls such as clearly articulated policies can help meet program objectives, DOD has not developed policies related to four of the CRAF program objectives as outlined in the National Airlift Policy. These four objectives are: enhancing the mobilization base, promoting aircraft modernization, increasing air carrier participation, and providing a framework for dialogue and cooperation with commercial air carriers. As outlined by both GAO and DOD, management internal controls help provide reasonable assurance that, through effective management, programs can achieve their objectives. According to these management internal controls, one way to help assure that a program’s objectives are met is to establish clearly articulated policies. Policies, a form of management control, are, according to U.S. Transportation Command, intended to provide guidance and procedures to carry out operations or achieve objectives. However, we found that CRAF business partners do not have a clear understanding of important aspects of the CRAF program because DOD lacks policies in critical areas of the CRAF program that could help DOD meet its program’s objectives. U.S. Transportation Command officials have stated that the CRAF contract with the carriers serves as policy. However, the contract does not contain some elemental items of policy, including objectives, goals, and measures of effectiveness as outlined in GAO and DOD management internal controls. 
DOD Has Not Developed Policies Related to Four CRAF Program Objectives DOD Has Not Developed Policy Concerning 60/40 Rule Enforcement DOD has not developed policy regarding the enforcement of its business rules, such as the 60/40 rule, that would help strengthen the CRAF mobilization base. More than 40 years ago, DOD established measures to ensure that CRAF air carriers had both commercial and DOD revenue streams. These measures evolved into what is now known as the 60/40 rule, a rule defined in the CRAF solicitation providing that no CRAF carrier should collect more than 40 percent of its revenues from DOD business. Carriers that earn more than 40 percent of their revenue from DOD may be penalized by reductions in their entitlement to DOD business. The original goals of the rule were to ensure that CRAF carriers maintained a strong business base, efficient operations, and modern fleets, all of which would prevent carriers from going out of business when DOD demands were low. The rule would also provide DOD with a surge capability to draw on if demand grew suddenly. Although DOD created the 60/40 rule with these intended goals, several CRAF carriers told us that they are unaware of the intent of the rule today because they are not sure if they have to follow the rule, or if it is even being enforced. Some CRAF carriers have broken the 60/40 rule by depending in large part on DOD for their revenue. However, because there is no written DOD policy describing the rule and its enforcement, no carrier could tell us when, or under what conditions, the rule is actually enforced. According to airline officials, this lack of guidance affects carriers’ business plans because they are not sure whether to account for 60/40 rule compliance when determining the size of their fleets. Unclear enforcement parameters also make it difficult for carriers to plan whether to lease or purchase aircraft, or how many to acquire. 
Three DOD-sponsored CRAF studies completed in the last 3 years have all given differing recommendations regarding the 60/40 rule, adding to the ambiguity as to whether the rule is or will be in effect. Additionally, it is unclear whether the CRAF objectives of increasing participation and meeting DOD surge demands are being met. Without policy that clearly states the guidelines and objectives of the 60/40 rule, CRAF carriers may not be able to size their fleets properly to meet DOD demands or to maintain capacity for DOD to draw on, which may decrease the mobilization base of the CRAF program. DOD Has Not Developed Policy Concerning CRAF Fleet Modernization DOD has not developed policies that promote CRAF fleet modernization, although DOD officials have recognized the need for a more modern CRAF fleet. The National Airlift Policy directs that policies be created to promote the development of technologically advanced transport aircraft in order to ensure a commercial airlift capability. In addition, a December 2007 DOD-sponsored CRAF study acknowledged the importance of modernization and recommended that DOD develop policies to encourage CRAF carriers to modernize their existing fleets. Moreover, DOD officials have recognized the necessity of a modernized commercial air fleet by repeatedly testifying before Congress about its importance for the continued viability of the program. However, DOD has not provided CRAF participants with policies that include guidelines, objectives, or economic incentives that would encourage modernization. Because the charter passenger industry plays such a large role in moving DOD passengers, we believe it is in DOD’s interest to ensure the commercial airlines have guidelines and incentives, such as a rate structure that would pay carriers more to fly newer airplanes, to assist in modernizing their fleets. 
Two of DOD’s largest remaining charter passenger carriers are flying large numbers of aircraft listed on the Federal Aviation Administration’s Aging Aircraft List. As the December 2007 DOD-sponsored CRAF study warned, these planes will soon be retired as the costs of inspections, maintenance, and life-extension work become prohibitive. Since these aircraft are being used to fly DOD business almost exclusively, charter passenger carriers told us that they look to DOD to provide guidance and incentives to modernize. DOD officials told us that they cannot influence modernization or force the carriers to modernize. However, without DOD policy that provides specific modernization guidelines, CRAF carriers may not see a reason, or have a business case, to take the steps needed to modernize their aircraft. DOD Has Not Developed Policy Concerning Oversight of Distribution of Peacetime Business DOD has not developed policies regarding the oversight of the distribution of its peacetime airlift business, which may negatively affect CRAF air carrier participation and DOD’s ability to manage the CRAF program effectively. DOD’s incentive system of contracting with CRAF participants to fly its daily peacetime business is intended to meet the program objective of increasing air carrier participation in CRAF by providing each CRAF participant with a reasonable share of peacetime business. DOD policy that includes guidance, instructions, regulations, procedures, or rules clarifying the CRAF incentive system, along with some oversight of the distribution of peacetime business, would give CRAF carriers a clearer understanding of this important process. According to DOD officials, the process and procedures for distributing DOD’s peacetime airlift business have not been described in policy and are not overseen by DOD. As discussed earlier, DOD awards individual carriers points based on the number and type of aircraft they commit to CRAF. 
These points become the basis of a carrier’s entitlement to compete for the procurement of peacetime passenger and cargo airlift business. To maximize these points, the carriers have formed themselves into three teams that have their own agreements on how the business will be distributed among the team members. U.S. Transportation Command officials confirmed that they distribute peacetime business to the teams and have no further involvement in how the teams distribute peacetime business among the members. The officials also said that they consider the existing system to be adequate in meeting program objectives. In the absence of DOD oversight and control, some of the CRAF carriers have expressed concerns that peacetime business distribution is not transparent and can be inequitable. Some CRAF participants have told us that teams distribute DOD peacetime contracts in amounts disproportionate to an individual air carrier’s CRAF commitment. These carriers also told us that the result is that some CRAF participants receive less DOD business than their entitlement reflects. Some CRAF carriers told us that the execution of the incentive system discourages participation and, in some instances, could cause carriers to go out of business. We understand that U.S. Transportation Command and Air Mobility Command have no involvement with the formation of the teams or the agreements teams have reached with their members. However, without DOD policies and oversight of the final distribution of the peacetime business that flows from the incentive system established by DOD, DOD cannot be sure that this system is accomplishing its goal of enhancing carrier participation in CRAF. DOD Has Not Developed Policy Concerning DOD and CRAF Carrier Communication DOD has not developed policy that establishes a framework for dialogue and cooperation with commercial air carriers that would invite CRAF participants to comment on pending program decisions and facilitate sharing information with them. 
Although facilitating an effective partnership between DOD and commercial carriers is a stated objective of the CRAF program, airline officials stated that DOD has not involved CRAF participants in some important program decisions that have had a significant impact on the participants’ business plans. For example, DOD announced a policy change that decreased the amount of money carriers were reimbursed for fuel, a change that is allowed under the CRAF contract. Carriers told us that they factor fuel reimbursements into their yearly business plans and are not prepared to adjust to a significant pricing change during the middle of a year, especially when they had no knowledge of the change ahead of its implementation and thus could not plan in advance for its effects. In addition, until recently, DOD had not shared information from DOD-sponsored studies on the CRAF program with CRAF carriers. For example, most carriers we talked to told us that, until late 2008, they had neither seen nor heard of any of the four DOD-sponsored CRAF studies completed in the past 6 years. Of the CRAF carriers we interviewed, only one carrier reported receiving a copy of any DOD-sponsored CRAF study. DOD officials have said that mechanisms are in place for effective CRAF communication between DOD and CRAF carriers, such as using trade associations to perform what DOD officials describe as an “industrial reality check” and holding industry days and conferences. However, according to some carriers, communication through trade associations is not sufficient because some carriers are not allowed a voice in meetings, and some carriers are not members of the associations at all. Several carriers stated that DOD has little communication with them beyond using trade associations and annual meetings. 
With a clearly described policy that establishes a framework for an effective partnership fostering communication, DOD could strengthen its management of the CRAF program and enhance its relationship with the carriers, thus helping to ensure continued participation in CRAF. Conclusions Given the importance of CRAF in moving passengers and cargo for DOD to support peacetime and contingency operations and major operations requiring CRAF activation, it is critical for the CRAF program to be able to meet DOD’s future needs. By policy, statute, and contract, DOD depends on CRAF business partners that increasingly find themselves in a challenging business environment. If the charter passenger industry continues to decline, DOD could increasingly turn to scheduled air carriers to fulfill the daily and Stage I requirements that are currently being met by the charter carriers; however, the scheduled carriers may not be willing or able to fly these missions and meet DOD’s airlift needs. Additionally, the absence of sufficient outsized cargo capability could jeopardize national security by preventing DOD from moving outsized cargo into areas of crisis within the time frames commanders need it to arrive. Until risk assessments are conducted and actions are taken to mitigate any risks that are identified, DOD and industry decision makers will not be fully informed about risks in the CRAF charter passenger segment and in outsized cargo capability that could prevent CRAF from meeting DOD’s airlift requirements. Moreover, the lack of appropriate policies that address critical areas of the CRAF program hinders DOD’s ability to meet the objectives of the program. 
Until DOD develops policies that give commercial air carriers a clear understanding of critical aspects of the CRAF program, such as enforcement of business rules (including the 60/40 rule), specific modernization guidelines, distribution of peacetime business, and a framework for communication, and thereby strengthens its management of the program, DOD cannot provide reasonable assurance that the CRAF program will meet its primary objective of providing critical airlift to support DOD operations. Recommendations for Executive Action To assist DOD with management of the CRAF program, we are making the following two recommendations for executive action. First, to help DOD identify and analyze risks associated with achieving program objectives, we recommend that the Secretary of Defense direct the Commander, U.S. Transportation Command, to conduct risk assessments as outlined in DOD’s National Defense Strategy that (1) evaluate the declining U.S. charter passenger capability by establishing the maximum and minimum acceptable risk for the CRAF program, expressed in terms of numbers of charter passenger aircraft necessary to meet DOD requirements, and (2) evaluate the lack of an outsized cargo capability to supplement military capability and the extent to which the reliance on foreign-owned carriers is appropriate; and to develop appropriate policies and procedures for mitigating any identified risks. Second, to strengthen the effectiveness of the critical partnership between DOD and the U.S. 
commercial air carrier industry and the management of the CRAF program to achieve its objectives, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics (Transportation Policy) to develop policy that (1) establishes enforcement guidelines for the basic CRAF business rules, to include intent, objectives, and measures of effectiveness mechanisms; (2) establishes incentives, objectives, and measures of effectiveness required to ensure modernization of the CRAF fleets; (3) establishes and describes oversight mechanisms by which DOD will monitor how peacetime airlift business is distributed to ensure that its CRAF incentive program is working as intended; and (4) establishes and describes the mechanisms by which DOD includes CRAF participants in commenting on pending program decisions and in information sharing, including objectives and measures of effectiveness for these activities. Agency Comments and Our Evaluation In written comments on a draft of this report, DOD did not agree with our first recommendation to conduct risk assessments regarding the declining charter passenger capability and the lack of an outsized cargo capability as part of the Civil Reserve Air Fleet, but partially agreed with the part of the recommendation to develop policies and procedures to mitigate any identified risks. DOD agreed with our second recommendation to develop policy for aspects of the CRAF program. DOD’s comments are reprinted in appendix II. While DOD disagreed with our recommendation to conduct risk assessments, DOD agreed with the value of conducting a risk assessment on the declining U.S. charter passenger capability, stating that this has already been evaluated by the CRAF viability study conducted by IDA. DOD also stated that, based on the recommendations of the IDA study, DOD is already examining the declining passenger charter fleet and potential mitigation strategies. 
However, as we stated in our report, IDA’s report included data only through 2007 and did not include data regarding the 2008 business termination of a carrier that provided nearly 50 percent of the charter passenger capability available to DOD. Also, while the IDA report stated that the charter passenger industry may become marginal, the data analysis that supported this statement did not establish the maximum and minimum acceptable risk for the CRAF program. Therefore, we continue to believe that our recommendation to establish acceptable risk levels is viable and important. DOD also disagreed with the second part of our recommendation concerning the need to conduct a risk assessment on the lack of a CRAF outsized cargo capability, stating that the CRAF program is not intended to provide outsized cargo capability. In its comments, DOD stated that its use of foreign carriers to transport outsized cargo is a strategy to reduce costs, save military flying hours and flight crews for higher priority missions, reduce military footprint, or provide flexible contract length/timing. DOD also stated that it is not an indication of a shortfall in the DOD outsized cargo capability or the CRAF program. However, as we reported, DOD used foreign-owned carriers flying AN-124 aircraft to move high priority outsized cargo (MRAPs) into Iraq instead of the organic fleet of C-5s and C-17s. We did not state that there was a shortfall in either the CRAF program or DOD outsized capability. Rather, we point out that, if DOD is to know whether there is a shortfall, gap, or redundancy in that capability, particularly given the addition of over 15,000 MRAPs, it would need to conduct a risk assessment. We continue to believe that a risk assessment of this issue would give DOD specific information that would help it shape future strategic transportation requirements. 
DOD partially agreed with the third part of our recommendation pertaining to the need to develop appropriate policies and procedures for mitigating any identified risks regarding the decline of charter passenger capability and lack of outsized cargo capability. DOD stated that U.S. Transportation Command is examining potential mitigation strategies for the declining U.S. passenger charter segment. However, during our review, we found no evidence that U.S. Transportation Command was developing policies and procedures to mitigate any risks associated with declining charter passenger capability and outsized cargo capability. DOD disagreed with the need to develop any mitigation strategies for an outsized cargo capability since CRAF is not intended to carry outsized cargo. As stated above, DOD’s use of foreign-owned carriers to move outsized MRAPs would lead us to believe that there might be a future need for policies and procedures to mitigate any shortfall or gap. DOD agreed that there is a need for comprehensive policy governing all of the CRAF program elements identified in our draft report. However, DOD did not identify what, if any, specific actions it would take in response to our recommendation. We encourage DOD to establish enforcement guidelines for CRAF business rules; objectives and measures of effectiveness for modernization; oversight mechanisms describing how peacetime business should be distributed; and mechanisms for information sharing. We are sending copies of this report to interested congressional committees; the Secretary of Defense; and the Under Secretary of Defense (Acquisition, Technology and Logistics). In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8365 or solisw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
Key contributors to this report are listed in appendix III. Appendix I: Recent Department of Defense Studies Sustaining the Civil Reserve Air Fleet (CRAF) Program, Institute for Defense Analyses, May 1, 2003. Economic Review of the Civil Reserve Air Fleet (CRAF) Program, Institute for Defense Analyses, December 15, 2007. Civil Reserve Air Fleet (CRAF) Study Report, Council for Logistics Research, July 13, 2008. Civil Reserve Air Fleet: Economics and Strategy, Institute for Defense Analyses, August 22, 2008. Appendix II: Comments from the Department of Defense Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Ann Borseth, Assistant Director; Renee Brown; Jeremy Hawk; Jeffrey R. Hubbard, analyst-in-charge; Mae Jones; Karen Thornton; and Steve Woods made key contributions to this report. Related GAO Products MRAP Rapid Acquisition: Rapid Acquisition of Mine Resistant Ambush Protected Vehicles. GAO-08-884R. Washington, D.C.: July 15, 2008. Airline Deregulation: Reregulating the Airline Industry Would Likely Reverse Consumer Benefits and Not Save Airline Pensions. GAO-06-630. Washington, D.C.: June 9, 2006. Commercial Aviation: Bankruptcy and Pension Problems Are Symptoms of Underlying Structural Issues. GAO-05-945. Washington, D.C.: September 30, 2005. Commercial Aviation: Legacy Airlines Must Further Reduce Costs to Restore Profitability. GAO-04-836. Washington, D.C.: August 11, 2004. Foreign Investment in U.S. Airlines: Issues Relating to Foreign Investment and Control of U.S. Airlines. GAO-04-34R. Washington, D.C.: October 30, 2003. Military Readiness: Civil Reserve Air Fleet Can Respond as Planned, but Incentives May Need Revamping. GAO-03-278. Washington, D.C.: December 30, 2002.
To move passengers and cargo, the Department of Defense (DOD) must supplement its military aircraft with cargo and passenger aircraft from commercial carriers participating in the Civil Reserve Air Fleet (CRAF) program. Carriers participating in CRAF commit their aircraft to DOD to support a range of military operations. In the Fiscal Year 2008 National Defense Authorization Act, Congress required DOD to sponsor an assessment of CRAF and required GAO to review that assessment. GAO briefed congressional staff on its observations. As discussed with the staff, GAO further analyzed some of the issues identified in its review. This report assesses (1) the extent to which DOD has assessed potential risks to the CRAF program, and (2) the extent to which DOD's management of CRAF supports program objectives. For this engagement, GAO reviewed DOD-sponsored CRAF study reports and interviewed study leadership. GAO also interviewed over 20 of 35 CRAF participating carriers that responded to a request for a meeting, DOD officials, and industry officials. DOD needs to establish the level of risk associated with declining charter passenger capabilities and DOD's increased need to move very large cargo. Although DOD depends on CRAF charter passenger aircraft to move more than 90 percent of its peacetime needs, there has been nearly a 55 percent decline in this CRAF capacity since 2003. In addition, since 2003, DOD's large cargo movement needs have increased with the acquisition of over 15,000 Mine Resistant Ambush Protected vehicles. Since there are no U.S. commercial cargo aircraft capable of moving cargo this size into Iraq and Afghanistan, DOD is using foreign-owned carriers to assist its military aircraft in such movements. However, there are scenarios where foreign-owned carriers may be unwilling or not allowed to fly. As a result, the lack of a commercial U.S. outsized cargo capability might restrict DOD's ability to meet its large cargo airlift needs in a timely manner. 
DOD has not quantified the risks these challenges pose to the CRAF program's ability to meet DOD's future transportation requirements because DOD has not completed risk assessments as described in the 2008 National Defense Strategy. Until risk assessments are conducted, DOD will not be sufficiently informed about potential risks in the CRAF charter passenger segment and in very large cargo airlift capability that could prevent DOD from managing its future airlift needs and the CRAF program effectively. DOD's management of CRAF has not provided CRAF participants with a clear understanding of some critical areas of the program, an understanding that could strengthen the program's ability to support its objectives. Although internal controls such as policies can help meet program objectives, CRAF business partners do not have a clear understanding of DOD's expectations concerning four CRAF objectives--an enhanced mobilization base, modernization, increased air carrier participation, and communication--because DOD has not developed policies in these four areas. First, DOD has not developed policies regarding the enforcement of its business rules, such as the 60/40 rule that states that participants should fly only 40 percent of their total business for DOD. DOD does not consistently enforce this rule, and this may decrease the mobilization base since it is difficult for carriers to size their fleets to meet DOD demands. Second, DOD has not developed policies or economic incentives that promote CRAF modernization, and this may hinder CRAF carriers from modernizing their aircraft. Third, DOD has not developed policies regarding oversight of the distribution of its peacetime airlift business, the primary incentive to carriers for participating in CRAF. DOD has no involvement in this distribution, and the perceptions of some carriers that this process is unfair could ultimately reduce carrier participation in CRAF. 
Fourth, DOD has not developed policy concerning communication with the carriers on CRAF studies or proposed changes to the CRAF program. DOD has not always communicated with carriers prior to implementing changes or completing studies. Until DOD develops policies that provide carriers with a clear understanding of CRAF, DOD cannot provide reasonable assurance that CRAF will meet its primary objective of providing critical airlift.
Background Pests—weeds, insects, and pathogens—can cause significant crop losses. Since World War II, producers have relied primarily on chemical pesticides for pest management, contributing to tremendous gains in farm productivity. For example, average corn yields per acre have more than tripled over the last 50 years, partially because of chemical pesticides. As a result, our food supply is relatively inexpensive and abundant compared with that of other nations. Maintaining such productivity is important not only for meeting current needs, but also for meeting the future needs of a growing world population. While the use of chemical pesticides has resulted in important benefits, their use also can have unintended adverse effects on human health and the environment. Exposure to pesticides can cause a range of ill effects in humans, from relatively mild effects such as headaches, fatigue, and nausea to more serious effects such as cancer and neurological disorders. In 1999, EPA estimated that nationwide there were at least 10,000 to 20,000 physician-diagnosed pesticide illnesses and injuries per year in farm work. Environmental effects are evident in the findings of the U.S. Geological Survey, which reported in 1999 that more than 90 percent of water and fish samples from streams and about 50 percent of all sampled wells contained one or more pesticides. The concern about pesticides in water is especially acute in agricultural areas, where most pesticides are used. Furthermore, the use of chemical pesticides has caused or exacerbated some pest problems. Chemical pesticides become less effective as pests develop resistance to them, just as human pathogens develop resistance to antibiotics. As a result, growers increase pesticide applications and eventually switch to other pesticides that also may become ineffective. 
More than 500 insect pests, 270 weed species, and 150 plant diseases are now resistant to one or more pesticides, making these pests harder and more costly to control. In addition, many chemical pesticides kill not only the target pests but also the beneficial organisms that would naturally help keep pest populations in check. Without the benefit of these natural controls, growers become more dependent on chemical pesticides, further exacerbating resistance problems. Because of this scenario, sometimes referred to as the “pesticide treadmill,” the National Academy of Sciences concluded that there is an urgent need for an alternative approach to pest management that can complement and partially replace chemically-based pest management practices. For several decades, the federal government also has recognized the need to combine a wide array of crop production practices to effectively control pests before they reach economically damaging levels—a strategy known as integrated pest management. The IPM strategy combines cultural, genetic, biological, and chemical pest-control methods, as well as careful monitoring of pests and their natural enemies. IPM practices and methods vary among crops and regions of the country. For example, in some regions, growers introduce insects that naturally prey on particular pests. In other areas of the country, growers use combinations of pest management practices, including rotating crops, altering planting dates, or planting pest-resistant crop varieties. In December 1977, the Secretary of Agriculture announced that USDA’s policy was to develop and encourage the use of IPM to adequately control pests while causing the least harm to human health and the environment. During the ensuing years, USDA undertook research, development, and demonstration activities to support IPM adoption. 
In 1993, the Deputy Secretary of Agriculture, with the support of the EPA Administrator, renewed the federal government’s commitment to IPM by setting a goal that IPM would be implemented on 75 percent of total crop acreage by 2000 to reduce pesticide use and the associated risks. In 1994, USDA announced an initiative to help achieve the goal through research, outreach, and education. Several USDA agencies are involved in the IPM initiative. USDA’s Office of Pest Management Policy (OPMP) is the department’s lead office on pest management policy, with responsibility for coordinating USDA’s IPM activities. USDA’s Agricultural Research Service conducts research on pests that have a major national impact on agriculture and tests biological IPM techniques over large land areas. USDA’s Cooperative State Research, Education, and Extension Service provides research grants to state and land-grant universities to enhance understanding of IPM-related topics such as life cycles of pests and beneficial organisms, pest resistance to chemical control, and the development of pest-resistant crop varieties. The extension service also helps to provide IPM information to growers through education, outreach, and training programs. USDA’s Natural Resources Conservation Service helps to support grower implementation of IPM practices through education, outreach, and limited financial incentives. USDA’s Forest Service also conducts IPM-related research, such as studying IPM methods for controlling invasive weeds. In addition, USDA’s National Agricultural Statistics Service and USDA’s Economic Research Service gather and analyze information about IPM. USDA estimates that in fiscal year 2000, the department spent about $170 million on activities in support of IPM adoption. In addition, EPA awarded grants totaling about $500,000 in fiscal year 2000 for research and outreach to support IPM implementation. 
USDA Estimates That IPM Has Been Implemented on About 70 Percent of Crop Acreage, but USDA Has Not Focused IPM on Meaningful Outcomes Based on a sample of growers, USDA estimates that some level of IPM had been implemented on about 70 percent of the nation’s crop acreage as of the end of crop year 2000, an implementation rate close to USDA’s 75-percent goal. However, this implementation rate is not a good indicator of progress toward an originally intended purpose of IPM—reducing chemical pesticide use. In estimating the IPM implementation rate, USDA counts a wide variety of farming practices without distinguishing between those practices that tend to reduce chemical pesticide use and those that may not. In fact, our analysis of USDA’s data shows that the subset of IPM practices that tend to reduce reliance on chemical pesticides, often referred to as biologically-based practices, has been far more sparsely implemented than the overall IPM rates indicate. For example, while USDA estimated that IPM had been implemented on 76 percent of corn acreage in crop year 2000, the implementation rates of biologically-based IPM practices on corn cropland ranged from less than 1 percent for disrupting pest mating to about 18 percent for use of biological pesticides. USDA Estimates That IPM Has Been Implemented on About 70 Percent of Crop Acreage USDA established its goal of implementing IPM on 75 percent of U.S. crop acreage in 1993, but USDA did not develop its current IPM definition and method for measuring progress toward that goal until 1997. Beginning in that year, USDA’s National Agricultural Statistics Service collected data annually on the implementation of various farming practices. The service, at the request of OPMP, grouped about 25 farming practices into four IPM categories—prevention, avoidance, monitoring, and suppression (PAMS). Prevention practices keep a pest population from infesting a crop or field. 
These practices include removing crop residue, cleaning implements after fieldwork, and tilling the soil to manage pests. Avoidance practices are used when pest populations exist in a field but crop damage can be avoided. These practices include adjusting planting dates, rotating crops, and planting crop varieties that are genetically modified to resist insects, pathogens, or nematodes. Monitoring practices provide proper identification of pests and information about the extent and location of pest infestations. These practices include pest trapping, weather monitoring, and soil testing. Suppression practices control infestations when pest levels become economically damaging. These practices include applying biological pesticides, preserving or releasing beneficial organisms that reduce pest populations, and using pheromones to disrupt mating. For acreage to be counted toward the IPM goal, USDA’s definition calls for growers to implement on their land at least one farming practice in three of the four PAMS categories. A detailed explanation of USDA’s PAMS categories and IPM practices is given in appendix III. Using the method discussed above, USDA estimated that IPM implementation gradually increased from 51 percent of crop acreage in 1997, to 57 percent in 1998, to 58 percent in 1999. In 2000, IPM implementation jumped to an estimated 71 percent. The National Agricultural Statistics Service and OPMP are uncertain of the reasons for this sudden increase, although they offered several possible explanations for the change. The service cited extremely low commodity prices, combined with escalating energy and input costs, among other conditions, as possible reasons for growers to use a broader range of pest management practices in an attempt to reduce their costs. In addition, both the service and OPMP noted that the methods for collecting pest management data changed from on-site interviews to telephone interviews, which may have affected the responses received. 
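USDA's counting rule, at least one practice implemented in three of the four PAMS categories, can be illustrated with a simple check. The sketch below is illustrative only; the practice names and category groupings are simplified assumptions drawn from the examples in this report, not USDA's official survey coding.

```python
# Illustrative sketch of USDA's IPM counting rule: acreage counts toward the
# IPM goal if the grower implements at least one practice in three of the
# four PAMS categories. Category contents here are simplified assumptions.
PAMS = {
    "prevention": {"remove crop residue", "clean implements", "till soil"},
    "avoidance": {"adjust planting dates", "rotate crops", "plant resistant varieties"},
    "monitoring": {"pest trapping", "weather monitoring", "soil testing"},
    "suppression": {"biological pesticides", "beneficial organisms", "pheromone mating disruption"},
}

def counts_toward_ipm_goal(practices_used):
    """Return True if the practices span at least three of the four PAMS categories."""
    categories_hit = sum(
        1 for practices in PAMS.values() if practices & set(practices_used)
    )
    return categories_hit >= 3

# A grower who rotates crops, tests soil, and releases beneficial organisms
# hits avoidance, monitoring, and suppression: three categories, so counted.
print(counts_toward_ipm_goal(
    {"rotate crops", "soil testing", "beneficial organisms"}))  # True
```

As the rule makes clear, acreage can be counted even when none of the practices used is biologically based, which is the measurement weakness discussed below.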
An OPMP official told us that the survey results suggest that certain survey questions may have been misinterpreted. For example, the survey results indicate a decrease in the use of genetically-modified crop varieties in cotton, and an increase in the use of biological pesticides in cotton—trends that are contrary to the OPMP official’s expectations. Notwithstanding the uncertainty about the reasons for the jump in IPM implementation between 1999 and 2000, the IPM estimate is not a good indicator of progress toward reducing chemical pesticide use. Crop acreage can be counted in the IPM estimate even if growers use a combination of practices that may result in little or no reduction in pesticide use. Economic Research Service economists found that some IPM practices, such as monitoring for pests or clearing fields of crop residue, either increased or had little effect on chemical pesticide use. However, the economists found that biologically-based IPM practices—such as protecting beneficial organisms or disrupting pest mating—reduced pesticide use and toxicity substantially. Yet, USDA’s definition of IPM does not distinguish biologically-based practices from other IPM practices, and USDA’s estimate includes acreage that received none of the biologically-based practices that tend to reduce pesticide use. Implementation of Biologically-Based IPM Practices Is Limited USDA’s 1994 strategic plan stated that the department’s policy was to support implementation of “biologically-based” IPM practices. In 1998, USDA reported to Congress that some crops were managed under “rudimentary” IPM methods, and that the IPM initiative would be geared toward helping growers move toward more biologically-based practices. In addition, EPA representatives told us that their agency has tried to encourage the adoption of biologically-based pest management practices. In spite of these policy statements, USDA’s IPM definition does not emphasize biologically-based pest management practices. 
As a result, while the USDA implementation rate indicates relatively broad adoption of IPM, the adoption of biologically-based practices is much more limited. As shown in table 1, the implementation rates of biologically-based practices are relatively low in all crops, particularly compared to USDA’s estimate of overall IPM implementation for those crops. IPM Has Resulted in Some Environmental and Economic Benefits, but Use of the Riskiest Pesticides Remains Substantial USDA-sponsored research projects, various grower associations, and major food processors have demonstrated that some IPM practices can reduce pesticide use as well as pest management costs, while still maintaining crop yield quality and quantity. Furthermore, the National Academy of Sciences and the American Crop Protection Association report that IPM leads to better long-term pest management because reliance on chemical controls alone reduces their effectiveness due to pest resistance. However, while IPM has yielded significant benefits in certain crops and locations, IPM does not yet appear to have quantifiably reduced nationwide chemical pesticide use. In fact, total use of agricultural pesticides, measured in pounds of active ingredient, has actually increased since the beginning of USDA’s IPM initiative. Use of a subset of chemical pesticides, identified by EPA as the riskiest, has declined somewhat since the IPM initiative began. However, use of this subset still comprises over 40 percent of total agricultural pesticide use. IPM Practices Have Produced Environmental and Economic Benefits in Specific Crops USDA research scientists, crop growers, and food processors provided us information demonstrating that in several crops and locations, the use of IPM practices reduced pesticide use or toxicity, as well as pest management costs, without sacrificing crop quality or yield. 
Apple and pear growers in Washington, Oregon, and California, in conjunction with USDA’s Agricultural Research Service, used a biologically-based IPM practice to control the codling moth, the key pest of these fruits in the western United States. Previously, toxic chemicals had been used to control codling moths. In 1995, the Agricultural Research Service organized apple and pear growers over a large area of the three states to employ an alternative pest-management strategy using pheromones to control the codling moth. Pheromones mimic the scent of female insects to attract male insects, reducing pest mating and thereby reducing pest populations. This project has reduced the need for chemical pesticides by at least 80 percent, reduced pest management costs, and produced a higher-quality harvest with at least a 60-percent reduction in codling moth damage.
Potato growers in Wisconsin, in conjunction with the World Wildlife Fund, the University of Wisconsin, and EPA, used biologically-based IPM practices to control the weeds, insects, and diseases that damage potato production. Conventional pest management for potatoes involves heavy use of chemical pesticides. To reduce the use of high-risk pesticides, Wisconsin potato growers adopted IPM practices that enhance the potato plant’s natural ability to resist pests, and switched to reduced-risk pesticides that do not adversely affect beneficial insects. As a result, the growers reduced their use of potentially toxic pesticides by nearly half a million pounds between 1997 and 2000. Many growers have found that profits increased because of the reduced costs for chemical pesticides.
Several major food processors encourage their growers to use IPM practices as a means to significantly reduce the amount of chemical pesticides applied to crops. 
For example, one food processor assists its vegetable growers in using IPM practices, including release of beneficial insects, disruption of pest mating, and application of biological pesticides. According to the food processor, the number of synthetic pesticide applications on crops grown for the company has been reduced by 50 percent or more, production costs have been reduced, and crop yield and quality have been maintained. For example, a group of growers in the processor’s IPM program eliminated their use of synthetic pesticides, reduced their insect management costs, and experienced 85 percent less insect damage on tomatoes than non-IPM growers. These results were achieved through using pheromones to disrupt pest mating and through applying biological pesticides. In addition to these results, the National Academy of Sciences reports that IPM also helps to provide better long-term pest control than chemical control alone. According to the academy, U.S. cotton production provides a compelling example of the limitations of relying on chemical pesticides alone. Years of widespread use of chemical pesticides in cotton eventually resulted in elimination of the natural organisms that controlled cotton pests. Populations of cotton pests increased despite increased pesticide applications, and the pests became resistant to chemical control. As a result, acreage planted to cotton decreased dramatically in the southeastern states, and cotton production was threatened in Texas and California. Finally, the development of an IPM program, which combined reduced pesticide applications with mating disruption and other IPM practices, brought the cotton pests under control and helped restore cotton production. The IPM program resulted in reduced pest-control costs, and it increased yields, land values, and acreage planted in cotton. 
The American Crop Protection Association, a group representing manufacturers, formulators, and distributors of pesticides and other crop protection products, concurs that IPM provides better crop protection than chemical control alone. The association recognizes that combining the use of chemical pesticides with other IPM strategies prolongs the effectiveness of chemical pesticides by minimizing the development of pest resistance. Similarly, the Global Crop Protection Federation, a worldwide association representing the crop protection industry, views IPM as “the way forward for the crop protection industry.” Specifically, the federation states that IPM provides stable and reliable yields and production, reduces the severity of pest infestations, reduces the potential for problems of pest resistance, and secures the agricultural environment for future generations. The Riskiest Subset of Pesticides Still Comprises a Substantial Portion of Agricultural Pesticide Use Although some IPM practices have resulted in significant reductions in pesticide use, nationwide use of chemical pesticides in agriculture has not declined since the beginning of the IPM Initiative. Chemical pesticide use in agriculture—which accounts for about three-fourths of all pesticides used in the United States—has increased from about 900 million pounds in 1992 to about 940 million pounds in 2000, according to EPA, even as total cropland has decreased. However, data on total pesticide use aggregates relatively benign pesticides, such as sulfur and mineral oil, with more risky chemical pesticides, including organophosphates, carbamates, and probable or possible carcinogens. This subset of pesticides—which has been identified by EPA as posing the greatest risk to human health—is suspected of causing neurological damage, cancer, and other adverse human health effects. 
As shown in figure 1, use of the riskiest subset of pesticides decreased from 455 million pounds of active ingredient in 1992 to about 390 million pounds in 2000. However, use of the riskiest pesticides still accounts for over 40 percent of the pesticides used in U.S. agriculture. The reasons for the decreased use of the riskiest pesticides are unclear. However, EPA officials suggested that the decrease may have occurred because some pesticides (1) were discontinued because of EPA regulatory action; (2) were discontinued because of business decisions by the chemical pesticide industry; (3) became noncompetitive compared to newer, cheaper pesticides; (4) became less effective as the target pests developed resistance; or (5) were used less with the introduction of crop varieties genetically modified to resist insects. USDA officials added that use of the riskiest pesticides may have declined because some growers have made progress in implementing nonchemical pest management practices for some crops. Several Impediments Limit Realization of IPM’s Potential Benefits USDA’s initial commitment to the IPM initiative has not been buttressed with the management infrastructure necessary to maximize the benefits of IPM in American agriculture. Specifically, USDA has not provided any departmental entity with the authority necessary to lead the IPM initiative. Furthermore, six USDA agencies, state and land-grant universities, and EPA are all conducting IPM-related activities with little or no coordination of these efforts. Moreover, USDA has vacillated about the intended results of the IPM initiative, causing confusion among IPM stakeholders about what IPM is supposed to achieve. As a result of these shortcomings, considerable federal resources are being spent on IPM without a clear sense of purpose and priorities, and thus a number of farm-level impediments remain unaddressed. 
Such impediments include insufficient delivery of IPM information to growers, the growers’ perceived financial risks of adopting IPM practices, and the higher cost of some alternative pest management products and practices. Although IPM stakeholders suggested that federal efforts and/or financial subsidies might alleviate farm-level impediments to IPM, it is questionable whether such efforts would be effective unless the management deficiencies of the IPM initiative are corrected first. The Government Performance and Results Act calls for linking intended results of federal efforts to program approaches and resources, and thus provides a framework for USDA to address the management deficiencies of its IPM efforts. The IPM Initiative Is Hampered by Serious Leadership, Coordination, and Management Deficiencies When USDA launched its IPM initiative in 1994, the department announced that the initiative would combine the IPM-related activities of USDA agencies into a single department-wide effort. However, the department did not endow any entity with the authority necessary to lead such an effort. Instead, authority over IPM resources remains fragmented among the multiple USDA agencies involved in the IPM initiative. At the outset of the initiative, USDA established the IPM Coordinating Committee, consisting of representatives from the agencies with responsibilities for IPM research and implementation. The committee’s role was to provide interagency guidance on policies, programs, and budgets—albeit without actual decision-making authority. In 1998, the functions of the committee were transferred to the newly created Office of Pest Management Policy (OPMP). However, OPMP, like its predecessor, was not given authority to direct the department’s IPM activities and spending. OPMP’s Director acknowledged that the office does not have sufficient authority to lead the IPM initiative. Lack of effective coordination is another major shortcoming of the IPM initiative. 
We recently reported that crosscutting programs—such as IPM—that are not effectively coordinated waste scarce funds, confuse and frustrate program stakeholders, and undercut the overall effectiveness of the federal effort. When the IPM initiative began, USDA acknowledged that strong coordination among the department’s agencies and between the department and other public and private-sector organizations would be required to effectively support IPM implementation. Early in the initiative, USDA attempted such coordination through its IPM Coordinating Committee. In 1998, USDA transferred the coordination responsibility to OPMP and stated in a report to the Congress that it was “committed to maximizing the impact of existing resources by improving the coordination of IPM and related pest management programs.” However, OPMP has done little to coordinate IPM activities, according to officials from several USDA agencies, EPA, and the crop protection industry. As a result, six USDA agencies, state and land-grant universities, and EPA are conducting IPM activities with no assurance that federal resources are being used on the highest priorities, or that duplication and gaps in efforts are being avoided. For example, EPA, the Agricultural Research Service, the Cooperative State Research, Education, and Extension Service, and the Forest Service all conduct or provide grants for IPM research without a coordination mechanism in place. Moreover, the crop protection industry conducts substantial research related to IPM, but USDA does not coordinate federal research with private-sector research. Representatives from the American Crop Protection Association told us that there is little interaction between government and industry on IPM-related research, although the association has approached USDA about coordinating research efforts. 
The IPM initiative also lacks clear objectives that articulate the results to be achieved from federal expenditures, a key prerequisite to effective management, as emphasized in the Government Performance and Results Act of 1993. Although USDA set a goal of having 75 percent of the nation’s crop acreage under IPM practices by 2000, the department has vacillated on the intended results of achieving this goal. Initially, the Deputy Secretary of Agriculture clearly stated that the IPM initiative was intended to reduce pesticide use. Subsequently, USDA’s strategic plan for IPM stated that IPM was intended to “meet the needs of agriculture and the American public” but made no mention of reduced pesticide use as an intended result. During the course of our review, USDA and EPA suggested that an appropriate objective for IPM could be reduction in pesticide risk to human health and the environment, but neither agency adopted that objective. The federal IPM initiative’s lack of clarity on intended results has caused confusion among IPM stakeholders across the nation. For example, a survey of 50 state IPM coordinators indicated that, of the 45 respondents, 20 believed that the IPM initiative is primarily intended to reduce pesticide use, 23 did not, and 2 were undecided. During the course of our review, we met with members of a national IPM committee representing state land-grant university scientists involved with IPM. Most of the members of this committee evidenced confusion about the environmental results the IPM initiative is intended to accomplish, and stated that the federal government, particularly EPA, needs to provide clearer guidance on this matter. Several other IPM stakeholders we interviewed during the course of our work echoed the need for clearer guidance to focus the IPM initiative on tangible environmental results. 
A related management shortcoming of the federal IPM initiative is that USDA has not devised a method for measuring the environmental or economic results of IPM implementation. In USDA’s 1994 strategic plan for implementation of IPM, the department stated that it would assess the economic and environmental impacts of IPM. However, very limited progress has been made in this area. Researchers have conducted some studies of IPM’s results, but only for certain crops and locations. Although economists from USDA’s Economic Research Service have summarized these studies, service officials acknowledge that no method exists to comprehensively or systematically measure the national environmental and economic results of IPM. Service officials told us that they have been trying to develop a method for measuring IPM’s results, but have not done so—7 years after recognizing the need to assess the environmental and economic results of IPM. Moreover, as the officials stated, it is difficult to assess the initiative’s results when the department has not clearly articulated the initiative’s intended outcomes. Farm-Level Impediments Limit IPM Implementation As a result of deficiencies in the leadership, coordination, and management of the IPM initiative, a number of farm-level impediments to IPM implementation remain largely unaddressed, including the following:
IPM implementation requires that growers have current information on the latest technologies and how to use them. Crop consultants, both in the public sector and the private sector, can provide such information and assistance to growers. In 1994, USDA’s Economic Research Service stated that inadequate knowledge of IPM alternatives and too few crop consultants to deliver IPM services were impediments to IPM adoption. In 2000, representatives of the land-grant universities involved in IPM acknowledged that in many areas of the country there are not enough crop consultants to assist growers in implementing IPM, particularly for lower-value crops such as corn and soybeans.
Some growers are reluctant to adopt IPM because of a concern that alternative pest management practices could increase the risk of crop losses. Crop insurance is one way to reduce that perceived or actual risk, and in 1994 USDA committed to using its crop insurance programs to encourage grower adoption of IPM practices. However, our discussions with IPM stakeholders indicated that little progress has been made in this regard. The IPM Institute of North America recently received a USDA Small Business Innovation Research grant to study the potential for providing crop insurance for growers who implement IPM in corn and cotton, but the federal crop insurance program does not yet cover losses related to IPM implementation.
Some of the pesticides that pose reduced risks to human health and the environment are more expensive than conventional chemical pesticides. In addition, because reduced-risk pesticides generally are pest-specific, more than one of them may be necessary to replace any one conventional broad-spectrum pesticide. Many IPM stakeholders we interviewed from USDA, EPA, the land-grant universities, and the private sector told us that the higher cost of reduced-risk pesticides is a major impediment to IPM adoption.
IPM stakeholders suggested the need for federal involvement to address these impediments. For example, some suggested that the federal government could foster crop consulting by subsidizing grower costs for these services. IPM stakeholders also suggested that the federal government could subsidize the cost of special insurance to reduce the financial risk of adopting IPM, just as the government subsidizes the cost of traditional crop insurance. 
Further, IPM stakeholders suggested that the federal government could subsidize grower costs for reduced-risk pesticides. While these measures might help advance IPM implementation, they would involve substantial federal expenditures. Without first improving USDA’s management infrastructure, the department’s ability to solve farm-level impediments will continue to be hampered. Conclusions Chemical pesticides play an important role in allowing Americans to enjoy an abundant and inexpensive food supply. However, these chemicals can have adverse effects on human health and the environment, and their long-term effectiveness will be increasingly limited as pests continue developing resistance to them. Consequently, it has become clear that sustainable and effective agricultural pest management will require continued development and increased use of alternative pest management strategies such as IPM. Some IPM practices yield significant environmental and economic benefits in certain crops, and IPM can lead to better long-term pest management than chemical control alone. However, the federal commitment to IPM has waned over the years. The IPM initiative is missing several management elements identified in the Government Performance and Results Act that are essential for successful implementation of any federal effort. Specifically, no one is effectively in charge of federal IPM efforts; coordination of IPM efforts is lacking among federal agencies and with the private sector; the intended results of these efforts have not been clearly articulated or prioritized; and methods for measuring IPM’s environmental and economic results have not been developed. Until these shortcomings are effectively addressed, the full range of potential benefits that IPM can yield for producers, the public, and the environment is unlikely to be realized. 
Recommendations We recommend that the Secretary of Agriculture establish effective department-wide leadership, coordination, and management for federally funded IPM efforts; clearly articulate and prioritize the results the department wants to achieve from its IPM efforts, focus IPM efforts and resources on those results, and set measurable goals for achieving those results; and develop a method for measuring the progress of federally funded IPM activities toward the stated goals of the IPM initiative. If the Secretary of Agriculture determines that reducing the risks of pesticides to human health and the environment is an intended result of the IPM initiative, we also recommend that the Secretary collaborate with EPA to focus IPM research, outreach, and implementation on the pest management strategies that offer the greatest potential to reduce the risks associated with agricultural pesticides. Agency Comments We provided USDA and EPA with drafts of this report for their review and comment. In response, the Secretary of Agriculture agreed with our assessment of the IPM program and stated that, based on our recommendations, USDA plans to make the management of the program a high priority. In addition, she stated that USDA will (1) develop a comprehensive, authoritative, and focused roadmap for IPM; (2) prioritize the results that USDA wants to achieve; and (3) set measurable goals for the IPM initiative and devise methods for measurement of progress toward the goals. (See app. IV.) The Director of EPA’s Office of Pesticide Programs said that EPA appreciated our efforts to highlight this issue, and that promoting IPM is an important component of EPA’s approach toward reducing risks posed by pesticides. The Director also acknowledged that as efforts to promote IPM continue, EPA/USDA cooperation will become even more vital. (See app. V.) We conducted our review from September 2000 through June 2001 in accordance with generally accepted government auditing standards. 
See appendix I for our scope and methodology. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies to other congressional committees with jurisdiction over agriculture programs; the Secretary of Agriculture; and the Administrator, EPA. Copies will also be made available to others upon request. If you or your staff have any questions about this report, please call me at (202) 512-3841. Key contributors to this report are listed in appendix VI. Appendix I: Scope and Methodology To assess the level of adoption of integrated pest management (IPM) in U.S. agriculture, we analyzed the U.S. Department of Agriculture’s (USDA) data on pest management practices from the National Agricultural Statistics Service’s annual Fall Agricultural Survey for crop years 1997 through 2000. The service had published the results of its survey as individual pest management practices, but it had not yet analyzed the data to assess progress toward the 75-percent goal. Therefore, we requested that the service analyze the data using USDA’s definition of IPM in order to assess the overall rate of implementation. We spoke with officials at USDA to determine which pest management practices are considered biologically-based. We then examined the adoption rates of the biologically-based subset of pest management practices. To assess the environmental and economic results of IPM, we (1) interviewed IPM stakeholders in the government, academic, agriculture, nonprofit, trade/commodity association, and corporate sectors; (2) examined related government and nongovernment reports and documentation about IPM; and (3) analyzed use of the subset of agricultural pesticides riskiest to human health. 
Stakeholders interviewed included USDA officials from the Agricultural Research Service; Economic Research Service; Cooperative State Research, Education, and Extension Service; National Agricultural Statistics Service; Natural Resources Conservation Service; and Office of Pest Management Policy. We also interviewed officials from the Environmental Protection Agency (EPA) and the U.S. Geological Survey. In addition, we spoke with scientists from major land-grant universities about their research on the environmental and economic effects of IPM. We also interviewed individual farmers, commodity groups representing farmers, private crop consultants, and the crop protection industry. We examined supporting documentation from these groups to assess what is known about the overall environmental and economic impact of IPM adoption. To assess whether IPM adoption has resulted in a measurable decline in the use of agricultural chemicals, we reviewed available data on pesticide use from EPA and the National Center for Food and Agricultural Policy. We also analyzed changes in the use of a subset of pesticides identified by EPA as the riskiest to human health and the environment. To determine whether there are impediments that limit IPM adoption and realization of its potential benefits, we checked for USDA management-level impediments to effectively promoting IPM, as well as for farm-level impediments to adopting IPM practices. In assessing any management-level impediments, we compared early documentation from USDA and EPA about the IPM initiative’s objectives and management strategies with progress toward implementing those objectives and strategies. We discussed the causes of any shortcomings with representatives from the various agencies involved in the IPM initiative, as well as with other IPM stakeholders. 
To assess any farm-level impediments, we interviewed and obtained supporting documentation from individual growers, commodity group representatives, private crop consultants, the Cooperative State Research, Education, and Extension Service’s state IPM coordinators, and the Agricultural Research Service’s Office of Technology Transfer, in addition to the government officials listed above. We conducted our review from September 2000 through June 2001 in accordance with generally accepted government auditing standards. Appendix II: Sampling Error of Estimates From the National Agricultural Statistics Service’s Integrated Pest Management Survey The estimated percentage of acres under IPM practices for crop year 2000 that we provided in table 1 was developed by the National Agricultural Statistics Service from a survey of farmers. Because the survey covered a sample of farmers rather than all farmers, the estimates are subject to sampling error. We obtained from the National Agricultural Statistics Service the information needed to estimate the sampling error, at a 95-percent confidence level, for USDA’s IPM estimates by crop. For the estimates of combinations of crops and pest management practices in table 1, the service provided general information about the reliability of the estimates but did not provide the information needed to compute the sampling error for each estimate. The sampling errors for USDA’s year 2000 IPM estimates by crop ranged from 3 to 17 percent. The smallest sampling error was for soybeans; the estimated percentage of acres under IPM was 78 percent plus or minus 3 percent. The largest sampling error was for fruits and nuts; the estimated percentage of acres under IPM was 62 percent plus or minus 17 percent. Based on information provided by the National Agricultural Statistics Service, the sampling errors for the biologically-based IPM practices in table 1 vary by crop and can be large relative to the estimate. 
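The interval arithmetic behind the sampling errors quoted above is straightforward. The short sketch below is illustrative only and is not part of the National Agricultural Statistics Service's methodology; it simply applies the plus-or-minus bounds to the soybean and fruit-and-nut estimates cited in this appendix.

```python
def confidence_interval(estimate_pct, error_pct):
    """Return the (lower, upper) bounds of a 95-percent confidence
    interval, given a survey estimate and its sampling error, both
    expressed in percentage points."""
    return (estimate_pct - error_pct, estimate_pct + error_pct)

# Soybeans: 78 percent of acres under IPM, plus or minus 3 points.
print(confidence_interval(78, 3))   # (75, 81)

# Fruits and nuts: 62 percent, plus or minus 17 points.
print(confidence_interval(62, 17))  # (45, 79)
```

As the fruit-and-nut case shows, a 17-point sampling error leaves a wide range of plausible values, which is why large errors relative to the estimate limit what the survey data can support.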
For practices that are not commonly used, the sampling error could be twice as large as the estimate. The National Agricultural Statistics Service indicates that these practices generally have insufficient data for publication. For more commonly used pest management practices, the sampling error for the national-level estimates ranges from about 2 to 40 percent of the estimate. For example, if the estimate that 15 percent of the cotton acres were planted in crop varieties genetically modified to resist insects had a sampling error that was 40 percent of the estimate, the sampling error of the estimate would be 40 percent of 15 (6 percentage points). Given an estimate of 15 percent with a sampling error of 6 percentage points, we could feel confident that between 9 and 21 percent (15 percent plus or minus 6 percent) of all cotton acreage was planted in varieties genetically modified to resist insects. Appendix III: USDA’s IPM Categories and Survey Questions This appendix contains information from USDA’s National Agricultural Statistics Service’s Pest Management Practices 2000 Summary. Prevention practices keep a pest population from infesting a crop or field. Prevention includes such tactics as using pest-free seeds and transplants, preventing weeds from reproducing, choosing cultivars with genetic resistance to insects or disease, scheduling irrigation to avoid situations conducive to disease development, cleaning tillage and harvest equipment between fields or operations, sanitizing fields, and eliminating alternate hosts or sites for insect pests and disease organisms. The following survey questions measured prevention practices:
Did you clean tillage or harvesting implements after completing fieldwork for the purpose of reducing the spread of weeds, diseases or other pests?
Did you remove or plow down crop residues to control pests?
Did you use practices such as tilling, mowing, burning, or chopping of field edges, lanes, ditches, roadways or fence lines to manage pests?
Did you use water management practices, such as controlled drainage or irrigation scheduling, excluding chemigation, to control pests?
Avoidance practices are used when pest populations exist in a field or site but the impact of the pest on the crop can be avoided through some cultural practice. Examples of avoidance tactics include rotating crops so that the crop of choice is not a host for the pest, choosing cultivars with genetic resistance to pests, using trap crops, choosing cultivars with maturity dates that may allow harvest before pest populations develop, promoting rapid crop development through fertilization programs, and simply not planting certain areas of fields where pest populations are likely to cause crop failure. Prevention and avoidance strategies may overlap. The following survey questions measured avoidance practices:
Did you use any crop varieties that were genetically modified to be resistant to insects (Bt, etc.)?
Did you adjust planting or harvesting dates to control pests?
Did you rotate crops for the purpose of controlling pests?
Did you use any crop varieties that were genetically modified to be resistant to plant pathogens or nematodes causing plant diseases?
Did you choose planting locations to avoid cross infestation of insects or disease?
Did you grow a trap crop to help control insects?
Monitoring practices include proper identification of pests through surveys or scouting programs, including trapping and soil testing where appropriate. The following survey questions measured monitoring practices:
Was this crop scouted for pests (weeds, insects or disease) using a systematic method?
Did you use field mapping of previous weed problems to assist you in making weed management decisions?
Did you use soil analysis to detect the presence of soil-borne pests or pathogens?
Did you use pheromones to monitor the presence of pests by trapping?
Did you use weather monitoring to predict the need for pesticide applications?
Suppression practices include cultural practices such as narrow row spacings, optimized in-row plant populations, no-till or strip-till systems, and cover crops or mulches. Physical suppression tactics may include mowing for weed control, baited traps for certain insects, and temperature management or exclusion devices for insect and disease management. Chemical pesticides are an important suppression tool, and some use will remain necessary. However, pesticides should be applied as a last resort in suppression systems. Biological controls, such as pheromones to disrupt mating, could be considered as alternatives to conventional pesticides, especially where long-term control of an especially troublesome pest species can be obtained. The following survey questions measured suppression practices:
Did you use any crop varieties that were genetically modified to be resistant to specific herbicides (Roundup Ready, Liberty Link, Poast-Protected corn, STS soybean, IT corn)?
Did you use scouting data and compare it to university or extension guidelines for infestation thresholds to determine when to take measures to control pests?
Did you use beneficial organisms (insects, nematodes or fungi) to control pests?
Did you use topically applied biological pesticides such as Bt (Bacillus thuringiensis), insect growth regulators, neem, or other natural products to control pests?
Did you maintain ground covers, mulches or physical barriers to reduce pest problems?
Did you adjust row spacing, plant density or row direction to control pests?
Did you alternate pesticides to keep pests from becoming resistant to pesticides (use pesticides with different mechanisms of action)?
Did you use pheromones to control pests by disrupting mating?
Appendix IV: Comments From the Department of Agriculture

Appendix V: Comments From the Environmental Protection Agency

Appendix VI: GAO Contact and Staff Acknowledgments

GAO Contact

Acknowledgments

In addition to the individual above, Chuck Barchok, Patricia J. Manthe, Terrance N. Horner, Jr., Donald J. Sangirardi, Karen Bracey, and Cynthia Norris made key contributions to this report.
Chemical pesticides play an important role in providing Americans with an abundant and inexpensive food supply. However, these chemicals can have adverse effects on human health and the environment, and pests continue to develop resistance to them. Sustainable and effective agricultural pest management will require continued development and increased use of alternative pest management strategies, such as integrated pest management (IPM). Some IPM practices yield significant environmental and economic benefits in certain crops, and IPM can lead to better long-term pest management than chemical control alone. However, the federal commitment to IPM has waned over the years. The IPM initiative is missing several key management elements identified in the Government Performance and Results Act. Specifically, no one is effectively in charge of federal IPM efforts; coordination of IPM efforts is lacking among federal agencies and with the private sector; the intended results of these efforts have not been clearly articulated or prioritized; and methods for measuring IPM's environmental and economic results have not been developed. Until these shortcomings are addressed, the full range of potential benefits that IPM can yield for producers, the public, and the environment is unlikely to be realized.
Background

Since 1955, the executive branch has encouraged federal agencies to obtain commercially available goods and services from the private sector when the agencies determined that such action was cost-effective. OMB formalized the policy in its Circular A-76, issued in 1966. In 1979, OMB supplemented the circular with a handbook that included procedures for competitively determining whether commercial activities should be performed in-house, by another federal agency through an Interservice Support Agreement, or by the private sector. OMB has updated this handbook three times since 1979. An extensive revision to Circular A-76 was issued on May 29, 2003, based in part on the recent work of the congressionally mandated Commercial Activities Panel. Under the newly revised circular, agencies may convert commercial activities to or from contractor performance through a public-private competition, whereby the estimated cost of public or private performance of the function is evaluated against published selection criteria in accordance with the principles and procedures outlined in the circular. As part of this process, the government identifies the work to be performed in a “performance work statement,” prepares an in-house offer that includes its most efficient organization, and compares all the offers against each other and the selection criteria. The revised circular provides several alternative procedures for conducting source selections, only one of which allows agencies to make a selection based on other than the lowest cost technically acceptable offer. The four source selection alternatives are: sealed bid, lowest price technically acceptable, phased evaluation, and, in certain cases, trade-off (which permits agencies to weigh cost and non-cost factors). Administrative and legislative constraints from the late 1980s through 1995 resulted in a lull in, and at times a moratorium on, the award of contracts resulting from A-76 competitions.
In 1995, congressional and administration initiatives placed more emphasis on A-76 as a means of achieving greater economies and efficiencies in operations, and beginning about that time DOD gave renewed emphasis to the use of competitive sourcing under Circular A-76. More recently, competitive sourcing has received governmentwide attention as one of the five initiatives of the President’s Management Agenda for fiscal year 2002. DOD has been a leader among federal agencies in using A-76 in recent years. The revised circular requires agencies to prepare two annual inventories that categorize all activities performed by government personnel as either commercial or inherently governmental. A similar requirement was included in the 1998 Federal Activities Inventory Reform (FAIR) Act, which directs agencies to develop annual inventories of their positions that are not inherently governmental. DOD’s 2000 FAIR Act inventory identified nearly 453,000 in-house civilian positions engaged in a variety of commercial activities, nearly 260,000 of which have been, or are, subject to competition or direct conversion under Circular A-76. The number of positions subject to A-76 is less than the total number of positions in commercial activities because DOD excluded certain commercial activities from being considered eligible for competition for statutory, national security, or operational reasons. Under the President’s Management Agenda, OMB has directed agencies to directly convert or compete through cost comparison studies 15 percent of their total fiscal year 2000 inventories of commercial activities by the end of fiscal year 2003, with the ultimate goal of competing at least 50 percent of their inventories by the end of fiscal year 2008.
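The competitive sourcing targets above are simple percentages of the fiscal year 2000 inventory. As a rough, illustrative sketch only (using the inventory figures reported in this section, and assuming for simplicity that OMB's percentages apply to the full commercial-activities inventory rather than only the A-76-eligible subset), the goals work out as follows:

```python
# Back-of-the-envelope sketch using the figures reported above.
# Assumption: OMB's 15%/50% targets apply to the full FY2000
# commercial-activities inventory; actual program rules may differ.
total_commercial = 453_000  # DOD in-house civilian positions in commercial activities
subject_to_a76 = 260_000    # positions subject to competition or direct conversion

target_fy2003 = round(total_commercial * 0.15)  # 15 percent by end of FY2003
target_fy2008 = round(total_commercial * 0.50)  # at least 50 percent by end of FY2008
eligible_share = subject_to_a76 / total_commercial

print(f"FY2003 target: {target_fy2003:,} positions")
print(f"FY2008 target: {target_fy2008:,} positions")
print(f"Share of inventory subject to A-76: {eligible_share:.0%}")
```

Under these assumptions, the 50 percent goal (roughly 226,500 positions) approaches the approximately 260,000 positions DOD identified as subject to A-76, which helps explain the department's interest in sourcing alternatives.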
In providing guidance for determining whether activities, functions, and associated positions are considered inherently governmental in nature, DOD has sometimes equated the term “inherently governmental” with the somewhat parallel term “core.” While use of the term “core” is associated with the private sector, DOD has sometimes used it to designate essential military and civilian positions required for military and national security reasons. The old A-76 Handbook provided yet another, but similar, meaning for core: in the context of A-76, core capability was defined as “a commercial activity operated by a cadre of highly skilled employees, in a specialized technical or scientific development area to ensure that a minimum capability is maintained.” The concept of core in DOD has also been associated with legislative requirements to establish core logistics capabilities in government-owned military maintenance depots. This process is based on a requirement in 10 U.S.C. 2464 to identify and maintain, within government-owned and -operated facilities, a core logistics capability, including the equipment, personnel, and technical competence required to maintain weapon systems identified as necessary for national defense emergencies and contingencies. Regardless of usage, determinations of core and inherently governmental functions within DOD have often been viewed as somewhat subjective. The term “core function” has recently gained increased and more expanded use within DOD, beginning with the publication of DOD’s September 2001 Quadrennial Defense Review Report, which recommended the identification of core and non-core functions. According to the report, “only those functions that must be performed by DOD should be kept by DOD.
Any function that can be provided by the private sector is not a core government function.” The test to separate core from non-core functions, according to the report, is whether a function is directly necessary for warfighting. Further emphasis on assessing core functions subsequently came from DOD’s Senior Executive Council, which, in April 2002, launched a departmentwide effort to distinguish between core and non-core functions, with an emphasis on retaining in-house only those functions deemed core to the warfighting mission. Under this approach, the Council tasked the defense components with developing plans to transition non-core functions to alternative sourcing arrangements or A-76 studies, if appropriate, as soon as possible. In advocating the use of alternatives, the Senior Executive Council noted that A-76 cost comparisons were lengthy, expensive, and hard on the workforce. Examples of alternate sourcing strategies cited by the Council included public-private partnering, employee stock ownership, and quasi-governmental organizations. Details about these and other alternatives can be found in appendix I. While use of A-76 studies was still permitted, emphasis was expected to be given to identifying alternate sourcing approaches that might be used to transfer non-core functions out of the department. Much of the publicity surrounding this new core emphasis involved the Army’s efforts under the program it designated “the Third Wave.” The term “Third Wave” was used to distinguish this current effort from two previous sourcing efforts under A-76, the first occurring largely in the 1980s and the second beginning in the 1996-97 time period. Unlike the earlier two waves, which focused on A-76 studies of about 25,000 and 33,000 positions, respectively, the scope of the Third Wave was to be significantly larger, potentially involving over 200,000 positions. This was of significant concern to government employees after several years of A-76 study efforts within DOD.
The Army’s program also received much public attention because of what Army officials have characterized as an unrelated, but parallel, effort to have a contractor (RAND) study options for rethinking governance of the Army’s arsenals and manufacturing plants. The Army has subsequently indicated it does not plan to pursue the options outlined in that study, which ranged from privatization to creation of a federal government corporation to operate these facilities. On March 24, 2003, the Secretary of the Army directed that other action plans be developed to deal with these facilities. (See app. II for a summary of the actions directed.)

Progress in Assessing Core Functions Has Varied Across the Defense Components and Has Been Affected Somewhat by Definitions of “Core”

Progress in assessing core functions has been varied and limited across the major Defense components and has been affected by somewhat ambiguous and subjective definitions of what constitutes a “core function.” These multiple and ambiguous definitions have made it difficult for the components to easily employ the core competency approach to decision-making, and some DOD components have sought additional guidance and/or applied their own criteria to identify core functions. Even so, progress has varied across the components, with the Army and the Air Force having made the most headway. In addition, the Army, which has devoted the greatest attention to assessing core functions, has found that distinguishing between core and non-core functions, by itself, has limited value because that distinction alone does not necessarily prescribe a sourcing decision.
Guidance in Defining Core Has Been Broad and Additional Guidance Sought

DOD guidance for defining a core function under the new program emphasis has been broad and, as a result, there are multiple and somewhat ambiguous definitions of “core,” leading some DOD components to seek additional guidance. The term “core” has had different meanings depending upon the context in which it was used. Moreover, there has been, and remains, a significant amount of subjectivity in defining “core,” as there has been with the term “inherently governmental.” Recognizing the potential difficulty in applying the core competency-based approach, the Senior Executive Council provided several definitions of “core,” as well as criteria for determining core competencies, in its April 2002 implementing memo. As a starting point for its core-competency emphasis, a work group commissioned by the Senior Executive Council chose a business concept outlined in a 1990 Harvard Business Review article. The article provides several examples of corporations that identified their core competencies, helping them become more successful than their competitors. The authors likened a diversified corporation to a business tree: the trunk and major limbs are core products, and the smaller branches are business units. While admitting this concept is difficult to apply to DOD, the Senior Executive Council nonetheless translated the business tree to a military application: the core services were described as the set of activities that actually contribute to the value of the end product (land, sea, and air operations), the business units were the units of a component command, the end products were military effects, and the customer was the combatant commander employing forces and resources.
In adapting the definition of “core” from the Harvard Business Review article to the DOD environment, the Senior Executive Council defined core as “a complex harmonization of individual technologies and ‘production’ (employment, delivery) skills that create unique military capabilities valued by the force employing them.” Several additional definitions were provided in the Council’s April 2002 memo to help clarify the reader’s understanding of the definition (see app. III). According to the memo, however, three themes are common to each definition: (1) the knowledge and experience acquired by people, (2) the discrete and finite set of technologies the people employ, and (3) the business objectives to be achieved. It stated that DOD’s business objective to be achieved is warfare. The Senior Executive Council’s memo also provided some criteria for determining core competencies. According to the Council, a core competency:
has potential application to a wide variety of national security needs,
provides a significant contribution to the combatant commander,
would be difficult for competitors to imitate,
provides the means to differentiate from competitors,
crosses organizational boundaries within an enterprise,
is a direct contributor to the perceived value of the service,
does not diminish with use,
deploys with forces, and
provides training and experience that forms the basis of ethos and culture.
The memo also noted that these are not “pass/fail” criteria; some may help to identify core competencies while others may not, and the criteria are based on business concepts that have been adapted to the military domain. Furthermore, the memo stressed the importance of senior leadership judgment in identifying core competencies.
According to various officials, the lack of a clear and concise definition of the terms related to the core concept initially made it difficult for the Army and Air Force to apply the concept to their functions. Both services have subsequently supplemented the Senior Executive Council definitions with their own internal documents and specific guidance, which are discussed in the next sections. That notwithstanding, the definition of core remains somewhat broad and subjective, and will likely remain so. The Navy and Marine Corps have only recently begun their efforts to identify core functions and have not yet sought to develop additional guidance. A Defense Logistics Agency official told us the agency did not use any additional guidance. DOD and service officials told us that while the concepts “inherently governmental” and “core” are similar and may overlap, they are not always the same: not all inherently governmental functions would be considered core, nor would all core functions be designated inherently governmental. For example, according to Army analysis, many civil functions performed by the Army Corps of Engineers, such as wetlands regulation and eminent domain authority, are inherently governmental, but they are not core to the Army’s mission. Conversely, we were told, certain medical services provided by doctors and nurses in the operating forces are not deemed to be inherently governmental; however, these services are considered core to the Army’s mission.

Progress on Identifying Core Functions Has Varied

The Senior Executive Council directed the services and defense agencies to inventory their organizations and identify their core functions, but only the Army and Air Force have made much progress in doing so. The Army took the lead in pursuing this initiative and has recently completed an effort to identify its core and non-core functions.
The Air Force also initiated a core competency review, which focused predominantly on military positions. The Navy and Marine Corps are in the early stages of assessing their core functions. The Defense Logistics Agency broadly identified its core and non-core competencies but has not identified specific positions as core or non-core.

Army Efforts Recently Completed

The Army has recently completed an effort to identify its core and non-core functions for over 200,000 positions. Initially, the Army’s Third Wave program assumed that all commercial positions were non-core and thus potential candidates for performance by the private sector or other government agencies. However, it permitted its components to request exemption from the non-core designation and, as a result, considered appeals involving numerous functional areas; some were sustained while others were not. The results of this process differed somewhat from the Army’s initial expectation that all non-core functions could be subject to competition or alternate sourcing, and the number of positions likely to be subject to alternate sourcing is not yet clear. In permitting its components to present a case for functions to be exempt from the non-core designation, the Army provided specific guidance on the submission of exemption requests and the factors to be used to evaluate them. An exemption request needed to provide a compelling case that a non-core designation would pose substantial and specific risks to core warfighting missions or would violate a statutory requirement affecting a function. The Army components submitted 24 requests for exemption from non-core designation, each representing one or more broad functional areas. For example, these areas included civilian personnel, installation management, law enforcement and criminal investigations, and both military and civilian career progression activities.
The Army’s approval authority for reviewing core-competency exemption requests was the Assistant Secretary of the Army for Manpower and Reserve Affairs. In evaluating the exemption requests, the Office of Manpower and Reserve Affairs supplemented the Senior Executive Council’s definitions of core with six core competencies identified by the Army in Army Field Manual 1 and other documents. The six competencies were depicted as:
Shape the security environment—provide a military presence.
Prompt response—provide a broad range of land power options to shape the security environment and respond to natural or manmade crises worldwide.
Forcible entry operations—provide access to contested areas worldwide.
Mobilize the Army—provide the means to confront unforeseen challenges and ensure America’s security.
Sustained land dominance—provide capabilities to control land and people across various types of conflicts.
Support civil authorities—provide support to civil authorities in domestic and international contingencies, including homeland security.
After evaluating the appeals, the Army sustained some exemption requests and denied others; in many instances, however, it rendered a mixed decision regarding individual functions within a broad functional area. This is illustrated by the Army’s determination of core competencies for two functions: medical services and information resources. In making its decisions, the Army determined that medical activities could be considered core in some circumstances and non-core in others. The Army also found that, in some cases, functions considered core—such as information resources—contained elements that were designated non-core. The Army determined that many medical functions are core to the Army’s mission even though they are not classified as inherently governmental.
The Army recognizes that performing medical functions does not require unique military knowledge or skills or recent experience in the operating forces. However, for troops deployed in theater (i.e., a war zone), medical functions need to be performed by in-house personnel because reliance on host nation contracting for medical support could place significant risks on Army forces. The Army has determined that the in-theater medical mission is a critical element of its ability to accomplish its core competencies. Even so, certain functions within the medical area can be considered both core and non-core. For example, the optical fabrication function—the ability to produce eyewear (replacement spectacles and protective mask inserts)—is considered a core competency when performed in support of the operational forces close to the point of need in the area of engagement. However, this same function performed in the United States is not considered a core competency, and the Army states that it may be reviewed for divestiture or privatization. Within the information resources function, the Army considers the management of information resources in a network-centric, knowledge-based workforce to be a core warfighting competency. This core competency includes information operations that support operating forces and utilizes commercial technology adapted for military applications. Organizations and personnel performing functions that ensure command, control, and communications interoperability across Army, joint, interagency, and coalition forces are core and need to be kept in-house. However, other information resource functions—such as help-desk services—are deemed non-core and can be considered for possible outsourcing.
Army officials said they recognized that once a function was determined to be core or non-core to the Army’s mission, sourcing the function would, in many instances, require additional analysis to determine the amount of core capability to be kept in-house and the risk the Army might face by sourcing the function. The types of risk to be considered in evaluating impacts on a core mission are force management, operational, future challenges, and institutional. Additional factors must also be considered. For example, the Army determined that its casualty and mortuary affairs function is neither a core mission nor an inherently governmental function; however, national policy dictates that Army officials notify families of a casualty in person. Overall, the Army found the results of its review somewhat contrary to its, and the Senior Executive Council’s, initial expectation that all non-core functions should be subject to competition or alternative sourcing. As noted previously, the Army found that a designation of “core” does not necessarily mean that military or government civilian performance is required, nor does it necessarily preclude competitive sourcing of the function; likewise, a designation of “non-core” does not automatically mean that a function can, or should, be contracted out—other factors must also be considered. As a result, there is some uncertainty regarding how, and to what extent, the results of the Army’s core analyses will be used in sourcing decisions, which potentially has implications for other Defense components as well. While the Army is still deciding how to proceed with implementing the results of its core assessments, Army officials told us that the core decisions would be reflected in the Army’s 2003 FAIR Act inventory.

Air Force Efforts Focus on Military Positions

The Air Force focused its initial core competency review predominantly on military positions.
This was done because the Air Force wanted to identify functions performed by military personnel that might be realigned for civilian or contractor performance, thus permitting affected military personnel to be reassigned to operational areas where shortages of military personnel existed. All military positions were reviewed in terms of three main core competencies and six distinctive capabilities. The three institutional core competencies were depicted as:
Developing Airmen (the heart of combat capability).
Technology to Warfighting (the tools of combat capability).
Integrating Operations (maximizing combat capability).
The six distinctive Air Force capabilities also considered were those related to:
Precision engagement—the ability to locate the objective or target, provide responsive command and control, generate the desired effect, assess the level of success, and retain the flexibility to reengage.
Rapid global mobility—the ability to rapidly and flexibly respond to the full spectrum of contingencies worldwide.
Information superiority—the ability to collect, control, exploit, and defend information while denying the adversary the same.
Agile combat support—the ability to provide combat support in a responsive, deployable, and sustainable manner.
Air and space superiority—the ability to establish control over the entirety of air and space, providing freedom from attack and freedom to attack.
Global attack—the ability to find, fix, and attack targets anywhere on the globe.
Although the core competency review process involved some subjective judgment, each position was classified into one of three basic categories: those (1) requiring military performance, (2) requiring government civilian performance, and (3) available for contractor consideration. As a result of this review, 17,800 military positions were identified for potential conversion to either government civilian or contractor civilian positions.
Our prior work has identified various instances in which personnel costs are generally lower for civilian than for military personnel. An additional 4,477 military positions were identified for possible future realignment through other reengineering efforts, such as adjusting the manpower requirements process and conducting a business case analysis for alternative installation support practices, for a total of 22,277 military positions. Because many of the functions reviewed involved both military and civilian personnel, an additional 8,900 Air Force civilian positions were identified for possible conversion to contractor performance. An Air Force official stated that the service hopes to conduct a more in-depth review on the civilian side in the future; however, none is currently planned. The Air Force expects that the number of positions that can be competed in its FAIR Act inventory will increase as a result of this review. In the near term, as a direct result of the core function review, the Air Force has indicated it plans to outsource a significant portion of the workload of its Pentagon Communications Agency, currently performed by over 400 military personnel. Although Air Force officials indicated the service has the resources to implement this action, other efforts may have to be postponed until funds are available. To move military positions to operational warfighting positions, additional government civilian or contractor personnel would be needed to replace the military personnel. Air Force officials told us that moving military personnel out of non-core functions is a high priority, but because of the high cost of adding funds to the operations and maintenance appropriation account to pay for replacement civilian or contractor positions, it is currently an unfunded priority. They recently estimated this additional cost at about $5 billion over the next 5 years.
Moreover, in its internal budget planning documents for fiscal year 2004, the Air Force stated that its number one unfunded priority is funding ($2.34 billion) for moving the initial 6,300 military positions out of non-core functions. As a result, it is not yet clear to what extent a larger number of conversions would take place and whether they might involve direct conversions or be done as part of public-private competitions using the A-76 process.

Other DOD Component Efforts Are Not as Advanced

As mentioned earlier, the Marine Corps has recently begun its effort to identify core functions and has convened a working group to determine how to proceed. The Secretary of the Navy tasked the Navy components with determining their core competencies on April 18, 2003, so this effort is still in its infancy. The Defense Logistics Agency has identified four core competencies—customer knowledge, integrated combat logistics solutions, rapid worldwide response, and single face to industry and customers. In addition, it identified 10 non-core competencies: base operations; warehousing services; transportation services; document automation, printing, and production services; marketing of unneeded materiel; computer application software; computer operations and database management support; cataloging; payroll services; and civilian personnel services. However, it has not determined which positions are considered core.

Some Progress Made in Identifying Alternative Sourcing Arrangements, but the Extent to Which Alternatives Are Likely to Be Used Is Unclear

The range of alternatives to A-76 likely to be pursued under the core competency-based approach is not yet clear, given limitations in the core analyses, but DOD has made some progress toward identifying and/or using sourcing arrangements that are alternatives to A-76.
Some alternatives were identified as part of an initiative to explore them through the use of pilot projects, and a few others have been identified by the services as they have focused on the core initiative. At the same time, some DOD officials indicated that the use of certain alternatives could be limited without special legislative authorities and/or the repeal of various existing prohibitions. The use of alternative sourcing could also be affected by the emphasis on A-76 competitions and OMB’s goals for the department.

Alternate Sourcing Approaches Identified through Pilot Projects and Other Initiatives

DOD has made some progress in identifying and using sourcing arrangements that are alternatives to A-76, including some identified through the pilot project initiative and a few others identified by the services as they have focused on the core initiative. These projects are in various stages of implementation. DOD’s Senior Executive Council and Business Initiative Council asked the components to identify and submit at least one pilot or “pioneer” project to provide alternative sourcing methods for widespread implementation. Ten projects were approved by the Business Initiative Council and then submitted to OMB for approval. OMB approved eight projects in August 2002, and the department later withdrew two projects because the timing was not appropriate. The following table provides a listing of the 10 Pioneer Projects. (A description of the ongoing pioneer projects can be found in app. IV.) The projects propose to use a variety of alternatives, including partnering and divestiture, and are in varying stages of implementation, as noted in appendix IV. For example, the Army previously developed a partnership with the city of Monterey, California, to provide municipal services needed for the operation of DOD assets in Monterey County.
Because of the success of this project, the Army submitted legislation to Congress that would allow contracting for municipal services defense-wide. In another example, the Navy has identified optical (eyewear) fabrication as a potential candidate for divestiture because that service is readily available in the private sector. However, this project is still in the conceptual phase, and no decision will be made until a thorough analysis has been completed to determine the most appropriate sourcing method. DOD was required to seek OMB approval of these Pioneer Projects to determine whether they would count toward OMB’s competitive sourcing goals. The criteria for OMB approval required that projects involve an element of divestiture, competition, or the transfer of responsibility to other private or public sector performers. The two Pioneer Projects that OMB did not approve had proposed using reengineering or the development of most efficient organizations as an alternative to A-76 competition; they were not approved because they neither involved the divestiture of responsibility for performing the function nor contained a near-term element of competition. DOD officials withdrew two others because they believed the timing was not appropriate for those actions. In responding to OMB’s draft of its most recent revision to Circular A-76, we stressed the importance of considering alternative approaches to accomplishing agency missions. Such approaches encompass a wide range of options, including restructuring, privatizing, transferring functions to state and local governments, terminating obsolete functions, and creating public-private partnerships. Given that these options can result in improved efficiency and enhanced performance, we recommended at that time that OMB continue to encourage agencies to consider these and other alternatives to A-76 competition.
The revised circular allows agencies to deviate from certain requirements of the circular with prior written approval from OMB. For example, agencies are permitted to explore innovative alternatives, including public-private partnerships, public-public partnerships, and high-performing organizations, with prior written approval from OMB for a specific competition. In addition to these Pioneer Projects, some other initiatives to use an alternate sourcing approach have emerged within the military services. For example, the department plans to transfer its personnel security investigations function, now performed by the Defense Security Service, to the Office of Personnel Management. In another instance, the Secretary of the Army recently determined that the long-term incarceration of prisoners was not a core competency of the Army. The department is in the process of finalizing plans for transferring its military-dedicated prison at Fort Leavenworth, Kansas, to the Federal Bureau of Prisons. Although exact savings from this transfer have not yet been determined, an Army official stated that transferring the facility to the Bureau of Prisons would free up almost 500 military positions. In addition, Army officials believe it will allow for efficiency gains because the cost to incarcerate a prisoner per year by the Bureau of Prisons is expected to be less than half what it costs the Army to do so. Potential Limitations on Use of Alternatives Exist The Senior Executive Council has charged the services with identifying and using sourcing arrangements that are alternatives to A-76 for their non-core functions; however, DOD and the services have encountered potential limitations to their efforts. These include legislative impediments and the requirement to support the President’s Management Agenda by meeting the competitive sourcing goals of OMB.
Legislation Can Limit Use of Alternatives Various officials in the Office of the Secretary of Defense and the services expressed uncertainty over the extent to which existing legislative prohibitions or the lack of legislative authority could limit the pursuit of some alternatives. They noted existing prohibitions such as those contained in 10 U.S.C. § 2461 and section 8014 of the annual appropriations acts, which require public-private competition in all but a few circumstances. In citing areas where legislation might be needed, they noted that to complete the planned transfer of the personnel security investigative functions to the Office of Personnel Management, DOD recently submitted a legislative request to Congress seeking authority to do so as part of its legislative package known as the Defense Transformation for the 21st Century Act of 2003. Specifically, the legislation would allow DOD to transfer this non-core function to the Office of Personnel Management, which would allow for consolidation of requests for security clearances under this agency. By contrast, Army officials told us that, in the initiative to transfer its Fort Leavenworth prison to the Federal Bureau of Prisons, they did not believe special authorizing legislation was required. They believe DOD is not required, by statute, to maintain prisoners in DOD facilities and may use any facility under the control of the U.S. government. DOD officials have also requested some legislative relief to implement some initiatives that they have already identified. For example, DOD has requested the repeal of 10 U.S.C. § 2465 to allow the department to bid and compete contracts for security guard services and for the performance of firefighting functions at military installations in the continental United States. DOD believes such contracts would be cost-effective and would provide needed flexibility in exigent situations, such as those arising from the September 11, 2001, attacks.
In another case, DOD has sought legislative authority to contract directly with local governments for municipal services based on the success of its Pioneer Project in Monterey, California. Doing so would allow DOD components to use this type of arrangement at other locations, as appropriate. Supporting the President’s Management Agenda May Limit Use of Alternatives The department, in attempting to meet OMB’s goals to conduct A-76 competitions, is unlikely to pursue alternative sourcing on a large scale. One of the five governmentwide initiatives in the President’s Management Agenda is competitive sourcing. Under this initiative, OMB has directed agencies to compete 15 percent of positions deemed commercial in their fiscal year 2000 FAIR Act inventories by the end of fiscal year 2003, with the ultimate goal of 50 percent by the end of fiscal year 2008. For DOD, this represents approximately 226,000 positions. Although OMB has recently allowed some alternative sourcing methods that contain an element of competition to be counted toward meeting these goals, DOD expects that the vast majority of positions will be competed under A-76 competitions. Positions competed under A-76, of course, would not be available for consideration for alternative sourcing methods. While the department initially placed a priority on identifying alternative sourcing arrangements, the most recent department guidance is less clear regarding the priority of alternate sourcing arrangements over A-76 competitions. The Business Initiative Council recently directed the defense components to submit the status of their core competency reviews and detailed competitive sourcing plans—including both A-76 and alternatives to A-76—by June 2, 2003. The Business Executive Council will review these plans in preparation for the fiscal 2005-2009 preliminary budget review. Details on these plans were not available at the time we completed our review. 
DOD Expected to Maintain an Active A-76 Competitive Sourcing Program Limited progress in implementing the core competency-based approach, coupled with OMB’s emphasis on the use of A-76 in conjunction with the President’s Management Agenda, suggests that the use of A-76 may remain a key vehicle for sourcing decisions involving non-core and non-inherently governmental functions. Nonetheless, despite its experience in implementing competitive sourcing, the department faces a number of challenges related to its A-76 program. OMB Has Established Ambitious A-76 Program Goals for DOD OMB has established ambitious A-76 competitive sourcing program goals for the department to meet in both the short term and the long term, even while DOD is focusing on its core competency approach. The department’s A-76 goals for the number of positions to be studied and the time frames for accomplishing those studies have varied over time, reaching a high in 1999 of studying 229,000 positions between 1997 and 2005. However, DOD experienced difficulty in identifying eligible functions for study and, consequently, in 2001 reduced the goal to studying 160,000 positions between 1997 and 2007. Recently, DOD’s study goals have increased because of OMB’s competitive sourcing goals. To meet OMB’s goal of directly converting or studying 15 percent of the 453,000 commercial activity positions identified in the 2000 FAIR Act inventories by the end of fiscal year 2003, DOD would need to complete A-76 studies on about 68,000 positions between fiscal year 2000 and the end of fiscal year 2003. Then, to meet the larger goal of 50 percent, DOD would need to study an additional 158,000 positions in the out years (fiscal years 2004-08). This represents a total of 226,000 positions to be studied, far more than DOD has been able to complete in a similar time period. Figure 1 illustrates OMB’s goals for DOD compared to what DOD had completed at the end of fiscal year 2002.
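As a rough check, the position goals cited above are mutually consistent under the rounding used in this report (this arithmetic is ours, not a calculation presented by DOD or OMB):

```latex
\begin{aligned}
0.15 \times 453{,}000 &\approx 68{,}000 && \text{(15-percent goal, by end of FY 2003)}\\
0.50 \times 453{,}000 &\approx 226{,}000 && \text{(50-percent goal, by end of FY 2008)}\\
226{,}000 - 68{,}000 &= 158{,}000 && \text{(additional positions, FY 2004--08)}
\end{aligned}
```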
The strength of DOD’s A-76 program is shown in the number of positions announced or planned for study, those completed, and those still ongoing. Table 2 provides data on the number of positions the department has announced for study under its A-76 program since its resurgence in 1997. The number of positions planned for study by year for each component for fiscal years 2003-08 was not available, but it would appear to require far more announcements per year than were made in recent years. The services are currently determining the number of positions they plan to study in future years, including the number of military and civilian positions to be studied, and are required to submit preliminary data to the Office of the Secretary of Defense by June 2, 2003. However, as noted, the number of positions that would have to be studied in fiscal years 2004-08 to meet OMB’s target for DOD is 158,000. Table 3 shows the number of positions completed in A-76 studies since 1997. Of the total, 48,921 were civilian positions and 19,336 were military positions. Table 4 shows the number of positions being reviewed in ongoing A-76 studies. Of the total, 23,766 are civilian positions and the remaining 2,622 are military positions. As shown in table 3 above, DOD had already studied over 68,000 positions through fiscal year 2002, although OMB did not count approximately 14,000 positions contained in A-76 studies completed during fiscal years 1997-99 toward the 15-percent goal because the positions studied were not derived from DOD’s 2000 FAIR Act inventory. Nonetheless, OMB permitted use of nearly 54,000 of the positions for which DOD subsequently completed studies, leaving the department approximately 14,000 positions to study by the end of fiscal year 2003. DOD recently reported that it has met its 15-percent goal by completing competitions covering more than 71,000 positions between October 1, 1999, and June 1, 2003.
DOD hopes to reach agreement with OMB to meet its additional 158,000-position study requirement through a combination of A-76 studies and alternatives to A-76, and to change the period of study from fiscal years 2004-08 to fiscal years 2005-09. Regardless, this longer-term goal could be a challenge, requiring studies of significantly more positions than have actually been completed in similar periods in the past. For example, between fiscal years 1997 and 2002, DOD completed competition studies for about 68,000 positions. Under the new goals, DOD would be required to complete studies involving 158,000 positions during the 5-year period of fiscal years 2004-08. This is more than double what DOD has been able to complete in the past during a similar time frame. DOD Faces Other Challenges in Meeting A-76 Goals In addition to the sheer size of the effort required to meet OMB’s out-year study goals, DOD faces a number of other challenges in its A-76 program. As we have tracked DOD’s progress in implementing its A-76 program since the mid- to late-1990s, we have identified various challenges and concerns that have surrounded the program. We believe those challenges and concerns are still relevant to the department’s current A-76 program. They include (1) the time required to complete the studies, (2) the cost and other resources required to conduct and implement the studies, and (3) the selection and grouping of positions to compete. In addition, as noted earlier, the Army’s core competency review has shown that the designation of “core” does not necessarily mean that in-house employees should perform a function, nor does the designation of “non-core” mean a function should necessarily be considered for alternative sourcing or A-76 competitions. This may cause further difficulties in selecting and grouping functions for A-76 reviews or other sourcing alternatives.
OMB’s revised A-76 circular states that standard competitions shall not exceed 12 months from public announcement (start date) to performance decision (end date). Under certain conditions, a time limit waiver of no more than 6 months can be granted. The revised circular also states that agencies shall complete certain preliminary planning—such as scope, baseline costs, and schedule—before public announcement. Even so, DOD’s studies have historically taken significantly longer than 12-18 months. DOD’s most recent data indicate that the studies take, on average, 20 months for single-function studies and 35 months for multifunction studies. It is not clear how much of this time was needed for planning that will now fall outside the revised circular’s study time frame. As DOD components found that the studies were taking longer than initially projected, they realized that conducting them would require a greater investment of resources than originally planned. We previously reported that the President’s 2001 budget showed a wide range of projected study costs, from about $1,300 per position studied in the Army to about $3,700 in the Navy. DOD is now estimating costs at $3,000 per position for new studies beginning in fiscal year 2004. However, the much larger number of studies required to be completed in the out-years to meet OMB’s study goals could require DOD components to devote much greater total resources to this effort than in the past. In addition, DOD components, particularly the Air Force, are attempting to shift military personnel away from commercial-type functions to those more directly related to warfighting. As noted above, because these functions are not being eliminated, new operations and maintenance account funds will have to be provided to pay for the additional civilians or contractors who perform the functions currently being performed by uniformed personnel.
As previously mentioned in the report, the Air Force alone has recently estimated this additional cost to be about $5 billion over the next 5 years. Other services have encountered this issue in the past and will face it in the future as they plan to shift military personnel from commercial positions into warfighting positions, whether as a result of their core assessments or as part of their A-76 studies. We have not seen precise, reliable figures on the extent to which these conversions may occur, or the extent to which all affected military personnel would be needed in warfighting positions. In the past we identified instances where service components were required to absorb these costs without additional resources. We recommended in our 2000 report that the Secretary of Defense take steps to ensure that the services increase funding for operation and maintenance accounts, as necessary, to fund the civilian and contractor personnel replacing military positions that have been transferred to meet other needs. The department acknowledged that this practice would require the services to program additional funding for operation and maintenance accounts, viewing this as a service investment decision. However, given the increased emphasis the department has placed on moving military personnel from commercial functions to warfighting, officials from the Army and the Air Force have expressed concern that there were not adequate funds to replace military personnel with civilian or contractor personnel once their positions have been competed or transferred. This can have the effect of either limiting the number of conversions that can be made or requiring Defense components to absorb the costs within their existing budgets, creating limitations in other program areas. As we have previously reported, selecting and grouping functions and positions to compete can also be difficult.
Some functions may be spread across different geographic locations or may fulfill a role that blurs the distinction between “commercial” and “inherently governmental,” thus preventing the packaging of some commercial positions into suitable groups for competition. In addition, as previously noted, DOD excluded certain commercial functions in its FAIR Act inventories from competition. DOD’s fiscal year 2002 FAIR Act inventory exempted 171,698 positions from competition because of statutory, national security, or operational concerns. Further, as we have previously reported, most services have already faced growing difficulties in finding enough study candidates to meet their A-76 study goals. Finally, use of alternatives under the core-competency approach could also limit the positions available for A-76 study. Conclusions Progress varies among DOD components in assessing core competencies and identifying and pursuing alternative sourcing strategies. Even so, some limitations have been identified which indicate that, contrary to some initial expectations, the determination of whether a function is core will not by itself automatically lead to a sourcing decision because, as the Army has discovered, other factors can also affect sourcing decisions. Clarification of the department’s expectations for sourcing decisions is needed, along with additional guidance on other factors that may need to be considered in sourcing decisions. Otherwise, the components may be left with unrealistic expectations about making sourcing decisions, or they may make changes in sourcing that later prove to be problematic. Under the core-competency process, the Air Force identified large numbers of military personnel who could be reassigned to meet other military requirements and be replaced by civilian or contractor personnel who may be a more economical alternative.
However, to accomplish this reassignment, Air Force officials stated that the service would need to find funds for replacement personnel in its operations and maintenance accounts. This is indicative of what other services are likely to face in seeking to accomplish such conversions—the need for additional funding in operations and maintenance accounts to support them. Such conversions may be a more cost-effective alternative than simply increasing military end-strength where shortages exist in military positions. However, decisions to replace military personnel with civilians or contractors without identifying sources for increases in operations and maintenance funds to support those decisions could stress the ability of the operations and maintenance account to meet other pressing needs. Recommendations We recommend that the Secretary of Defense, through the Senior Executive Council, clarify the department’s expectations for DOD components in making sourcing decisions based on core competency assessment results and provide additional guidance identifying the range of additional factors to be considered once the determination is made that a function is not core. We also recommend that the Secretary of Defense require DOD components to ensure that decisions to convert functions performed by military personnel to performance by civilians or contractors are predicated on having clearly identified sources of funding to support those decisions. Agency Comments and Our Evaluation The Principal Assistant Deputy Under Secretary of Defense (Installations and Environment) provided written comments on a draft of this report. The department generally concurred with our recommendations. With respect to our first recommendation, the department agreed that, in addition to the determination of core competency, there are additional steps necessary to making effective sourcing decisions.
However, the response did not indicate what specific guidance, if any, would be provided to clarify and assist the components in making sourcing determinations. Instead, the department suggested that core assessments would be used as input to the Inherently Governmental Commercial Activities Inventory and that the department’s guidance on how to prepare these inventories will be continually refined to help the sourcing decision process. To the extent the department continues to emphasize core competency assessments and alternatives to A-76 competitions in making sourcing decisions, we still believe that additional guidance is needed to assist components on factors other than the designation of core or non-core that need to be considered when making a sourcing decision. With respect to the second recommendation, the department agreed that the identification of adequate resources is a critical factor in meeting its competitive sourcing goals and stated that these efforts will be properly funded. The department also provided a number of technical comments, which we incorporated into the report where appropriate. The department’s comments are reprinted in their entirety in appendix V. Scope and Methodology As requested by the Ranking Minority Member of the House Committee on Armed Services, Subcommittee on Readiness, we reviewed DOD’s plans for sourcing non-core functions and the effect this may have on its A-76 program. Specifically, the objectives of this report were to assess (1) the department’s progress in assessing its core functions as a basis for sourcing decisions, (2) the plans and progress DOD has made in identifying and implementing alternatives to A-76, and (3) the current status of DOD’s A-76 program.
To evaluate the department’s progress in assessing its core functions as a basis for sourcing decisions, we met with responsible officials from the Senior Executive Council, the Business Initiative Council, and the Office of the Secretary of Defense to identify plans and guidance for this initiative. We also met with officials from the Army, the Air Force, the Navy, the Marine Corps, and the Defense Logistics Agency to identify their implementation plans and guidance, and we analyzed available data to assess the progress being made. Our work was conducted in the Washington, D.C., metropolitan area. To evaluate the plans and progress DOD has made in identifying and implementing alternatives to A-76, we met with officials in the organizations identified above and obtained and analyzed relevant documentation pertaining to the alternatives identified. Additionally, we spoke with representatives from the Defense Contract Management Agency and the Defense Finance and Accounting Service about their Pioneer Projects. Likewise, to assess the status of DOD’s A-76 program, we met with cognizant officials within DOD and its key components to update information we had previously obtained in other recent studies in this area concerning studies planned and completed, and we updated information we had previously obtained regarding challenges associated with this program. Data on the number of A-76 competitions used in this report were based on DOD’s Commercial Activities Management Information System (CAMIS), a Web-based system. Because the numbers change daily, the figures we report are those in the database at the specified point in time. We have previously identified limitations in the accuracy and completeness of data included in this system, which limit the precision of the information it contains.
Since then, the department has made changes to improve the accuracy of data in the system, and the database remains the principal source of aggregate information on studies underway and completed. However, we did not audit the accuracy of the numbers in the database. We conducted our review from October 2002 to May 2003 in accordance with generally accepted government auditing standards. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions regarding this report, please contact me on (202) 512-8412 or holmanb@gao.gov. Other contacts and key contributors to this report are listed in appendix VI. Appendix I: Alternatives to A-76 for Sourcing Non-Core Competencies In its April 2002 memo, the Senior Executive Council noted that “there are a number of imaginative alternatives to DOD ownership of Non-Core competencies.” The memo provided detailed information on six specific alternatives—employee stock ownership plans, transitional benefit corporations, negotiation with private sector, city-base partnership, strategic partnering, and quasi-government corporations. Following is a description of the concept, an example of usage within the government, and recommended Internet sites for each alternative, based on the Senior Executive Council memo. Employee Stock Ownership Plans (ESOP) Concept: Mechanism used to spin off existing government activities to form an employee-owned company. Description: The ESOP gives federal workers the ability to control their own destiny and obtain a stake in the successful outcome of a new business. 
An ESOP is a defined contribution benefit plan that buys and holds company stock. Shares in the trust are allocated to individual employee accounts. While many privatizations result in layoffs and disruptions, ESOPs save jobs, retain critical skills, and provide seamless customer service to federal agencies. Where Used Previously: U.S. Investigative Services (1995) Transitional Benefit Corporations (TBC) Concept: Umbrella organization created to facilitate smooth transition of government employees. Description: The TBC is designed to transition employees to the private sector while maintaining their federal benefits. Normally, a transition period is established during which the government continues to pay for the benefits; the new private company eventually reimburses the federal government for those benefits. In addition, the TBC can contract with the private sector and partner with other governmental, private sector, educational, or not-for-profit entities. It maintains core capabilities, preserves the expertise of key personnel, finds a “soft landing” for underutilized workers, creates a business environment for new growth, and provides a new business model for the government. Negotiation with Private Sector (i.e., transfer workforce to the private sector as part of a contract negotiation) Concept: Negotiated transfer of government workforce to a private company. Description: The government negotiates the outsourcing of a government function to the private sector, but it negotiates to have the workers who performed the function hired by the contractor. The goal is to get the employees comparable pay, at the same location (for an agreed-upon minimum time period), and a matched retirement plan. It offers stability that a normal A-76 cost comparison study does not provide.
City-Base Partnership Concept: Transforming a military installation to city-owned property with military, public, non-profit, and commercial tenants occupying and leasing facilities. Description: City Base is transforming a former military installation to city-owned property with military, public, non-profit, and commercial tenants occupying and leasing facilities. The service conveys the installation to the city and then leases back the facilities needed for mission operations. The city may contract with a third party to manage and develop the property. Where Used Previously: Brooks Air Force Base and the City of San Antonio, Texas. The Air Force created the Brooks City-Base Partnership with the city of San Antonio as a means to reduce Air Force base operating and personnel cost and to promote public-public and public-private partnerships. Special authorizing legislation in 1999 and 2000 allowed such partnership in which the Air Force transferred real property to San Antonio in July 2002 in exchange for a leaseback of facilities and for the city to provide municipal services such as fire protection and law enforcement. Also, the Army has implemented a similar type of partnership with the city of Monterey, California. Strategic Partnering Concept: Similar to negotiating with the private sector, this establishes a government-industry partnership and leverages the expertise of the commercial marketplace. Description: Strategic partnering moves a function and employees away from the government. The function is not given to a private corporation but is “taken over” by the employees. However, the employees do not form a stand-alone corporation, but instead, a partnership with the private company. It is used when an organization has many of the necessary elements for operating as a private company, but does not have the complete framework necessary to operate as a stand-alone corporation (payroll, benefits programs, taxes, marketing, and business development). 
A strategic partnership allows the employees to partner with an entity that already has these systems and procedures in place. Such partnering arrangements could be made with a private firm, a joint venture, or a non-profit organization. Quasi-Government Corporations Concept: Publicly owned, common stock corporation, chartered by Congress and provided a marketplace niche in which to accomplish some public good. They can be monopolies (e.g., the U.S. Postal Service) or competitors (e.g., Fannie Mae and Freddie Mac). Description: Quasi-government corporations are an alternative similar to the non-profit corporation. The principal difference is that a quasi-government corporation is established by a government agency to serve a governmental purpose, rather than by private individuals or firms. The employees are not federal civil servants and do not participate in the federal retirement or other federal employee benefit systems. The advantages are that such corporations can operate more flexibly than a government agency and are not required to comply with all of the federal personnel rules and acquisition regulations. Appendix II: Army’s Plans for Transforming Its In-House Industrial Facilities In 2002, the Army’s “Third Wave” initiative received much public attention because of what Army officials characterized as an unrelated but parallel effort in which RAND, under contract to the Army, was studying alternatives for rightsizing the Army’s government-owned ammunition manufacturing facilities and two arsenals that manufacture ordnance materiel—facilities that had been recognized as having declining workloads, excess capacity, and high operating costs. Although RAND had studied various options, such as privatization and creation of a federal government corporation, the Army decided in March 2003 not to pursue the options outlined in what was then a draft RAND report. Instead, in a March 24, 2003, memorandum to the Commanding General, U.S.
Army Materiel Command (AMC), the Secretary of the Army directed the following actions to transform the Army-owned portion of its defense industrial base, including ammunition facilities, manufacturing arsenals, and maintenance depots: AMC was directed to develop a written concept for consolidation, divestiture, or leasing, as appropriate, of the government-owned/government-operated and government-owned/contractor-operated ammunition facilities. AMC was directed to continue to work toward reducing government-owned and operated manufacturing arsenal plant capacity and to develop internal efficiency measures for facilities responsible for ground-based systems. AMC was directed to use existing legal authority to form and maintain partnerships between government-owned and operated maintenance depots and the private sector, and to implement initiatives to improve efficiencies, optimize utilization, and upgrade the core capabilities required to meet current and future requirements. Appendix III: Senior Executive Council Definitions of Core Competency In attempting to define core competency in a defense environment, the Senior Executive Council defined core as “A complex harmonization of individual technologies and ‘production’ (employment, delivery) skills that create unique military capabilities valued by the force employing CINC!” The Council provided the following additional definitions to help in the understanding of core: Proficiency in the coordination of human activity and employment of technology and technical systems to conduct military operations called for by a CINC. A complex integration of human knowledge and skills with the technologies of warfare to accomplish a military objective of value to a commander. It’s what we do better than anyone else to produce specific effects desired by a CINC.
The essence of what we provide in world-class warfighting and related unique capabilities—through a synergistic combination of knowledge, technologies, and people—to produce desired effects for CINCs.

The deep commitment of people, using technologies and delivering capabilities to meet a desired effect in support of national objectives.

A synergistic employment of individual and organizational knowledge, technologies, and capabilities producing world-class services (military operations) to deliver a desired effect to a CINC.

Appendix IV: Pioneer Projects

In support of the Senior Executive and Business Initiative Councils’ direction to identify alternative approaches to A-76 for selected non-core competencies, the services and Defense agencies identified 10 pilot “pioneer” projects. All 10 were approved by the Business Initiative Council and presented to the Office of Management and Budget. Eight of the projects were approved by OMB to be counted toward DOD’s FAIR Act inventory goal. OMB endorsed the pioneer projects whose techniques were waivers to A-76, new requirements, direct service contract, and divestiture, but disapproved the projects that proposed reengineering as their technique. Subsequently, DOD withdrew 2 projects, leaving 6 pilot projects for implementation. A brief description of those projects and their current status is provided below.

Department of the Navy: Ophthalmic Services

Description: Optical fabrication involves eyewear component production and assembly and is performed at about 37 locations within and outside of the United States, employing personnel in the Departments of the Navy and Army. The Department of the Navy has the lead responsibility for this pioneer project and is now starting its analysis of this divestiture proposal. It anticipates that the analysis will take approximately 6 to 18 months to complete. A final decision regarding the optical fabrication divestiture will be made after the completion of the analysis.
Department of the Air Force: Brooks City-Base

Description: The Brooks City-Base Partnership involves a partnership between the Air Force and the city of San Antonio for which the Congress passed special authorizing legislation in 1999 and 2000. This divestiture was a way to reduce Air Force base operating and personnel costs and build public-public and public-private partnerships. As part of this effort, the Air Force transferred Brooks Air Force Base’s real property to San Antonio in July 2002 in exchange for a leaseback of facilities and for the city to provide municipal services such as fire protection, law enforcement, and custodial and landscaping services. Also, as part of this partnering arrangement, the city of San Antonio will provide the Air Force a share of the revenues generated from the contracts and developments resulting from the land and facilities transferred.

Positions Affected: Approximately 100 civilian and 40 military

Status: Ongoing.

Department of the Army: Municipal Services Partnership for Base Support

Description: Based on its current arrangement with the city of Monterey, California, the Department of the Army proposed the Municipal Services Partnership for Base Support as its pioneer project. The Army is seeking legislative authority for all components within the department to be able to contract directly with local governments for municipal services such as public works and utilities.

Alternative: Direct Service Contract

Positions Affected: Approximately 500 civilian employees (depending upon the number of installations selected for this type of contract).

Status: Enabling legislation has been submitted to Congress for consideration as part of the fiscal year 2004 authorization process. The Army is conducting business case analyses for additional installation selection in the event the legislation is approved. However, as of May 2003, this proposal was not included in either the House or Senate approved versions of the bill.
Defense Logistics Agency: Metalworking Machinery Repair/Rebuild Services

Description: The Defense Logistics Agency (DLA) is proposing that the repair and rebuilding of depot-level industrial plant equipment by in-house personnel at the Defense Supply Center Richmond’s facility in Mechanicsburg, Pennsylvania, be subject to direct conversion through an A-76 waiver in accordance with the Office of Management and Budget Circular A-76’s Revised Supplement Handbook, part I, chapter I, section E.

Alternative: Waiver to A-76 Full Cost Comparison Study

Positions Affected: Approximately 82 civilians

Status: DOD assessed the applicability of OMB Circular A-76 to this function and determined that the Mechanicsburg facility is a depot level maintenance and repair operation and is therefore exempt from OMB Circular A-76.

Defense Contract Management Agency: Reengineer Existing Information Technology Structure

Description: The Defense Contract Management Agency plans to use a streamlined A-76 approach to compete information technology functions such as desk side support, district offices’ information technology operations, and automated application testing. The streamlined A-76 approach will allow the Defense Contract Management Agency to directly compare its costs for these types of functions with those of contractors on the General Services Administration’s schedules. Also, it will shorten the time for completing the A-76 process.

Positions Affected: 450 positions reviewed, approximately 250 positions affected

Status: Streamlined A-76 effort is scheduled to start January 2004 with anticipated implementation of the most efficient organization and/or contracts by fiscal year 2005.

Defense Finance and Accounting Service: Desktop Management Services

Description: The Defense Finance and Accounting Service (DFAS) is proposing to acquire computer management services from a commercial source.
As part of this effort, DFAS plans to use a performance-based service contract to obtain desktop hardware, software, and support services.

Positions Affected: Approximately 125 civilians

Status: DFAS notified Congress of this proposal and its plans to assess desktop management services. DFAS has completed its desktop management business case assessment and its announcement regarding that decision is imminent.

Appendix V: Comments from the Department of Defense

Appendix VI: GAO Contact and Staff Acknowledgments

GAO Contact

Acknowledgments

In addition to the names above, Debra McKinney, Nancy Lively, R.K. Wild, Daniel Kostecka, and Kenneth Patton also made significant contributions to this report.

Related GAO Products

Sourcing and Acquisition: Challenges Facing the Department of Defense. GAO-03-574T. Washington, D.C.: March 19, 2003.

Proposed Revisions to OMB Circular A-76. GAO-03-391R. Washington, D.C.: January 16, 2003.

Defense Management: New Management Reform Program Still Evolving. GAO-03-58. Washington, D.C.: December 12, 2002.

Commercial Activities Panel: Improving the Sourcing Decisions of the Federal Government. GAO-02-847T. Washington, D.C.: September 27, 2002.

Commercial Activities Panel: Improving the Sourcing Decisions of the Federal Government. GAO-02-866T. Washington, D.C.: June 26, 2002.

Competitive Sourcing: Challenges in Expanding A-76 Governmentwide. GAO-02-498T. Washington, D.C.: March 6, 2002.

DOD Competitive Sourcing: A-76 Program Has Been Augmented by Broader Reinvention Options. GAO-01-907T. Washington, D.C.: June 28, 2001.

DOD Competitive Sourcing: Effects of A-76 Studies on Federal Employees’ Employment, Pay, and Benefits Vary. GAO-01-388. Washington, D.C.: March 16, 2001.

DOD Competitive Sourcing: Results of A-76 Studies Over the Past 5 Years. GAO-01-20. Washington, D.C.: December 7, 2000.

DOD Competitive Sourcing: More Consistency Needed in Identifying Commercial Activities. GAO/NSIAD-00-198. Washington, D.C.: August 11, 2000.
DOD Competitive Sourcing: Savings Are Occurring, but Actions Are Needed to Improve Accuracy of Savings Estimates. GAO/NSIAD-00-107. Washington, D.C.: August 8, 2000.

DOD Competitive Sourcing: Some Progress, but Continuing Challenges Remain in Meeting Program Goals. GAO/NSIAD-00-106. Washington, D.C.: August 8, 2000.

Competitive Contracting: The Understandability of FAIR Act Inventories Was Limited. GAO/GGD-00-68. Washington, D.C.: April 14, 2000.

DOD Competitive Sourcing: Potential Impact on Emergency Response Operations at Chemical Storage Facilities Is Minimal. GAO/NSIAD-00-88. Washington, D.C.: March 28, 2000.

DOD Competitive Sourcing: Plan Needed to Mitigate Risks in Army Logistics Modernization Program. GAO/NSIAD-00-19. Washington, D.C.: October 4, 1999.

DOD Competitive Sourcing: Air Force Reserve Command A-76 Competitions. GAO/NSIAD-99-235R. Washington, D.C.: September 13, 1999.

DOD Competitive Sourcing: Lessons Learned System Could Enhance A-76 Study Process. GAO/NSIAD-99-152. Washington, D.C.: July 21, 1999.

Defense Reform Initiative: Organization, Status, and Challenges. GAO/NSIAD-99-87. Washington, D.C.: April 21, 1999.

Quadrennial Defense Review: Status of Efforts to Implement Personnel Reductions in the Army Materiel Command. GAO/NSIAD-99-123. Washington, D.C.: March 31, 1999.

Defense Reform Initiative: Progress, Opportunities, and Challenges. GAO/T-NSIAD-99-95. Washington, D.C.: March 2, 1999.

Force Structure: A-76 Not Applicable to Air Force 38th Engineering Installation Wing Plan. GAO/NSIAD-99-73. Washington, D.C.: February 26, 1999.

Future Years Defense Program: How Savings From Reform Initiatives Affect DOD’s 1999-2003 Program. GAO/NSIAD-99-66. Washington, D.C.: February 25, 1999.

DOD Competitive Sourcing: Results of Recent Competitions. GAO/NSIAD-99-44. Washington, D.C.: February 23, 1999.

DOD Competitive Sourcing: Questions About Goals, Pace, and Risks of Key Reform Initiative. GAO/NSIAD-99-46. Washington, D.C.: February 22, 1999.
OMB Circular A-76: Oversight and Implementation Issues. GAO/T-GGD-98-146. Washington, D.C.: June 4, 1998.

Quadrennial Defense Review: Some Personnel Cuts and Associated Savings May Not Be Achieved. GAO/NSIAD-98-100. Washington, D.C.: April 30, 1998.

Competitive Contracting: Information Related to the Redrafts of the Freedom From Government Competition Act. GAO/GGD/NSIAD-98-167R. Washington, D.C.: April 27, 1998.

Defense Outsourcing: Impact on Navy Sea-Shore Rotations. GAO/NSIAD-98-107. Washington, D.C.: April 21, 1998.

Defense Infrastructure: Challenges Facing DOD in Implementing Defense Reform Initiatives. GAO/T-NSIAD-98-115. Washington, D.C.: March 18, 1998.

Defense Management: Challenges Facing DOD in Implementing Defense Reform Initiatives. GAO/T-NSIAD/AIMD-98-122. Washington, D.C.: March 13, 1998.

Base Operations: DOD’s Use of Single Contracts for Multiple Support Services. GAO/NSIAD-98-82. Washington, D.C.: February 27, 1998.

Defense Outsourcing: Better Data Needed to Support Overhead Rates for A-76 Studies. GAO/NSIAD-98-62. Washington, D.C.: February 27, 1998.

Outsourcing DOD Logistics: Savings Achievable But Defense Science Board’s Projections Are Overstated. GAO/NSIAD-98-48. Washington, D.C.: December 8, 1997.

Financial Management: Outsourcing of Finance and Accounting Functions. GAO/AIMD/NSIAD-98-43. Washington, D.C.: October 17, 1997.

Base Operations: Contracting for Firefighters and Security Guards. GAO/NSIAD-97-200BR. Washington, D.C.: September 12, 1997.

Terms Related to Privatization Activities and Processes. GAO/GGD-97-121. Washington, D.C.: July 1, 1997.

Defense Outsourcing: Challenges Facing DOD as It Attempts to Save Billions in Infrastructure Costs. GAO/T-NSIAD-97-110. Washington, D.C.: March 12, 1997.

Base Operations: Challenges Confronting DOD as It Renews Emphasis on Outsourcing. GAO/NSIAD-97-86. Washington, D.C.: March 11, 1997.
The Department of Defense (DOD) is pursuing a new initiative involving a core competency approach for making sourcing decisions--that is, sourcing decisions based on whether the function is core to the agency's warfighting mission. In determining how to best perform non-core functions, DOD's position is that its components should look beyond just the use of public-private competitions under Office of Management and Budget (OMB) Circular A-76 in making sourcing decisions, and consider other alternatives such as partnering or employee stock ownership. GAO was asked to assess (1) the department's progress in assessing its core functions as a basis for sourcing decisions, (2) the plans and progress DOD has made in identifying and implementing alternatives to A-76, and (3) the current status of DOD's A-76 program. Progress in assessing core functions has been varied and limited across major Defense components, affected somewhat by ambiguous definitions of the term "core function." In some instances additional guidance was obtained, but definitions of core remain somewhat broad and subjective, and will likely remain so in the future. The Army and Air Force have led DOD's efforts to assess core functions, with the Army having done the most. Contrary to its expectations, the Army found that distinguishing between core and non-core functions does not, by itself, prescribe a sourcing decision; other factors, such as risk and operational considerations, must also be weighed. The range of alternatives to A-76 likely to be pursued under the core competency-based approach is not yet clear, but DOD has made some progress toward identifying and/or using some alternatives through pilot projects and other efforts by the services as they have focused on the core initiative.
However, the use of alternatives could be limited without special legislative authorities and/or repeal of various existing prohibitions, and some could be tempered by the department's efforts to meet the A-76 competitive sourcing goals set by OMB. DOD reported that as of June 1, 2003, it had met OMB's short-term goal to use the A-76 process to study 15 percent of the positions identified in DOD's commercial activities inventory by the end of fiscal year 2003. However, meeting the longer-term goal to study at least 50 percent (226,000) of its nearly 453,000 commercial activity positions through fiscal year 2008 will present a challenge. This is nearly double the number of positions that DOD has previously studied during a comparable time period, and providing sufficient resources (financial and technical) to complete the studies may prove challenging. Also, the defense components, particularly the Air Force, plan to transfer certain military personnel into warfighting functions and replace them with government civilian and/or contractor personnel. This will require the components to reprioritize their funding for operation and maintenance accounts, because it is from those accounts that the services must fund replacement civilian or contractor personnel.
Scope and Methodology

To describe the process and criteria AHRQ used to award its $474 million in Recovery Act CER funds, we reviewed relevant statutes as well as documentation on the process and criteria AHRQ uses to (1) determine the scientific and technical merit of grant applications and contract proposals, and (2) select grant recipients and contractors. We reviewed spending plans, which outline AHRQ’s and HHS’s plans for spending Recovery Act funds. We also reviewed summary statements that describe AHRQ’s process for selecting grantees and contractors to be awarded Recovery Act funds. In addition, we reviewed AHRQ’s Management Operations Manual, the agency’s written guidance that provides policies and procedures for selecting grant recipients. We interviewed AHRQ and other HHS officials to learn about the processes and criteria they used to select the grantees and contractors that received awards funded with the $474 million in Recovery Act CER funds and to coordinate these awards with other HHS agencies that also received Recovery Act CER funds. Because NIH also received Recovery Act CER funds, we conducted interviews with NIH officials to confirm the methods and processes used by AHRQ to coordinate funding opportunity announcements (FOAs), contract solicitations, and awards with NIH in an effort to prevent the unnecessary duplication of effort in awarding Recovery Act CER funds. Finally, we obtained data from AHRQ on the number and type of awards made between February 2009 and September 2010 using the $474 million in Recovery Act CER funds. We relied on Recovery Act award data provided by AHRQ and did not audit the reported data.
To determine whether AHRQ and the Office of the Secretary’s Recovery Act CER award data were sufficiently reliable for our analyses, we conducted a reliability assessment of the data we used by reviewing existing information about the data, conducting quality control checks, and interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. To describe AHRQ’s plans to disseminate results from CER funded with Recovery Act funds, we reviewed the Recovery Act and other relevant statutes to determine AHRQ’s responsibilities for disseminating CER. We reviewed agency documents, including AHRQ contractors’ work plans describing specific goals and activities; AHRQ’s general publications, including reports and guides posted on AHRQ’s website and electronic newsletters; and samples of AHRQ’s CER, including original research reports, treatment guides, and slide presentations used for educating clinicians. We also interviewed AHRQ officials and a contractor to understand how the agency plans to disseminate the results of CER funded with Recovery Act funds and to obtain information on plans AHRQ has for assessing the effectiveness of its dissemination efforts. To describe the steps AHRQ has taken to fulfill its roles and responsibilities related to PCORI under PPACA, we reviewed provisions in PPACA to identify these roles and responsibilities, which include to broadly disseminate CER findings to various audiences; develop a publicly available database to collect evidence and research; and promote the timely incorporation of CER findings into health information technology systems that support clinical decision making. While AHRQ is required to conduct a number of activities under PPACA, we focused our review on those activities that are related to PCORI. 
We also reviewed AHRQ’s PPACA spending plan for fiscal years 2011 and 2012, which describes the agency’s plans for using funds made available by PPACA, as well as PCORI presentation materials and meeting reports. We also conducted interviews with AHRQ officials to determine the steps the agency has taken to meet its responsibilities related to PCORI under PPACA. We conducted this performance audit from February 2011 to February 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Background

AHRQ supports CER by awarding grants and contracts to entities in order to conduct CER and perform related activities, such as the dissemination of CER results. As 1 of 12 agencies within HHS, AHRQ’s overarching mission is to improve the quality, safety, efficiency, and effectiveness of health care for all Americans. (For more information on AHRQ’s mission, research, priorities, and budget, see app. I.)

CER and the Recovery Act

The Recovery Act provided a significant amount of funding for AHRQ to conduct CER activities. (See table 1.) HHS developed departmentwide priorities for CER. (See table 2.) Of the total amount of $474 million in Recovery Act funds available to AHRQ for CER, $174 million of these funds were allocated to AHRQ by the Office of the Secretary and were used to support the HHS departmentwide priorities for CER. AHRQ developed seven agency-specific CER priority areas to guide its spending of Recovery Act funds. (See table 3.)
Of the total amount of $474 million in Recovery Act funds available to AHRQ for CER, $300 million of these funds were appropriated to the agency and, therefore, supported the agency’s seven CER priority areas. The enactment of PPACA in 2010 gave AHRQ new roles and responsibilities related to disseminating CER and building capacity for research, and appropriated funds for carrying out these activities. Several of these responsibilities relate to work conducted by PCORI. Established in November 2010, PCORI was authorized to help coordinate CER at a national level by developing national priorities for CER and conducting and funding CER activities. PPACA directs AHRQ to broadly disseminate CER findings to physicians, other health care providers, patients, payers, and policymakers; develop a publicly available database to collect evidence and research; promote the timely incorporation of CER findings into health information technology systems that support clinical decision making; and establish a process for receiving feedback about the value of information disseminated by AHRQ. To fund this work, PPACA established the Patient-Centered Outcomes Research Trust Fund (PCORTF). The act specified that percentages of this trust fund be provided to the Secretary of HHS and AHRQ in each of fiscal years 2011 through 2019. Specifically, AHRQ received $8 million from this trust fund in fiscal year 2011 and will receive $24 million in fiscal year 2012, representing 16 percent of the total amount appropriated to this trust fund in each of these fiscal years. In subsequent fiscal years, AHRQ will continue to receive 16 percent of the total amount appropriated to the trust fund, which will be based on the net revenues from fees on health insurance and self-insured plans, amounts transferred from the Medicare trust funds, and appropriations to PCORTF from the General Fund of the Treasury.
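The percentage flows behind AHRQ's 16 percent share can be checked with a little arithmetic: the trustee transfers 20 percent of each year's PCORTF appropriation to the Secretary of HHS, who distributes 80 percent of that transfer to AHRQ, and 0.20 × 0.80 = 0.16. The sketch below is illustrative only; the $50 million and $150 million totals are back-calculated from AHRQ's reported $8 million (fiscal year 2011) and $24 million (fiscal year 2012) shares, not figures stated in the report.

```python
# Illustrative sketch of the PCORTF split described in the report:
# 20% of each year's appropriation is transferred to the Secretary of HHS,
# who distributes 80% of that transfer to AHRQ, i.e., AHRQ receives 16%
# of the total. Integer dollar amounts are assumed; totals below are
# hypothetical back-calculations, not amounts from the report.

def pcortf_split(total_appropriation: int) -> dict:
    """Split a PCORTF appropriation per the percentages in the report."""
    transfer = total_appropriation * 20 // 100   # 20% to the Secretary of HHS
    ahrq = transfer * 80 // 100                  # 80% of the transfer to AHRQ
    return {"ahrq": ahrq, "secretary": transfer - ahrq}

# AHRQ's share works out to 16% of the total (0.20 * 0.80 = 0.16).
assert pcortf_split(50_000_000)["ahrq"] == 8_000_000     # FY 2011 figure
assert pcortf_split(150_000_000)["ahrq"] == 24_000_000   # FY 2012 figure
```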
AHRQ’s Standard Competitive Process for Selecting Grant Recipients

A portion of AHRQ’s work, including its work related to CER, is conducted through grants awarded to research centers and academic organizations to fund research ideas developed by a grant applicant. Grant applications are submitted in response to publicly available FOAs, which announce AHRQ’s intention to award research grants. AHRQ has established a standard competitive process, governed by federal law, to select grant recipients. According to AHRQ officials, this multistep process includes: (1) an initial review of received applications; (2) preliminary scoring of applications; (3) review and final scoring of applications at a peer review panel meeting; (4) the development of preliminary funding recommendations; (5) review by a senior leadership team within AHRQ; and (6) a final determination of funding by the agency director.

The trustee of PCORTF is to provide for the transfer from PCORTF of 20 percent of the amounts appropriated or credited to PCORTF for each of fiscal years 2011 through 2019 to the Secretary of HHS. Of the amounts transferred, the Secretary of HHS is to distribute 80 percent to AHRQ and 20 percent to the Secretary of HHS. See 26 U.S.C. § 9511.

To evaluate the grant applications it receives in response to FOAs, AHRQ’s peer reviewers, most of whom are authorities in their respective fields and not government employees, use five standard core criteria to score and rank the applications. These five standard core criteria are (1) the significance in addressing an important problem; (2) the investigators’ ability to carry out the research; (3) the originality or innovation of the project; (4) the development of an adequate research approach or framework; and (5) the scientific environment in which the applicant plans to conduct the research. Each FOA also contains criteria that are specific to the announcement.
While these other specific criteria are not individually scored, they are used to evaluate the applications during the peer review panel meetings. (See fig. 1 for an overview of AHRQ’s standard process for selecting grant recipients.)

AHRQ uses a separate competitive process to award contracts, which fund specific activities defined by AHRQ. This process for selecting contract proposals for award is governed by the Federal Acquisition Regulation (FAR) and the Public Health Service Act (PHSA) and implementing regulations. The advertising of available contracting opportunities, which occurs through different types of solicitations, varies depending on the type of contracting mechanism used. AHRQ generally uses three types of contracting mechanisms.

Stand-alone contracts. This contracting vehicle involves the issuance of new, stand-alone contracts. Proposals are submitted in response to publicly-available solicitations referred to as requests for proposals. A request for proposals details the specific tasks or ideas that an agency needs a contractor to fulfill, such as delivery of a certain service or research of a clearly defined topic.

Task orders. This contracting vehicle involves issuing task orders under an existing, master contract, thereby giving a contractor a new task to perform. Proposals are submitted in response to solicitations called requests for task orders, which are issued to contractors already awarded contracts by the agency.

General Services Administration (GSA)-schedule task orders. This contracting vehicle involves the use of contracts that have been awarded by GSA for governmentwide use. GSA-schedule task orders are issued under existing master contracts awarded by GSA. These task orders are solicited through a request for quote solicitation that is competed among these contractors.
Upon receiving proposals in response to a solicitation, AHRQ typically evaluates contract proposals for award using standard contracting procedures and criteria, which are governed by the FAR and the PHSA and implementing regulations. A technical review panel of agency officials and external experts evaluates each proposal submitted in response to a solicitation against standard criteria that are tailored to the specific needs of each solicitation. These criteria include (1) demonstrated knowledge and understanding of the contract requirements; (2) the proposed approach to address tasks and subtasks listed in the proposal; (3) the qualifications and experience of key management personnel, such as the project director and project manager; (4) the potential contractor’s ability to meet the project’s milestones; (5) the facility, equipment, and space available to support the project goals and objectives; and (6) the past performance of the potential contractor using information from references or other government customers. Based on this evaluation of each proposal’s scientific and technical merit and cost, the review panel, along with the contracting officer’s technical representative (COTR), identifies the entity they believe should be awarded the contract and forwards the recommendation to the contracting officer, the federal official who has authority to enter into a contract. The contracting officer reviews the recommendation and makes a final award decision.

AHRQ Used Standard Competitive Processes and Criteria and Coordinated within HHS to Make Recovery Act Awards

AHRQ used its standard, competitive review process and criteria to select grant recipients and award 110 Recovery Act-funded CER grants, totaling approximately $311 million. In addition, AHRQ primarily used its standard contracting processes and criteria to select contract proposals and enter into 34 contracts for CER using Recovery Act funding, totaling approximately $161 million.
AHRQ also took several steps to coordinate with other HHS agencies when soliciting and awarding CER grants and contracts.

AHRQ Used Its Standard Competitive Review Process and Criteria to Select Grant Recipients and Award 110 CER Grants

Between February 2009 and September 2010, AHRQ used its standard, competitive grant review process to select grant recipients and ultimately award 110 CER grants using approximately $311 million in Recovery Act CER funds. Specifically, AHRQ used its standard process for selecting grantees, which includes an initial review and preliminary scoring of applications; review and final scoring of applications at peer review panel meetings; development of preliminary funding recommendations; review of funding recommendations by a senior leadership team within AHRQ; and a final determination of funding by the agency director. As part of the standard process AHRQ used to select the recipients of Recovery Act-funded CER grants, peer reviewers used the agency’s standard core criteria to score and subsequently rank the applications for this funding. In addition to its core criteria, AHRQ also used other criteria in its process of selecting recipients of Recovery Act CER grants. These other criteria were specific to each FOA and often varied depending on the CER study requested under that announcement. These other criteria may be used to assess, for example, a grant application in terms of the adequacy of the protection afforded human subjects; the inclusion of certain priority populations in the study; the extent to which privacy and security issues have been addressed; the partnerships that the applicant has with the proposed population; and the degree of responsiveness in addressing the purpose and objective of the FOA.
AHRQ officials told us that the agency’s peer reviewers, program officials, and members of the senior leadership team used both standard core criteria and other criteria outlined in the FOAs to determine which grant applications should be recommended to the Director of AHRQ for funding. AHRQ issued 14 FOAs for Recovery Act CER grant opportunities. Using its grant review process and criteria, AHRQ received 536 grant applications and awarded 110 CER grants between February 2009 and September 2010, totaling approximately $311 million. The $311 million in grants AHRQ awarded supported HHS’s departmentwide and AHRQ’s agency-specific CER priorities. (See fig. 2 and app. II for more information on the award of AHRQ’s CER grants with Recovery Act funds.) The 14 FOAs issued and 110 CER grants awarded addressed projects in three of the four HHS-departmentwide CER priority areas and three of AHRQ’s seven agency-specific CER priority areas; some priority areas were supported by more than one FOA, and AHRQ supported the remaining four agency-specific priority areas and the one remaining HHS departmentwide priority area through contracts. AHRQ made these awards by September 30, 2010, the end of the period in which the Recovery Act funds were available for obligation. (See GAO-11-712R. AHRQ awarded an additional $161 million through contracts and spent approximately $2 million for administrative purposes. Percentages in fig. 2 do not add up to 100 percent due to rounding.)

For 55 of the 110 grants, the Director of AHRQ exercised her discretion to make out-of-order funding decisions. An out-of-order funding decision occurs when grant applications are funded out of rank order; that is, they are not funded in accordance with the rank order of most-meritorious to least-meritorious overall impact scores calculated during the peer review process.
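The rank-order selection just described, with the director's out-of-order exceptions, can be modeled roughly as follows. This is an illustrative sketch only, not AHRQ's actual system: the applications, scores, budget, and the convention that a higher overall impact score is more meritorious are all hypothetical.

```python
# Illustrative model of rank-order grant funding with out-of-order
# exceptions, as described in the report. All data and the scoring
# convention (higher score = more meritorious) are hypothetical.

def select_for_funding(applications, budget, overrides=frozenset()):
    """Fund applications in rank order of score until the budget runs out,
    except that applications in `overrides` (the director's out-of-order
    funding decisions) jump to the front of the queue."""
    ranked = sorted(applications, key=lambda a: a["score"], reverse=True)
    queue = ([a for a in ranked if a["id"] in overrides]
             + [a for a in ranked if a["id"] not in overrides])
    funded, remaining = [], budget
    for app in queue:
        if app["cost"] <= remaining:
            funded.append(app["id"])
            remaining -= app["cost"]
    return funded

apps = [
    {"id": "A", "score": 9.0, "cost": 3},
    {"id": "B", "score": 8.0, "cost": 3},
    {"id": "C", "score": 7.0, "cost": 3},  # addresses an agency priority
]
# In strict rank order, only A and B fit a budget of 6 ...
assert select_for_funding(apps, budget=6) == ["A", "B"]
# ... but an out-of-order decision can fund C ahead of its rank.
assert select_for_funding(apps, budget=6, overrides={"C"}) == ["C", "A"]
```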
Out-of-order funding decisions can be made for a number of reasons, including the need to address important agency research priorities, avoid duplication, or meet specific requirements in the original FOA that the more meritorious applicants cannot meet. Recommendations for out-of-order funding decisions are made by the program official or senior leadership team, but the final decision to award grants is made by the Director of AHRQ. AHRQ Primarily Used Its Standard Competitive Contracting Process and Criteria to Award 34 CER Contracts Between February 2009 and September 2010, AHRQ primarily used its standard competitive contracting process and criteria to select contract proposals and enter into 34 contracts using approximately $161 million in Recovery Act CER funds. According to AHRQ officials, a review panel, composed of external experts and AHRQ staff, evaluated all Recovery Act CER contract proposals using the standard criteria that are tailored to the specific needs of each contract solicitation. These criteria include evaluating each proposal’s technical approach, management plan, and key personnel. AHRQ officials reported that a contracting officer used the results of the panel’s evaluation to make a final selection of contractors that presented the best value to meet the needs of work specified in each Recovery Act CER solicitation. To meet the September 30, 2010, deadline established by the Recovery Act, AHRQ made one change to its standard contracting process. Specifically, for 1 of the 13 contract solicitations AHRQ conducted an initial review of the contract proposals it received in order to determine whether these contract proposals were duplicative or responsive to the solicitation’s requirements.proposals in response to this solicitation. 
Agency officials explained that because they received a large number of proposals in response to this solicitation, they decided to conduct the initial review to identify which of the received proposals would continue through the agency’s standard, competitive contract review process. The agency received 23 task order contract proposals in response to this solicitation. Four of the 23 task order contract proposals were found to be nonresponsive or duplicative of another previously funded study and, therefore, were not considered for further review. AHRQ awarded task orders under existing AHRQ master contracts, task orders under existing GSA master contracts, and stand-alone contracts using Recovery Act CER funds. The agency primarily awarded task orders under existing master contracts when awarding contracts with Recovery Act CER funds. Specifically, of the 34 contracts that AHRQ awarded using Recovery Act CER funds, 30 were task orders under either existing AHRQ contracts or existing GSA-schedule contracts, and the remaining 4 were stand-alone contracts. AHRQ awarded multiple task orders within the Evidence Synthesis and Gap Identification priority areas under existing contracts that AHRQ entered into prior to the passage of the Recovery Act. Officials stated that issuing task orders under existing master contracts, which included a GSA-schedule task order, facilitated the quick and efficient award of Recovery Act funds in instances where the agency or GSA had existing master contracts with entities capable of conducting work the agency wanted to support with Recovery Act CER funds. AHRQ officials stated that this approach was faster and more cost-effective than entering into new, stand-alone contracts. AHRQ officials also said that they used stand-alone contracts only in instances where there were no existing contracts with entities that could perform the planned work. AHRQ issued 13 CER contract solicitations between February 2009 and September 2010. 
Using its standard contracting process and criteria, the agency received 80 contract proposals and entered into 34 contracts totaling almost $161 million. These contracts supported HHS’s departmentwide and AHRQ’s agency-specific CER priorities. (See fig. 3 and app. III for more information on the award of AHRQ’s CER contracts with Recovery Act funds.) The 13 contract solicitations issued and 34 CER contracts awarded addressed projects in AHRQ and HHS-departmentwide CER priority areas. Some priority areas were supported by more than one contract solicitation. AHRQ agency-specific and HHS departmentwide priority areas not supported with contracts were supported through grants. AHRQ made these awards by September 30, 2010, the end of the period in which the Recovery Act funds were available for obligation. See GAO-11-712R. AHRQ awarded an additional $311 million through grants and spent approximately $2 million for administrative purposes. AHRQ Also Reported Taking Steps to Coordinate Recovery Act CER Awards with Other HHS Agencies to Avoid Unnecessary Duplication AHRQ officials reported that they used five mechanisms in order to coordinate with other HHS agencies to avoid unnecessary duplication when creating FOAs for grants and solicitations for contracts and when awarding Recovery Act CER funds. Specifically, AHRQ participated in a federal interagency coordination council and an HHS working group; contributed to an HHS spending plan to coordinate the department’s solicitations; participated with NIH in another working group to coordinate both solicitations and CER awards; and queried HHS databases when awarding Recovery Act funds to identify potentially duplicative projects. 
Federal Coordinating Council for CER—AHRQ participated on the Federal Coordinating Council for CER (“the Council”), a body created by the Recovery Act to foster coordination for CER across the federal government in an effort to reduce duplication and encourage the coordinated and complementary use of resources. In addition to AHRQ, officials from the Veterans Health Administration (VHA), the Department of Defense, and NIH also served on the Council. According to AHRQ officials, the Council provided a mechanism for coordinating, among other things, the establishment of CER priorities and some Recovery Act CER grant announcements and contract solicitations. The Council was terminated by PPACA in March 2010. CER Coordination and Implementation Team (CER-CIT)—In addition to the Council, AHRQ officials participated in the CER-CIT, a departmentwide effort to coordinate investments in CER supported with Recovery Act funds. Organized by HHS, the CER-CIT served as a centralized forum for HHS officials to assess FOAs for grants and solicitations for contracts. AHRQ officials stated that the CER-CIT’s process helped ensure that the FOAs and solicitations ultimately posted by AHRQ for grants and contracts were not duplicative of FOAs and solicitations posted by other entities within HHS. For example, during the CER-CIT’s review of two proposed AHRQ CER FOAs, reviewers identified aspects of the proposed announcements that were potentially duplicative of other proposed or existing projects. HHS Intra-Agency Spending Plan—AHRQ contributed to an intra-agency spending plan developed by HHS that describes how all HHS agencies anticipated using the funding they received under the Recovery Act for CER. AHRQ contributed to this intra-agency spending plan by developing an agency-specific spending plan that described AHRQ’s research priorities and how the agency anticipated using its $300 million in Recovery Act CER funds to support these priorities. 
HHS incorporated AHRQ’s spending plan into the department’s intra-agency spending plan. According to AHRQ officials, officials from the HHS Office of the Secretary, who were responsible for coordinating this effort, reviewed the spending plans they received from AHRQ and other agencies, and this helped ensure that AHRQ’s Recovery Act CER solicitations were not unnecessarily duplicative of other CER efforts within HHS. AHRQ-NIH Working Group—In addition to the departmentwide working group that was primarily focused on coordination of FOAs and contract solicitations, AHRQ and NIH formed a working group to coordinate the award of Recovery Act funds to avoid unnecessarily funding duplicative projects. AHRQ officials stated that before making awards, they checked with NIH through this working group to ensure that the two agencies were not funding duplicative projects. For example, during one meeting members noted where issues of duplication needed to be further discussed to ensure that studies were complementary and not duplicative. In addition to reviewing awards, AHRQ officials reported that this working group met regularly during the award of the Recovery Act CER funds to share spending plans, share solicitations, and provide updates on each agency’s respective CER activities. Querying of HHS Databases—AHRQ officials stated that in order to avoid funding unnecessarily duplicative work with Recovery Act funds, they queried HHS databases prior to awarding any Recovery Act CER funds to ensure that other similar projects were not funded elsewhere within HHS. AHRQ officials stated that if, in the process of querying these databases, a duplicative award was identified, AHRQ would contact the appropriate HHS project officer listed in the database to discuss the award in more detail. AHRQ and NIH officials confirmed that this process resulted in the identification of grant proposals for training awards that were potentially duplicative of projects AHRQ had funded. 
Once identified, it was decided that NIH would not fund these potentially duplicative awards. AHRQ Plans to Use Its Existing Mechanisms and Develop Additional Strategies to Disseminate CER Results AHRQ plans to use a range of existing mechanisms, such as written products, training, social media tools, and its website, to disseminate results of CER funded through the Recovery Act. According to AHRQ officials, the agency will determine which specific mechanisms will be used to disseminate CER results by considering the unique characteristics of the research. In addition, AHRQ awarded four contracts using Recovery Act CER funds to develop and implement innovative approaches for disseminating CER results, including Recovery Act-funded CER. AHRQ Plans to Use a Range of Existing Mechanisms, Such as Written Products, Training, Social Media Tools, and Its Website, to Disseminate Recovery Act CER Results AHRQ officials stated that the agency plans to use a range of existing mechanisms to disseminate Recovery Act-funded CER results as such results become available. As of December 6, 2011, 30 Recovery Act CER projects were completed or in draft and some dissemination activities had begun. The mechanisms AHRQ plans to use to disseminate Recovery Act-funded CER include written products, training programs, social media tools, learning networks, and AHRQ’s website. The different types of written products that AHRQ develops for CER results and other research include comprehensive research reviews that summarize existing research on a CER topic; original research reports that introduce new CER results; and plain language publications that summarize the findings of research on the benefits and harms of different treatment options and which are tailored to clinicians, consumers, or policymakers. AHRQ’s training programs include web-based conferences that feature presentations by experts accompanied by instructional slides for clinicians. 
In addition, the agency employs social media tools to disseminate notices of CER results including electronic newsletters, audio podcasts, and Twitter. AHRQ is also drawing on an existing agency learning network for Medicaid medical directors that was created in 2005. This group convenes periodically to discuss ways to advance the health of Medicaid beneficiaries, including how evidence-based research findings can be used to improve quality of care. AHRQ’s website provides access to CER results through search tools and links to its written and social media formats. (See app. IV for a more detailed description of these existing mechanisms.) AHRQ officials explained that the agency determines which specific mechanisms will be used to disseminate particular CER results by considering the unique characteristics of the research such as the type of research conducted, its potential impact, the strength of the evidence, and the audiences that can best make use of the information. AHRQ then develops a marketing plan that identifies key messages, target audiences, and the mechanisms to be used to disseminate CER to those audiences. For example, AHRQ’s marketing plan for a CER project that examines certain treatments for type 2 diabetes targeted consumers as well as primary care clinicians and certain specialist clinicians and other health professionals. While this CER project was not funded with Recovery Act funds, AHRQ officials confirmed that the process they use to customize the dissemination for this project is the same process the agency will follow for disseminating Recovery Act-funded CER results. To disseminate the CER results of this project to consumers, AHRQ developed a consumer guide and a series of audio podcasts. To reach clinicians, AHRQ developed a clinician’s guide and a webcast program with educational slides. These products were targeted to be distributed through multiple channels including AHRQ’s website, as well as its newsletters and list-servs. 
Notices about these products were also sent directly to a range of general media news services; consumer health and advocacy publications; and a wide range of key national organizations that included those representing primary care, specialty clinicians, and payers. (See app. V for a more detailed description of this dissemination effort.) AHRQ Is Developing Additional Strategies to Disseminate CER Results In addition to its existing mechanisms for disseminating the results of CER, AHRQ is in the process of developing additional dissemination strategies. Specifically, in September 2010, AHRQ awarded four Recovery Act-funded contracts to develop and implement innovative approaches for disseminating CER results, including Recovery Act-funded CER. The specific purpose of each of the four contracts is described below. Academic Detailing. Academic detailing involves face-to-face educational sessions by trained clinicians, including physicians, nurses, pharmacists, and others, who visit health professionals in their practice settings. The goal of these sessions is to share evidence-based information and facilitate use of that information to improve patient care. AHRQ awarded a contract in the amount of $11,680,060 for the purpose of implementing academic detailing for CER from 2011 through 2013. The plan under this contract calls for academic detailing to 1,300 primary care providers and 200 health care system practice sites. Each provider or site will receive one face-to-face visit every 6 months plus follow-up e-mail communications and supporting materials. The academic detailing will focus on six CER topics over the 3-year period. Between February 2011 and October 2011, work completed under this contract resulted in over 1,562 visits to providers and practice sites. These visits involved the discussion of AHRQ’s CER results related to the treatment of type 2 diabetes. Continuing Education Modules. 
AHRQ awarded a contract in the amount of $3,981,168 for the purpose of developing and disseminating 45 accredited online continuing education programs for health care professionals, including physicians, physician assistants, pharmacists, nurses, nurse practitioners, medical assistants, and other health professionals. These programs translate CER results into a variety of formats, for example, videos featuring case studies and journal supplements. As of November 30, 2011, 13 approved continuing education programs were completed, including programs on CER results related to hip fractures, hypertension, prostate cancer, breast cancer, heart disease, and diabetes. Regional Dissemination and Partnership Offices. AHRQ awarded a contract in the amount of $8,613,876 to create five regional offices for the purpose of establishing partnerships to facilitate dissemination and use of CER results by regional health care organizations, businesses, unions, and consumer groups. Collaborative efforts are expected to result in local and regional meetings, web conferences, training programs, and distribution of CER results to partner organizations’ memberships. Publicity Center. AHRQ awarded a contract in the amount of $17,999,988 to develop and implement a national strategic communications plan for AHRQ’s CER results. The communications plan calls for the development of national partnerships with consumer, clinician, policymaker, and business audiences; marketing efforts, including the use of social media, focused on disseminating results of research; and creation of new website portals with established sites reaching patients and clinicians. For example, under the contract, partnerships have been established with such organizations as the National Rural Health Association, the National Alliance for Caregiving, the American College of Cardiology, and the American Medical Student Association. 
Along with its efforts to develop additional strategies for disseminating CER results, AHRQ is taking steps to evaluate the effectiveness of these strategies. Specifically, using its Recovery Act funds, AHRQ awarded a contract in the amount of $2,371,179 for the purpose of evaluating some of AHRQ’s dissemination strategies by collecting data about dissemination. Under the contract, information will be collected about changes over time in the level of awareness, understanding, use, and perceived benefits of CER. This information will be gathered from clinicians, patients, consumers, health system decision makers, purchasers, and policymakers. In addition, this evaluation includes plans to collect process and outcomes data for each of the additional dissemination strategies being developed under Recovery Act-funded contracts, including academic detailing, continuing education, regional dissemination, and the national publicity center. AHRQ officials noted that evaluating the impact of dissemination of its CER results is important but also challenging. They noted that clinician practice behavior often changes slowly and is affected by many variables, thereby making it difficult to directly attribute changes to information AHRQ has disseminated. In addition, once CER results are disseminated to target audiences through, for example, AHRQ’s website or one of its educational programs, it is often not feasible to track secondary dissemination from those audiences to others. AHRQ Has Begun to Monitor PCORI and Identify Resources That Could Enable It to Fulfill Its PPACA Responsibilities Related to PCORI While PCORI is in the early stages of development, AHRQ has begun to monitor PCORI’s needs to determine what resources might be needed by AHRQ to fulfill its PPACA responsibilities related to PCORI and identify existing resources that the agency can use to fulfill these responsibilities. 
These responsibilities include broadly disseminating the research findings published by PCORI; developing a publicly available database to collect government-funded evidence and research from public, private, not-for-profit, and academic sources; promoting the timely incorporation of PCORI-generated CER findings into health information technology systems that support clinical decision making; and establishing a process for receiving feedback about the value of information disseminated by AHRQ. AHRQ officials report that they are monitoring PCORI’s needs to determine what resources might be needed by AHRQ to fulfill its PPACA responsibilities related to PCORI. The director of AHRQ serves on PCORI’s Board of Governors and another high-level AHRQ official serves on PCORI’s methodology committee, which allows AHRQ to obtain information on the resources the institute might need and when these resources might be needed. In addition, according to AHRQ officials, the agency has shared information with PCORI members about AHRQ’s existing resources at various PCORI meetings. AHRQ officials reported that they are also in the process of identifying existing resources, including existing capabilities and ongoing projects, that the agency can leverage to fulfill its responsibilities related to PCORI. For example, AHRQ officials are exploring whether contracts the agency awarded to evaluate AHRQ’s CER dissemination efforts could be leveraged to meet the agency’s responsibilities to obtain, on behalf of PCORI, feedback from health care professionals on the CER information disseminated by AHRQ. In addition, AHRQ is currently assessing whether a research database being developed by HHS’s Office of the Assistant Secretary for Planning and Evaluation could be used to, among other things, store and make publicly available CER funded and generated by PCORI. 
In addition, AHRQ has developed spending plans for fiscal years 2011 and 2012 that describe how AHRQ will use the funds it receives from PCORTF to fulfill the agency’s responsibilities related to PCORI. These plans describe proposed FOAs and contract solicitations that would expand opportunities for AHRQ to disseminate CER information through a variety of channels to different target audiences, for example, public service announcements targeting consumers and symposia and publications targeting researchers and health care professionals. AHRQ officials stated that the fiscal year 2011 plan has been approved by OMB, and the fiscal year 2012 plan was under review by HHS as of December 2011. AHRQ officials stated that they have issued one FOA for a project described in the fiscal year 2011 spending plan but, as of December 2011, have not made any awards. Agency Comments We provided a draft of this report to AHRQ for review and comment. AHRQ provided technical comments, which we incorporated where appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of HHS, interested congressional committees, and others. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have questions about this report, please contact me at (202) 512-7114 or at kohnl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. Appendix I: The Agency for Healthcare Research and Quality’s (AHRQ) Mission, Research, Priorities, and Budget AHRQ’s mission is to improve the quality, safety, efficiency, and effectiveness of health care for all Americans. 
The purpose of AHRQ’s research is to help people make more informed decisions and improve the quality of health care services. AHRQ, formerly known as the Agency for Health Care Policy and Research, is 1 of 12 agencies within the U.S. Department of Health and Human Services (HHS). While the National Institutes of Health focuses on biomedical research to prevent, diagnose, and treat disease and the Centers for Disease Control and Prevention focuses on population health and the role of community-based interventions to improve health, AHRQ’s research focus is on long-term and systemwide improvement of health care quality and effectiveness. AHRQ conducts work in five broad focus areas. These areas include comparing effectiveness of treatments; quality improvement and patient safety; health information technology; prevention and care management; and health care value. Comparing the effectiveness of treatments. AHRQ’s comparative effectiveness research (CER) provides patients and physicians with information on which medical treatments work best for a given condition. This includes comparisons of drugs, medical devices, tests, surgeries, or ways to deliver health care in an effort to help patients and their families understand which treatments work best and how their risks compare. Initiatives under this focus area include:
The John M. Eisenberg Center for Clinical Decisions and
The Centers for Education and Research on Therapeutics
The Developing Evidence to Inform Decisions about Effectiveness
Quality improvement and patient safety. AHRQ funds and disseminates research that identifies root causes of threats to patient safety, provides information on the scope and impact of medical errors, and examines effective ways to make system-level changes to help prevent errors. Initiatives under this focus area include:
Patient safety culture assessment tools
Health information technology (health IT). AHRQ provides support to give access to and encourage the adoption of health IT. 
The agency has focused its health IT activities on the following three goals:
Improve health care decision making
Improve the quality and safety of medication management
Prevention and care management. AHRQ translates evidence-based knowledge into recommendations for clinical preventive services. AHRQ initiatives under this focus area include:
The U.S. Preventive Services Task Force
The Patient-Centered Medical Home
The Practice-Based Research Network
Health care value. AHRQ aims to find greater value in health care by producing the measures, data, tools, evidence, and strategies that health care organizations, systems, insurers, purchasers, and policymakers need to improve the value and affordability of health care. Initiatives under this focus area include:
The Medical Expenditure Panel Survey
The Healthcare Cost and Utilization Project
The annual National Healthcare Quality Report and National State snapshots
The Consumer Assessment of Healthcare Providers and Systems
The National Guideline Clearinghouse
The National Quality Measures Clearinghouse
In addition to the above focus areas, AHRQ also conducts crosscutting activities related to quality, effectiveness, and efficiency. Activities include data collection and measurement; dissemination and translation; and program evaluation. In addition, support is provided for the investigator-initiated and targeted research grants and contracts that focus on health services research in the areas of quality, effectiveness, and efficiency. These activities provide the core infrastructure used by the other focus areas. AHRQ staff and budget. AHRQ currently employs approximately 300 staff. The agency’s fiscal year 2010 budget was $402.6 million, of which $270.7 million went to research on health costs, quality, and outcomes. The President’s fiscal year 2012 budget request for AHRQ was $366.4 million, a decrease of approximately $36 million from fiscal year 2010. 
(See table 4 for the funding amounts under AHRQ’s focus areas.) Appendix II: AHRQ CER Grants Awarded Using Recovery Act CER Funds, by Priority Area AHRQ and the HHS Office of the Secretary jointly funded one grant. For purposes of our report, we counted this grant under the number of grants awarded for the HHS Office of the Secretary’s Dissemination and Translation priority area (see table 7). However, the amounts of Recovery Act CER funds the HHS Office of the Secretary and AHRQ awarded to this grant are reflected under the amount awarded for the HHS Office of the Secretary’s Dissemination and Translation priority area and AHRQ’s Evidence Generation priority area. Appendix III: AHRQ CER Contracts Awarded Using Recovery Act CER Funds, by Priority Area AHRQ combined Evidence Synthesis and Gap Identification awards under a single solicitation when announcing the availability of these funds and when making awards because, according to agency officials, having a single solicitation for these two priority areas reduced the amount of work related to these awards, thereby expediting the award process. As a result, AHRQ funded both of these priority areas, but advertised projects and made awards for these priority areas under a single solicitation. 
Appendix IV: AHRQ Mechanisms That Support Dissemination of CER
Types of mechanisms supporting dissemination and examples of AHRQ’s dissemination mechanisms:
Written products
Research reviews and original research reports: These written products draw on completed scientific studies to make comparisons of different health care interventions or summarize original clinical research to explore practical questions about the effectiveness of treatments.
Summary treatment guides for clinicians, consumers, policymakers: Short, plain-language guides summarize the findings of research reviews on the benefits and harms of different treatment options.
Education modules and presentation slides: These resources are for clinicians pursuing continuing education credits and for faculty who are instructing clinicians.
Webcasts: Researchers and clinicians participate in online programs to discuss research findings.
Conference series: Scientific meetings on state-of-the-art concepts in communication, health literacy, and medical decision making.
Audio podcasts: The Healthcare 411 audio podcast series shares news and information with consumers that they can use in health care decision making, through 60-second audio news programs and longer format interviews.
Online videos: AHRQHealthTV provides videos for consumers about a range of health topics on AHRQ’s YouTube channel.
Twitter updates: Short messages are broadcast that can be accessed by computer or mobile phone.
RSS Feeds: Subscribers receive news and alerts about AHRQ programs through their RSS reader.
E-mail updates: Subscribers receive e-mail updates on topics they are interested in.
The Medicaid Medical Directors Learning Network: This is one example of a network formed by AHRQ to create an ongoing collaborative relationship to disseminate AHRQ products, tools, and research to help members make policy and practice decisions related to clinical treatment. 
Impact Case Studies: AHRQ tracks and summarizes how AHRQ-funded research, tools, and products are actually implemented by state governments, medical practices, clinics, and hospitals, and makes summary information available to other potential users.
AHRQ’s website: ahrq.gov provides access to its written products, training programs, and social media tools, as well as useful search functions and other resources.
Health Care Innovations Exchange: The Exchange offers health professionals and researchers searchable tools to access information about evidence-based innovations suitable for a range of health care settings and populations, as well as opportunities to network with other professionals who have implemented these innovations.
Appendix V: Example of Dissemination of Comparative Effectiveness Research by the Agency for Healthcare Research and Quality
Comparative Effectiveness and Safety of Premixed Insulin Analogues in Type 2 Diabetes: A Systematic Review
Type 2 diabetes is an increasingly common chronic disease that occurs in people who have too much glucose in their blood. Blood glucose levels are high either because their cells are resistant to insulin (a hormone that helps convert glucose into energy) or because their pancreas does not produce enough insulin. Insulin analogues are used approximately as commonly as human insulin by diabetics who require insulin to regulate blood glucose levels. Created by genetically modifying human proteins, insulin analogues were developed as an alternative to human insulin to provide tighter control of blood sugar levels. This study compares the effectiveness of insulin analogues with that of traditional human insulin for type 2 diabetics. 
Researchers compared the effectiveness of three kinds of synthetic insulin against their human insulin counterparts, against each other, and against other antidiabetic medications. The report found that insulin analogues are more effective than human insulin for treating certain diabetes-related symptoms such as high blood sugar after meals. However, it also found that human insulin appears to be more effective than insulin analogues in treating other aspects of diabetes, including lowering blood sugar levels when patients go 8 hours or more without eating, typically overnight. To disseminate these results, AHRQ developed the following products:
Consumer guide (for adults)
Clinician’s guide
Webcast and slides (for clinicians)
Audio podcast (for consumers)
Over 90 target organizations, publications, and electronic venues are identified in the marketing plan in the following categories: clinicians, insurers, payers, pharmacy and drugs associations, federal direct and funded medical care programs, consumer-oriented disease organizations, and government. Provider categories targeted include retail and health system pharmacists; family physicians and general internists; pharmacologists; nurse practitioners; physician assistants; and endocrinologists. General media targets include news services such as radio, television news, and major daily newspapers; consumer and advocacy publications; African-American media; and translation to Spanish-only and Hispanic media. Appendix VI: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, E. Anne Laffoon, Assistant Director; Shaunessye Curry; Mary Giffin; Andrea E. Richardson; Lisa Motley; Krister Friday; and Jessica C. Smith made key contributions to this report.
The American Recovery and Reinvestment Act of 2009 (Recovery Act) provided $1.1 billion to the Department of Health and Human Services (HHS) for comparative effectiveness research (CER), which is research that compares different interventions and strategies to prevent, diagnose, treat, and monitor health conditions. Of this amount, HHS’s Agency for Healthcare Research and Quality (AHRQ) received $474 million to support and disseminate the results of CER. GAO was asked to describe issues including the (1) process and criteria AHRQ used to award Recovery Act funds for CER, including steps to coordinate CER awards with other HHS entities in order to avoid unnecessary duplication of effort; and (2) plans AHRQ has for disseminating the results of CER it funded under the Recovery Act. To address these objectives, GAO reviewed relevant documentation, including AHRQ’s policies and procedures for selecting the recipients of grants; internal documents that describe the award of Recovery Act grants and contracts; and Recovery Act contractors’ work plans. GAO also analyzed AHRQ data on the number and type of grants and contracts awarded Recovery Act CER funds. GAO interviewed AHRQ officials on the selection of Recovery Act CER grantees and contractors, including coordination with other HHS agencies that received Recovery Act CER funds, and the plans the agency has to disseminate the results of CER funded by the Recovery Act. AHRQ provided technical comments, which GAO incorporated as appropriate. AHRQ used its standard, competitive review processes and criteria to select the recipients of CER grants and contracts using Recovery Act funds. Specifically, to select the recipients of Recovery Act CER grants, AHRQ used its standard review process that includes peer review of grant applications, the development of funding recommendations by a team of senior officials within AHRQ, and final funding determination by the agency’s director. 
As part of this process, AHRQ used its standard criteria to evaluate grant applications, as well as additional requirements that were specific to each funding opportunity. To select contractors who would receive Recovery Act funds, AHRQ used its standard contracting processes and criteria that are governed by the Federal Acquisition Regulation, which establishes uniform policies for acquisition of supplies and services by executive agencies, and the Public Health Service Act. These processes included an evaluation of all contract proposals using standard criteria adapted to the specific needs of each project. Between February 2009 and September 2010, AHRQ awarded $311 million of its $474 million in Recovery Act CER funds through 110 grants. AHRQ also awarded $161 million of its Recovery Act CER funding through 34 contracts. The contracts and grants AHRQ awarded supported both AHRQ’s agency-specific and HHS’s departmentwide CER priority areas. In an effort to avoid unnecessary duplication of CER awards, AHRQ participated in HHS working groups, developed a CER spending plan, and queried HHS databases to check for duplicative awards. According to AHRQ officials, the agency plans to disseminate the results of Recovery Act-funded CER using a range of existing mechanisms. These mechanisms include written products, training programs, social media tools, and AHRQ’s website. AHRQ is also developing additional strategies to disseminate CER results. AHRQ awarded four contracts using Recovery Act funds totaling approximately $42.3 million to promote innovative approaches for disseminating CER results. A variety of efforts are conducted under these contracts, including efforts to educate clinicians and develop regional dissemination offices.
Background Under the Communications Satellite Act of 1962, the United States created the Communications Satellite Corporation (now known as COMSAT) to develop, alone or in conjunction with foreign entities, a commercial communications satellite system. Subsequently, two treaty organizations were created to provide such services. INTELSAT now comprises 139 member countries and operates 24 satellites providing voice, data, and video communications. Inmarsat was established to provide global maritime communications; since 1985, its services have expanded to the aeronautical sector. Inmarsat operates a global system of eight satellites—four of them operational and four spares. In 1994, Inmarsat established as an affiliate a separate company, ICO Global Communications Limited (ICO), to implement a global system to serve handheld mobile telephones. ICO, which is likely to compete with companies that plan to enter the market, is expected to begin operations by the year 2000. Structure of the Treaty Organizations As treaty organizations, INTELSAT and Inmarsat are made up of parties and signatories. Parties are the national governments that have signed the international agreement. The signatories—the organizations’ owners—are typically government agencies or government-sanctioned monopolistic telecommunications companies. These entities usually control or influence access to telecommunications services within their countries. The signatories are responsible for financing INTELSAT and Inmarsat and have a financial interest in their operations. COMSAT is the signatory for the United States to both organizations and operates as a private corporation subject to U.S. government regulations. COMSAT holds the largest single investment share in each organization—19 percent of INTELSAT and 23 percent of Inmarsat. The signatories, including COMSAT, generally provide a retail—or marketing—function for the organizations within their domestic markets. 
For example, figure 1 shows how one type of service—a basic telephone call between the United States and France—would be provided through INTELSAT’s satellite system. According to COMSAT, INTELSAT and Inmarsat typically operate by consensus, which depends on reaching agreement among member nations worldwide with different perspectives and interests. INTELSAT and Inmarsat May Have Some Competitive Advantages INTELSAT’s and Inmarsat’s ownership structures may provide them with important competitive advantages that could pose barriers to potential competitors. To provide service within a particular country, a satellite system must gain permission from domestic licensing authorities for, among other things, the right to use the necessary spectrum (radio frequencies), the right to establish necessary ground stations to receive satellite signals, and permission to interconnect on the ground with the domestic telephone system. Because many of these licensing authorities or dominant telecommunications companies are the signatories that own the organizations, they may have a financial incentive to favor INTELSAT and Inmarsat, or their affiliates, over other firms when determining who may do business within their countries. A recent analysis has found that the signatories, in addition to receiving a rate of return on their investments, also generally charge large markups on INTELSAT and Inmarsat services, which may be evidence that they have benefited from these competitive advantages. The organizations’ treaty status also provides certain privileges and immunities. Exemption from taxation and immunity from lawsuits give these organizations a financial advantage over other competitors. Immunity from lawsuits, for example, may allow them to act in the market in ways that their competitors cannot under U.S. antitrust laws. INTELSAT and Inmarsat also have easier access to locations in space where satellites can be placed (known as orbital slots) and to spectrum. 
Because these slots and spectrum are scarce resources, easier access to them is an important advantage. In addition, private satellite firms that want to compete with INTELSAT or Inmarsat have been required, under the treaty agreements, to coordinate their business plans with the organizations to ensure that they do the organizations no significant economic harm and cause no technical interference with them. This requirement has meant that the firms have had to share potentially sensitive and proprietary business information with the organizations. According to several agency officials, the tests for economic harm are being phased out, but the tests for technical coordination remain. Both treaty organizations also have relatively easy access to financial capital because they can request it from their signatories as well as through the capital markets. In addition, commercial investors may view these organizations as good risks because of the signatories’ ties to their governments in most countries. According to COMSAT, however, these various factors do not necessarily translate into unfair competitive advantages in the marketplace. In particular, COMSAT believes that while the treaty organizations have some advantages, they also bear responsibilities, such as providing global services at nondiscriminatory rates. U.S. Efforts to Enhance Market Access Because of concerns about market access to foreign countries by U.S. telecommunications companies, including new satellite competitors, the United States is engaged in several efforts to encourage other countries to open their markets to new entrants. For example, within the World Trade Organization, the United States is participating in negotiations for basic telecommunications services to open access to foreign markets. 
In addition, the Federal Communications Commission (FCC), in its efforts to promote competition in international telecommunications, recently proposed formal rules under which foreign-licensed satellite companies could serve the United States only if, among other things, an acceptable level of openness was provided in the foreign-licensed companies’ home markets. Furthermore, the United States has been developing a proposal to restructure INTELSAT and a position paper on Inmarsat, both of which the United States believes will promote competition. Competitive Impact of Restructuring Depends Largely on Two Factors Currently, there is widespread belief among those who favor more competition, such as private satellite companies, and those who favor more flexibility for the treaty organizations, such as some signatories, that changes are necessary in the structure and functioning of INTELSAT and Inmarsat. Some satellite companies, for example, have questioned the continuing need for the two organizations and point out that some private competitors currently offer or will offer global coverage and provide significant services to the developing world. However, according to officials in the U.S. Department of State, most of the member governments and signatories, especially in the developing countries, are concerned that without the treaty organizations, their access to certain basic telecommunications services—including telephone and data services at reasonable rates—may be threatened. Most of the options for enhancing competition have focused on ways to reduce the barriers posed by the structure of INTELSAT and Inmarsat, while preserving some intergovernmental treaty structure. Key factors in developing options to promote competition are the number of new entities that are created and the extent of their ties to the parent organization or its owners. 
Some Changes May Be Warranted in the Way INTELSAT and Inmarsat Function When INTELSAT was formed, satellites were an efficient way to provide basic telephone services worldwide. However, since the mid-1980s, there has been a dramatic increase in the capacity and capabilities of transoceanic fiber optic cables, which can transmit telephone calls. INTELSAT’s share of international telephone service, traditionally the organization’s prime service, has fallen significantly in places where such cables are now available. In response, INTELSAT has focused more intensely on providing other, more technologically advanced services, such as broadcast video. In these growing markets, it faces some competition from new satellite-based companies as well as from many domestic and regional satellite systems that provide services in specific areas. Some of these current providers, and others who hope to enter the market, believe change is needed. They allege that their ability to thrive in the market and to bring more services and lower prices to consumers is limited because INTELSAT continues to dominate the market in some areas. A recent analysis by the U.S. Department of Justice found that in certain areas, INTELSAT currently dominates the market as a result of its large share of transoceanic satellite capacity and its signatories’ ability to keep other competitors out of their domestic markets. COMSAT, in contrast, believes that because of the cumulative effect of increased competition from fiber optic cables and regional and domestic satellite systems, as well as private satellite companies, INTELSAT no longer has substantial market dominance. Some of INTELSAT’s signatories believe that INTELSAT, with its lengthy decision-making process, is not well suited to adapt to new technologies and changing market conditions. As a result, these signatories are also interested in some restructuring of the organization. 
In a desire to expand into new markets, Inmarsat chose to establish ICO to develop and provide satellite services for handheld mobile telephones because, according to U.S. State Department officials, some members believed that an affiliate unencumbered by the structure of a treaty organization could respond to the changing market more effectively. Several U.S. firms have been licensed by the FCC to provide similar services and are seeking access to foreign countries’ markets and the licenses necessary to provide these services globally. But these potential competitors are concerned that Inmarsat’s close relationship to ICO could hinder the development of competition. Some Have Questioned the Continuing Need for the Treaty Organizations, While Others Prefer the Status Quo Because of advances in technology since INTELSAT and Inmarsat were formed and an increase in demand, the private market is capable of and willing to provide many of these services, such as video, data, and mobile telephone services. Many privately financed companies have begun to or would like to provide traditional and advanced services at lower prices. Many companies told us that these services will be available globally and that some companies are particularly targeting their marketing of mobile services to developing countries, which are less likely to have established traditional telephone services. Furthermore, they note that some INTELSAT competitors are currently providing services to the developing world. As a result, some industry and other policy analysts have questioned the continuing need for INTELSAT and Inmarsat. Despite the emergence of private competitors, however, some of the member nations believe that at least some aspects of the treaty organizations continue to be needed. According to officials in the U.S. 
Department of State, many nations believe that it may be desirable to retain some residual form of Inmarsat or another intergovernmental mechanism to ensure that services related to safety and rescue at sea continue to be provided. While the private market is more likely to be capable of providing all the services offered by INTELSAT, a State Department official told us that retaining some residual form of INTELSAT is important, at least for the foreseeable future. In particular, some of the developing countries, which consider that the treaty organizations have provided them with essential services, often do not believe that the private market would provide them with these services. These countries tend to be the most resistant to changing the function or structure of INTELSAT and Inmarsat. A Treasury Department official noted, however, that no country, including a developing country, has ever been refused service by a private company and that each country has the choice of permitting or not permitting service by private companies as well as by the treaty organizations. Additionally, some U.S. companies have expressed concern that a restructuring that does not address barriers to competition could result in worse conditions for potential new competitors. Conditions could worsen, for example, if any new affiliates of the treaty organizations gain the flexibility to become strong competitors while barriers to competition by other firms remain in place. Some firms have stated that they would prefer to leave the treaty organizations as they are rather than restructure them in a way that does not correct the barriers to competition they impose under the current system. Options for Restructuring to Enhance Competition As an alternative to abolishing INTELSAT and Inmarsat, other changes could be made to enhance competition while preserving some of the treaty organizations’ structure. 
For example, some portion of the satellite facilities of each of the two organizations could be privatized. For any option to enhance competition by reducing some of the barriers to competition, it must address the fundamental competitive problems of the present structure, such as (1) the incentives for the signatories to favor any newly created affiliates, in which they have an ownership interest, over other potential competitors for access to their domestic markets; (2) the potential dominance of the market by either a residual treaty organization or any resulting new entities; and (3) the advantages, such as tax privileges, immunity from lawsuits, and easier access to orbital slots and spectrum, currently enjoyed by INTELSAT and Inmarsat. Key to restructuring the treaty organizations with a view to enhancing competition are the number of new entities created and the degree to which they maintain economic ties with the remaining parent organization or its owners. According to economic principles, creating the largest feasible number of new entities may be best from the standpoint of encouraging competition, particularly if domestic and regional satellite systems do not provide adequate competition to INTELSAT in the global market. Additionally, because INTELSAT, in particular, has benefited from its advantages for many years and dominates some markets, restructuring would optimally remove enough of this organization’s assets, such as satellites and associated facilities, to reduce its dominance and to ensure that any newly created entities would not dominate the market. However, the costs of satellite technology may require that a firm have a significant amount of assets in order to be efficient and survive. Although there is no clear agreement on the smallest size a global satellite firm can be and remain efficient, the number of new entities that can be created and sustained is limited by these size considerations. 
The second important factor in restructuring concerns the economic and cultural ties between the residual treaty organization and any new affiliates. Economic principles suggest that to encourage competition, it would be best to minimize the relationship between these entities. If the economic ties between a residual treaty organization and the entities that are spun off from it are strong, the barriers to competition could be exacerbated. Thus, restructuring would likely result in more competition if the treaty organization or its signatories (1) have little, if any, financial stake in or continuing business relationship with any new entities and (2) have no mutual members of the boards of directors. Also, competition is more likely to be enhanced if the parent organization and its signatories have minimal control during any transition period from the current status to a new status. Among other things, if control is kept at a minimum, the member governments and signatories would have little incentive to favor the new entities over other competitors in their domestic markets. Even if the economic relationship between the parent and any affiliates is appropriately broken, there is concern that it could take some time for governments and signatories to provide open access to their markets because they are used to dealing mostly with the treaty organizations and may continue to wish to do so. Inmarsat’s Owners May Have Incentives to Aid ICO When Inmarsat created ICO, it provided an example of how a treaty organization could restructure by forming a single affiliate whose ownership was primarily restricted to the parent organization and its signatories. That approach to restructuring may not enhance competition because of the shared ownership arrangement between the parent and affiliate. Inmarsat and its signatories have both the incentives and the ability to provide ICO with market advantages over its potential competitors. 
These advantages may include access to member countries’ markets and financial benefits, such as more readily available financing. The United States supported the formation of ICO on condition that its structure include certain principles that favor competition. Inmarsat’s member countries agreed to many of these principles. Some U.S. officials have been concerned that ICO’s organizing documents do not incorporate the Inmarsat-approved principles in a way that binds ICO to applying them, and they are working toward ensuring that those principles are incorporated. Also, Inmarsat is considering a restructuring proposal that raises concerns among potential competitors about the competitive impact of the relationship of a privatized Inmarsat to ICO. ICO’s Owners Control Essential Access to Markets Inmarsat and those signatories that chose to invest directly in ICO hold a majority interest and thus have a significant vested interest in the organization’s financial success because they share in ICO’s profits. The initial sale of ICO’s shares, which was open only to Inmarsat and its signatories, raised a total of $1.4 billion. Inmarsat’s portion of that total amounts to about 10.6 percent of the voting shares. Nearly 60 percent of Inmarsat’s 79 signatories took advantage of the opportunity to invest directly in ICO and collectively hold well over 70 percent of ICO’s voting shares. A public offering may occur in the future, but external investment, which is authorized only at the discretion of ICO’s Board of Directors, is limited to 30 percent of the voting shares. As of June 1996, there was one external investor—the builder of new satellites for ICO—who currently holds a very small percentage of ICO’s shares. As noted earlier, Inmarsat’s signatories are typically the government authorities or dominant telecommunications providers that control or influence access to their domestic telecommunications markets. 
Market access is essential for the success of any provider of global satellite services. With their ownership interest in ICO, these signatories may have the incentive to grant such access to ICO and to preclude or inhibit access to other competitors, even though the competitors might offer services at lower prices. Moreover, as an official of the U.S. Department of Commerce noted, Inmarsat’s signatories had an incentive to invest through ICO because of the 17-percent rate of return they would earn on their investment, even though ICO has not yet generated revenues. ICO’s Ownership Structure May Confer Financial Advantages ICO’s shared ownership with Inmarsat may make financing more readily available to ICO than it is to competitors. It has also raised concerns that prohibited cross-subsidies may occur. Ownership by Inmarsat and its signatories may give ICO financial benefits in the form of more readily available financing than potential competitors are likely to enjoy. For example, ICO could find it easier to obtain future commercial financing than other satellite companies do because of the implicit government backing associated with its ownership. Furthermore, since the signatories are typically government agencies or government-sanctioned monopolies, they may have financial assets readily available for investment in ICO. Cross-subsidies could give ICO a financial advantage in competing with other companies by allowing ICO to offer lower prices than it could otherwise afford. Although cross-subsidization was prohibited when Inmarsat’s member countries authorized the formation of ICO, the shared ownership of Inmarsat and ICO raises the risk that cross-subsidies could occur. For example, the U.S. Departments of State and Commerce, in a September 1995 letter to the FCC, expressed concerns about whether contractual arrangements between Inmarsat and ICO were conducted with sufficient independence to ensure that there was no cross-subsidization. 
Because of Inmarsat’s protected status as an international treaty organization, the existence of cross-subsidies might be difficult to confirm because Inmarsat has immunity from antitrust actions and other lawsuits. However, it is not clear whether any of the protections Inmarsat enjoys would apply to its business transactions with ICO. U.S. Approval of ICO Was Conditional The United States agreed to the formation of ICO on condition that several principles of structural separation be met to promote fair competition for both ICO and the other companies that want to offer the same kinds of services. Those principles included (1) nondiscriminatory access to the countries’ domestic markets for all mobile satellite communications networks, (2) no transfer of spectrum or orbital slots from Inmarsat to ICO, (3) no cross-subsidies from Inmarsat, and (4) no transfer of treaty-based privileges and immunities to ICO. In December 1994, Inmarsat’s member governments agreed to the formation of ICO if certain conditions were met; those conditions incorporated many but not all of the principles the United States had sought to include in order to ensure structural separation. In their September 1995 letter to the FCC, the Departments of State and Commerce concluded that ICO’s organizing documents did not fully incorporate the conditions that Inmarsat’s members had agreed to. State and Commerce asked the FCC to delay authorization of COMSAT’s share of Inmarsat’s investment in ICO until it is clear that ICO is bound by the principles Inmarsat adopted. They also requested that COMSAT (1) state on the record that ICO is bound by the principles approved by Inmarsat and (2) provide supporting documentation. In its comments on a draft of this report, COMSAT said that it had reported to the relevant U.S. 
government agencies in late May 1996 that at ICO’s annual meeting on May 28, 1996, the shareholders approved an amendment to ICO’s organizing documents that fully incorporates these principles. COMSAT stated that it expects to provide the supporting documentation in the near future. Inmarsat’s Restructuring Raises Concerns About Future Relationship With ICO Inmarsat is reviewing proposals to restructure so that it may respond to commercial opportunities more readily than its members feel its treaty structure now allows. Under one proposal, Inmarsat would be devolved into a privately owned international public corporation. According to Inmarsat officials, the current version of that proposal would transfer all of Inmarsat’s satellites to the new corporation, while a smaller intergovernmental organization with more limited responsibilities would be retained to ensure the provision of services related to safety and rescue at sea. The relationship that ICO will have to a restructured Inmarsat is of concern to some potential competitors. Inmarsat is on record as being interested in the possibility of a future merger of ICO with a restructured Inmarsat. However, COMSAT, the U.S. signatory, stated that it has recently confirmed to the executive branch of the U.S. government that it does not support such a merger in any foreseeable time frame and that it considers such a merger highly unlikely because the business plans of ICO and a restructured Inmarsat differ. Ownership ties between ICO and a largely privatized Inmarsat could create a company with significant advantages in the market that would be free of any of the decision-making or operational burdens imposed by an intergovernmental structure. Such ownership ties might reinforce the incentives of Inmarsat’s signatories to open their domestic markets to ICO and the reorganized Inmarsat but not necessarily to potential competitors. 
Recent Proposals for Restructuring INTELSAT Two proposals for restructuring INTELSAT provide examples of options that retain a residual treaty organization while distributing portions of INTELSAT’s assets to one or more entities that are able to compete more freely in the market. Other countries have also made suggestions for change. To help ensure that any restructuring of INTELSAT would improve competition, the U.S. government has developed a proposal that would separate INTELSAT into two entities—a residual intergovernmental entity and a new affiliate. The affiliate would focus on providing more advanced services and would be owned primarily by private investors. Another proposal, which has been supported by a coalition of several U.S. satellite companies, calls for separating INTELSAT into at least three entities: a residual intergovernmental entity and at least two affiliates. The degree to which either of these proposals can help to enhance competition depends largely on whether it can (1) encourage other countries to open their markets to new entrants and (2) diminish the large share of transoceanic capacity that INTELSAT currently holds. The United States Has Proposed Restructuring INTELSAT With Inmarsat’s establishment of ICO as a backdrop, the United States has developed a proposal to restructure INTELSAT in order to ensure continued services worldwide at nondiscriminatory prices and to provide a more competitive marketplace. The key features of the proposal, aimed at reducing INTELSAT’s dominant market position and reducing the signatories’ incentive to favor the newly created affiliate over other companies in their domestic markets, include the following: INTELSAT would be separated into two companies, each of which would receive about half of the satellites. The residual INTELSAT is intended to focus on traditional services, such as basic telephone service, while the affiliate is intended to focus on newer services, such as video broadcast. 
After a transition period of about 2-3 years, fully 80 percent of the affiliate would be owned by interests other than INTELSAT or its signatories. The proposal requires that (1) the affiliate not have any privileges and immunities, (2) business transactions between the two companies take place as if the entities had no economic relationship, (3) the affiliate be subject to competition laws in the countries in which it operates, and (4) no special access to orbital slots be available to the affiliate. From a competitive standpoint, separating INTELSAT into two companies is designed, in part, to reduce the size of the resulting entities relative to other competitors. Currently, INTELSAT has 24 satellites in space and 7 empty orbital slots. The affiliate, with which other competitors would most directly compete, would have about half of the INTELSAT satellites. In comparison, one private competitor, PanAmSat, plans to grow to an eight-satellite operation within a few years. Moreover, since INTELSAT’s signatories would be able to own, together, only 20 percent of the affiliate, the proposal is designed to reduce their financial incentive, as the telecommunications authorities in their own countries, to favor INTELSAT’s affiliate over other new entrants when making decisions about access to their domestic markets. Officials of several of the federal agencies that helped develop this proposal acknowledged that other options might have done more to promote competition, but they did not believe that other countries would have supported such options. In particular, State Department officials told us that INTELSAT members were unlikely to accept an option that resulted in the formation of more than one new affiliate or an affiliate with ownership by the signatories of less than 20 percent. 
They also said that because of these concerns, the proposal they put forth is likely to be the most competitively oriented proposal acceptable to INTELSAT’s member governments and signatories.

U.S. Satellite Coalition Has Suggested an Alternative Restructuring Design

A coalition of several U.S. satellite companies that had expressed interest in privatizing the treaty organizations entirely has more recently put forth a proposal for an alternative restructuring design. As with the U.S. proposal on INTELSAT, this proposal requires that any new affiliate gain no privileges or immunities or any other economic benefits from its relationship with INTELSAT. Under the proposal, INTELSAT would be separated into at least three parts, including at least two affiliates that are each owned at least 50 percent by entities other than INTELSAT or its signatories. Additionally, each signatory would be able to invest in one or the other affiliate, but not in both. Proponents of this proposal believe that such an option would reduce concerns about market domination more than the U.S. proposal does because each resulting entity would be smaller than it would be under the U.S. proposal. Moreover, some market observers have suggested that under this option, countries’ telecommunications authorities may align themselves with one of the affiliates. Signatories may find that to do business with certain other countries, they may have to allow both affiliates to serve their domestic markets.

Competition Would Be Enhanced by More Affiliates and Reduced Ownership by INTELSAT’s Signatories

The degree to which either of these proposals can help to enhance competition depends largely on whether it can encourage other countries to open their markets to new entrants. As discussed earlier, competition can be enhanced by creating more entities out of INTELSAT as long as each is technically and economically viable on its own.
In this regard, the industry’s proposal may be more likely to reduce the potential for INTELSAT or the new affiliates to dominate the market because each entity would be smaller in size. Having more affiliates may also help to reduce the incentive that countries’ telecommunications authorities may have to favor INTELSAT over other competitors. Some analysts believe that if countries open their markets to the two competitors envisioned under the proposal, those countries may then be more likely to open their markets to private competitors. As noted earlier, it is best for competition if the treaty organization and its owners have little, if any, financial stake in, or continuing business relationship with, any new entities. The U.S. proposal may come closer to reaching this goal because it allows the signatories to own only 20 percent of the new entity, while the industry’s proposal allows the signatories to own up to 50 percent of one of the new entities. However, another aspect of the industry’s proposal mitigates the effects of the higher level of ownership by the signatories: that is, the requirement that each signatory invest in only one or the other new affiliate. Signatories may find that to do business with certain other countries, they will have to allow entry into their domestic markets by the INTELSAT affiliate in which they have not invested, and the need to allow both affiliates into their markets may induce countries to widen access to other entrants. Even the lower 20-percent ownership level proposed by the United States may not be enough to ensure that INTELSAT’s signatories have little influence over the new affiliate. Several U.S. regulations regarding ownership levels indicate that potential control or significant influence may occur at lower levels of ownership, such as 10-20 percent. As such, the group that developed the U.S.
proposal stated that the 20-percent limit on the amount of the affiliate that the signatories could own was an important upper limit to ensure that INTELSAT and its signatories have minimal influence on any new entities created.

Conclusions

The treaty organizations have benefited from their intergovernmental status and a variety of advantages designed to help ensure their success in achieving worldwide satellite communications. However, advances in technology and increases in demand have transformed the industry into one that may provide profitable business opportunities for private firms. Having achieved their original missions, the treaty organizations, as structured, may now be impeding the flourishing of a private market and the benefits it can bring to consumers. Making changes to the present structure of the treaty organizations could be difficult because doing so would likely depend on achieving consensus among member nations around the world that have a broad range of perspectives and interests. Along with a goal of ensuring continued global service, a primary interest of the United States is the promotion of competition, which could provide many new options for international satellite services. Many other members of the treaty organizations are concerned about guaranteeing the availability of the basic services now provided by each of the treaty organizations and thus may not be supportive of the kinds of changes that would most advance competition. Over time, however, consumers worldwide would benefit from increased competition in the marketplace.

Agency Comments

We provided copies of a draft of this report for review and comment to the National Economic Council and the Office of Science and Technology Policy in the Executive Office of the President; the Departments of State, Commerce, Justice, and the Treasury; COMSAT, the U.S. signatory to the treaty organizations, through the U.S.
Department of State; the FCC; and the Alliance for Competitive International Satellite Services (ACISS), a coalition of private satellite providers. The draft was also reviewed by a representative of the Council of Economic Advisors. Executive Branch, FCC, and ACISS representatives generally agreed with the report’s findings and balance and provided us with several clarifications and more current information, which we have incorporated as appropriate. In written comments, which are presented in full in appendix I, ACISS commended the report for its balanced and thorough treatment of the complex issues surrounding the proposed privatization of Inmarsat and INTELSAT. COMSAT also provided written comments on our draft report. COMSAT officials were concerned that they had provided us with a variety of information that we did not include in our report. They also stated that we did not accurately characterize the nature of the competition facing INTELSAT in the international communications market and that we had focused our discussion of certain restructuring proposals solely on their competitive effect, to the exclusion of other important issues. We used documents obtained from COMSAT and a variety of other sources as background information in the preparation of this report. Because this report is an overview of issues related to the competitive structure of the international satellite market, we did not think that all of the documents provided by COMSAT contained information necessary for the report. Our report also clearly discusses the nature of the competition facing INTELSAT. COMSAT is correct in saying that this report focuses on the potential competitive impacts of various approaches for restructuring the treaty organizations; that is a goal of the U.S. proposal and is the issue we were requested to review. In response to COMSAT’s concern, we have noted other goals of the U.S. proposal in the report. 
COMSAT’s complete comments and our detailed responses to them are presented in appendix II.

Scope and Methodology

This report is based on our analysis and our review of documents and other information obtained from the National Economic Council and the Office of Science and Technology Policy in the Executive Office of the President; the FCC; the Departments of State, Commerce, Justice, and the Treasury; COMSAT; INTELSAT; and Inmarsat. We also obtained information from experts from the Council of Economic Advisors and from ACISS as well as from representatives of several companies operating, licensed to operate, or applying for licensing to establish their own satellite systems. We conducted our work from May through June 1996 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to interested congressional committees; the Chairman of the National Economic Council; the Director of the Office of Science and Technology Policy; the Chairman of the FCC; the Secretaries of State, Commerce, Justice, and the Treasury; and the Chairman of the Board of COMSAT. We will also make copies available to others upon request. Please call me at (202) 512-2834 if you or your staff have any questions. Major contributors to this report are listed in appendix III.

Comments From ACISS

Comments From COMSAT

The following are GAO’s comments on COMSAT’s letter dated June 27, 1996.

GAO’s Comments

1. We believe that our report provides a broad overview of the establishment of INTELSAT. As our report notes, the Communications Satellite Act of 1962 states that a policy of the United States was to establish a global system in conjunction and in cooperation with other countries, and one of the objectives for such a system was to contribute to world peace and understanding.
The statute also authorized COMSAT to “plan, initiate, construct, own, manage, and operate itself or in conjunction with foreign governments or business entities a commercial communications satellite system . . . .” 2. Our description of the types of services that INTELSAT was formed to provide does not imply that INTELSAT was precluded from providing other kinds of services including video broadcast. At the time the treaty organization was formed, however, the bulk of the services provided could be expected to be basic telephone and data services. 3. The U.S. proposal for the restructuring of INTELSAT has several objectives, and we have added references to other objectives to clarify that point. However, we were asked to review only the potential competitive impact of different kinds of restructuring approaches. In that context, we reviewed the U.S. proposal as one example of the various ways in which restructuring could occur. 4. Other than quoting COMSAT, our report makes no reference to unfair competitive advantage in the marketplace. Our discussion of possible competitive advantages focuses on the treaty organizations, not the signatories. Furthermore, we state only that these factors may be, or may contribute to, potential competitive advantages. 5. We have clarified the issue of the treaty organizations’ access to scarce orbital slots and spectrum by explaining in our report their comparative ease of access rather than characterizing their access as preferential. The treaty organizations, like private companies, must coordinate access to fixed orbital locations and spectrum through the International Telecommunications Union (ITU). However, according to the Federal Communications Commission (FCC), which processes INTELSAT’s applications, submission of the applications through the host country is a formality, and applications are forwarded to the ITU automatically. 
(INTELSAT is headquartered in the United States; Inmarsat’s applications are processed through the United Kingdom, where that organization is headquartered.) The applications of private U.S. companies, on the other hand, are subject to FCC’s review and approval before being submitted to the ITU. 6. We agree that INTELSAT is changing its coordination requirements and will ultimately eliminate the requirement for economic coordination set out in Article XIV(d) of the treaty agreement, and we have added a discussion of that issue in the report. However, the treaty agreement has not yet been amended to eliminate this requirement. Furthermore, not only is there no effort to eliminate the requirement for technical coordination, but, according to FCC officials, a recent INTELSAT vote to deny a U.S. company successful technical coordination was based on criteria not previously applied to technical coordinations. Also, satellite companies we spoke with that have undergone the coordination processes told us that they consider the information they had to submit for this process to be sensitive and proprietary. 7. Our report clearly points out that INTELSAT has faced increasing competition from fiber optic cables. However, fiber cables are not available to all countries, nor are they able to provide certain types of services. Moreover, because other signatories (not COMSAT) to INTELSAT and Inmarsat are often part owners of fiber cables, INTELSAT’s reduced market share in the markets where it competes with fiber may not necessarily imply that strong price competition has emerged. 8. COMSAT has concerns about our characterization of competition for advanced services as “limited” and points out that there are many domestic and regional satellite systems providing services within their coverage area. We agree that there are domestic and regional satellite competitors and have so noted in our report. 
There is some disagreement among analysts regarding the degree to which these systems provide meaningful competition on a global basis. In particular, some of these systems are owned by signatories to INTELSAT, reducing the likelihood that they would provide significant pressure on pricing. We have deleted the word “limited” from that section of the report. We believe that in time there will be more fully global providers. For example, currently Orion and Columbia are less than global providers, but these firms plan to expand their systems’ coverage. Additionally, several U.S. domestic satellite providers are expected to expand their systems’ coverage globally in the future. 9. COMSAT’s point that the signatories to INTELSAT often are part-owners of regional or domestic systems would seem to reinforce our belief that such systems may not represent additional competition, since their overlapping ownership may reduce the likelihood that they would compete significantly against one another. 10. Our report does point out some of the difficulties that INTELSAT faces that its competitors do not have. For example, our report notes that according to U.S. agency officials, some members of INTELSAT believe it may not be able to respond easily to changes in the market as a result of the difficult decision-making process entailed in the intergovernmental structure. 11. While restructuring these organizations may not solve all of the problems of market access, restructuring in a way that lessens the incentives of foreign governments to favor the treaty organizations may have an impact on market access. Moreover, as we note in the report, the United States is engaged in a number of activities aimed at encouraging open access. 12. As COMSAT notes, separate satellite providers such as PanAmSat have been able to get access in many countries. 
However, to be a full global provider, a company may need to gain access into nearly all countries to provide many different types of services. Companies told us that they want to provide global coverage, and some are closer to gaining the necessary access than others. Some of the companies that COMSAT mentions as having gained access to many countries are not yet providing services, and it is not clear what level of access they will achieve. Since some countries do limit entry, particularly regarding the provision of certain types of services (most notably basic telephone service), concerns about access remain. Moreover, even if a company eventually gains significant access, a continuing concern would be the time, effort, and expense that it may take that company to do so. 13. We acknowledge the receipt of new information on ICO’s recent efforts to work cooperatively with U.S. Big LEO (low-earth orbiting satellite systems, which are emerging to provide mobile services) companies on creating a competitive regulatory environment. We believe that this recent development is ancillary to the focus of restructuring the treaty organizations and therefore have not added this information to our report. 14. In discussing the options for restructuring, our report speaks generally about certain issues and specifically about two proposals. As agreed with our requester, we focused on the U.S. and the industry proposals and did not attempt to provide a census of the array of proposals for restructuring INTELSAT. 15. Our request was to examine issues related to restructuring with regard to how competition would be affected. Nevertheless, we have clarified in the report that the U.S. proposal has additional goals. 16. We have revised the report to read that the affiliate will have roughly half of the INTELSAT satellites. 17. The draft named the companies that are part of the coalition in a footnote. 18.
With regard to access to capital, both INTELSAT and Inmarsat contain provisions in their operating agreements requiring the signatories to provide capital as decided by the respective signatory decision-making bodies within the treaty organizations. Furthermore, as noted in our report, the affiliation of the treaty organizations with governments may make any public financing they seek easier to get than similar public investment in companies that cannot provide implicit governmental support. While COMSAT states that two potential competitors have now acquired portions of their needed funding, according to a representative of the industry coalition, companies that want to compete with INTELSAT and Inmarsat, including the two cited by COMSAT, have had difficulty in obtaining the needed level of financing through both public offerings and efforts to secure debt financing. 19. With regard to the incorporation of U.S.-proposed principles into ICO’s formation, the Departments of State and Commerce said in their September 1995 letter to the FCC that the principles accepted by Inmarsat’s member governments included “incorporation of many but not all of the U.S. Party’s principles and structural separation elements.” State and Commerce concluded in that letter that ICO’s organizing documents did not fully conform with the requirements of Inmarsat’s member governments and recommended that the FCC not approve COMSAT’s application until COMSAT states on the record that ICO is also bound by these principles and provides supporting documentation. We have updated our report to reflect recent developments on this issue. As COMSAT notes, however, the supporting documentation has yet to be provided to the Executive Branch of the U.S. government. 20. We have updated our report to include information on events that occurred after our draft report had been distributed for review and comment, including COMSAT’s confirmation to the Executive Branch of the U.S. 
government that it does not support a merger between ICO and a restructured Inmarsat in any foreseeable time frame.

Major Contributors to This Report

Resources, Community, and Economic Development Division

Office of the Chief Economist
Joseph Kile
GAO provided information on the potential competitive impact of restructuring the international satellite organizations, focusing on: (1) possible alternative approaches to enhance market access; (2) the affiliate created by the International Maritime Satellite Organization (Inmarsat) to provide new services; and (3) proposals for restructuring the International Telecommunications Satellite Organization (INTELSAT). GAO found that: (1) there is widespread belief that changes are necessary in the structure and functions of INTELSAT and Inmarsat; (2) many satellite companies believe that private competitors already offer or will offer better global services at lower rates to the developing world; (3) most member governments and signatories in developing countries are concerned that their access to certain basic telecommunications services may be threatened without the presence of treaty organizations; (4) although most of the options for restructuring the organizations favor enhancing competition by dismantling the organizations or creating affiliates, the competitive impact of these options depends on how these organizations are structured, the number of entities created, and the degree of parent organization ownership; (5) the new Inmarsat affiliate may have competitive advantages over other potential competitors; (6) although the United States originally supported the creation of the Inmarsat affiliate, the United States and its signatory are pursuing action to ensure that it adheres to certain principles that favor competition; (7) restructuring INTELSAT into two entities and limiting the amount of signatory ownership in those entities to 20 percent could improve the competitiveness of the global telecommunications market; (8) another proposal supported by a coalition of U.S. 
satellite firms favors establishing two new private companies in addition to a scaled-down parent organization; (9) the effect on competition of either proposal depends on whether INTELSAT market dominance can be reduced and new companies are allowed to gain entrance into foreign telecommunications markets; and (10) changing the present structure of the treaty organizations would likely depend on reaching a consensus among world member nations that have broad perspectives and interests.
Internal Control Weaknesses in DHS’s Biometric Transportation ID Program Hinder Efforts to Ensure Security Objectives Are Fully Achieved

DHS has established a system of TWIC-related processes and controls. However, internal control weaknesses governing the enrollment, background checking, and use of TWIC potentially limit the program’s ability to meet its stated mission needs or provide reasonable assurance that access to secure areas of MTSA-regulated facilities is restricted to qualified individuals. Specifically, internal controls in the enrollment and background checking processes are not designed to provide reasonable assurance that (1) only qualified individuals can acquire TWICs; (2) adjudicators follow a process with clear criteria for applying discretionary authority when applicants are found to have extensive criminal convictions; or (3) once issued a TWIC, TWIC holders have maintained their eligibility. To meet the stated program purpose, TSA’s focus in designing the TWIC program was on facilitating the issuance of TWICs to maritime workers. However, TSA did not assess the internal controls in place to determine whether they provided reasonable assurance that the program could meet defined mission needs for limiting access to only qualified individuals. For example, controls that the TWIC program has in place to identify the use of potentially counterfeit identity documents are not used to routinely inform background checking processes. Additionally, controls are not in place to determine whether an applicant has a need for a TWIC. For example, regulations governing the TWIC program security threat assessments require applicants to disclose their job description and location(s) where they will most likely require unescorted access, if known, among other things. However, TSA enrollment processes do not require applicants to provide this information.
In addition, TWIC program controls are not designed to require that adjudicators follow a process with clear criteria for applying discretionary authority when applicants are found to have extensive criminal convictions. Being convicted of a felony does not automatically disqualify a person from being eligible to receive a TWIC; however, prior convictions for certain crimes are automatically disqualifying. For example, offenses such as espionage or treason would permanently disqualify an individual from obtaining a TWIC. Other offenses, such as murder or the unlawful possession of an explosive device, while categorized as permanent disqualifiers, are also eligible for a waiver under TSA regulations. These offenses might not permanently disqualify an individual from obtaining a TWIC if TSA determines that an applicant does not represent a security threat. As of September 8, 2010, the agency reported 460,786 cases where the applicant was approved, but had a criminal record based on the results from the FBI. This represents approximately 27 percent of individuals approved for a TWIC at the time. Although TSA has the discretion and authority to consider the totality of an individual’s criminal record, including the existence of (1) extensive criminal convictions, (2) criminal offenses not defined as a permanent or interim disqualifying criminal offense, such as theft or larceny, and (3) certain periods of imprisonment, TSA has not developed a definition for what extensive foreign or domestic criminal convictions means, or developed guidance to ensure that adjudicators apply this authority consistently. In commenting on our report, DHS concurred with our related recommendation, and consequently may address this weakness as part of its efforts to correct internal control weaknesses in the TWIC program. Further, TWIC program controls are not designed to provide reasonable assurance that TWIC holders have maintained their eligibility once issued TWICs. 
For example, controls are not designed to determine whether TWIC holders have committed disqualifying crimes at the federal or state level after being granted a TWIC. Although existing policies may hamper TSA’s ability to check FBI-held fingerprint-based criminal history records for the TWIC program on an ongoing basis after TWIC issuance, TSA has not explored alternatives for addressing this weakness, such as informing facility and port operators of this weakness and identifying solutions for leveraging existing state criminal history information, where available. In addition, controls are not designed to provide reasonable assurance that TWIC holders continue to meet immigration status eligibility requirements. For example, if a TWIC holder’s stated period of legal presence in the United States is about to expire or has expired, the TWIC program does not request or require proof from TWIC holders to show that they continue to maintain legal presence in the United States. Additionally, although it has regulatory authority to do so, the program does not issue TWICs for a term less than 5 years to match the expiration of a visa. Internal control weaknesses in TWIC enrollment, background checking, and use could have contributed to the breach of selected MTSA-regulated facilities during covert tests conducted by our investigators. During these tests at several selected ports, our investigators were successful in accessing ports using counterfeit TWICs, authentic TWICs acquired through fraudulent means, and false business cases (i.e., reasons for requesting access). Our investigators did not gain unescorted access to a port where a secondary port-specific identification was required in addition to the TWIC. TSA and Coast Guard officials stated that the TWIC card alone is not sufficient and that the cardholder is also required to present a business case. 
However, our covert tests demonstrated that an authentic TWIC and a legitimate business case were not always required in practice. Prior to fielding the program, TSA did not conduct a risk assessment of the TWIC program to identify program risks and the need for controls to mitigate existing risks and weaknesses, as called for by internal control standards. Such an assessment could help provide reasonable assurance that control weaknesses in one area of the program do not undermine the reliability of other program areas or impede the program from meeting mission needs. TWIC program officials told us that control weaknesses were not addressed prior to initiating the TWIC program because they had not previously identified them, or because they would be too costly to address. However, as we noted in our report, officials (1) did not provide documentation to support their cost concerns and (2) did not complete an assessment of whether they needed to implement additional compensating controls or of the risks associated with not correcting for existing internal control weaknesses. In our May 2011 report, we recommended that the Secretary of Homeland Security perform an internal control assessment of the TWIC program by (1) analyzing existing controls, (2) identifying related weaknesses and risks, and (3) determining cost-effective actions needed to correct or compensate for those weaknesses so that reasonable assurance of meeting TWIC program objectives can be achieved. This assessment should consider, among other things, the weaknesses we identified in our report. DHS officials concurred with our recommendation.

TWIC’s Effectiveness at Enhancing Security Has Not Been Assessed, and the Coast Guard Lacks the Ability to Assess Trends in TWIC Compliance

DHS asserted in its 2009 and 2010 budget submissions that the absence of the TWIC program would leave America’s critical maritime port facilities vulnerable to terrorist activities.
However, to date, DHS has not assessed the effectiveness of TWIC at enhancing security or reducing risk for MTSA-regulated facilities and vessels. Further, DHS has not demonstrated that TWIC, as currently implemented and planned with card readers, is more effective than prior approaches used to limit access to ports and facilities, such as using facility-specific identity credentials with business cases. According to TSA and Coast Guard officials, because the program was mandated by Congress as part of MTSA, DHS did not conduct a risk assessment to identify and mitigate program risks prior to implementation. Further, according to these officials, neither the Coast Guard nor TSA analyzed the potential effectiveness of TWIC in reducing or mitigating security risk—either before or after implementation—because they were not required to do so by Congress. However, internal control weaknesses raise questions about the effectiveness of the TWIC program. Moreover, as we have previously reported, Congress also needs information on whether and in what respects a program is working well or poorly to support its oversight of agencies and their budgets, and agencies’ stakeholders need performance information to accurately judge program effectiveness. Therefore, we recommended in our May 2011 report that the Secretary of Homeland Security conduct an effectiveness assessment that includes addressing internal control weaknesses and, at a minimum, evaluates whether use of TWIC in its present form and planned use with readers would enhance the posture of security beyond efforts already in place given costs and program risks. DHS concurred with our recommendation. Further, executive branch requirements provide that prior to issuing a new regulation, agencies are to conduct a regulatory analysis, which is to include an assessment of costs, benefits, and risks. 
Therefore, DHS is required to issue a new regulatory analysis for its proposed regulation on the use of TWIC with biometric card readers. Conducting a regulatory analysis using the information from the internal control and effectiveness assessments could better inform the new regulatory analysis and could help DHS identify and assess the full costs and benefits of implementing the TWIC program. Therefore, in our May 2011 report, we recommended that the Secretary of Homeland Security use the information from the internal control and effectiveness assessments as the basis for evaluating the costs, benefits, security risks, and corrective actions needed to implement the TWIC program. This should be done in a manner that will meet stated mission needs and mitigate existing security risks as part of the regulatory analysis being completed for the new TWIC biometric card reader regulation. DHS concurred with our recommendation. Finally, the Coast Guard’s approach for monitoring and enforcing TWIC compliance nationwide could be improved by enhancing its collection and assessment of related maritime security information. For example, the Coast Guard tracks TWIC program compliance, but the processes involved in the collection, cataloguing, and querying of information cannot be relied on to produce the management information needed to assess trends in compliance with the TWIC program or associated vulnerabilities. The Coast Guard uses its Marine Information for Safety and Law Enforcement (MISLE) database to monitor activities related to MTSA-regulated facility and vessel oversight, including observations of TWIC-related deficiencies. Coast Guard officials reported that they are making enhancements to the MISLE database and plan to distribute updated guidance on how to collect and input information. However, as of May 2011, the Coast Guard had not yet set a date for implementing these changes. 
Further, these enhancements do not address all weaknesses identified in our report that hamper the Coast Guard’s efforts to conduct trend analysis of the deficiencies as part of its compliance reviews. Therefore, in our May 2011 report, we recommended that the Secretary of Homeland Security direct the Commandant of the Coast Guard to design effective methods for collecting, cataloguing, and querying TWIC-related compliance issues to provide the Coast Guard with the enforcement information needed to assess trends in compliance with the TWIC program and identify associated vulnerabilities. DHS concurred with our recommendation. As the TWIC program continues on the path to full implementation—with potentially billions of dollars needed to install TWIC card readers in thousands of the nation’s ports, facilities, and vessels at stake—it is important that Congress, program officials, and maritime industry stakeholders fully understand the program’s potential benefits and vulnerabilities, as well as the likely costs of addressing these potential vulnerabilities. The report we are releasing today aims to help inform stakeholder views on these issues. Chairman Rockefeller, Ranking Member Hutchison, and Members of the Committee, this concludes my prepared testimony. I look forward to answering any questions that you may have.
This testimony discusses credentialing issues associated with the security of U.S. transportation systems and facilities. Securing these systems requires balancing security to address potential threats while facilitating the flow of people and goods that are critical to the U.S. economy and international commerce. As we have previously reported, these systems and facilities are vulnerable and difficult to secure given their size, easy accessibility, large number of potential targets, and proximity to urban areas. The Maritime Transportation Security Act of 2002 (MTSA) required regulations preventing individuals from having unescorted access to secure areas of MTSA-regulated facilities and vessels unless they possess a biometric transportation security card and are authorized to be in such an area. MTSA further required that biometric transportation security cards be issued to eligible individuals unless it is determined that an applicant poses a security risk warranting denial of the card. The Transportation Worker Identification Credential (TWIC) program is designed to implement these biometric maritime security card requirements. The TWIC program, once implemented, aims to meet the following stated mission needs: (1) Positively identify authorized individuals who require unescorted access to secure areas of the nation's transportation system. (2) Determine the eligibility of individuals to be authorized unescorted access to secure areas of the transportation system by conducting a security threat assessment. (3) Ensure that unauthorized individuals are not able to defeat or otherwise compromise the access system in order to be granted permissions that have been assigned to an authorized individual. (4) Identify individuals who fail to maintain their eligibility requirements subsequent to being permitted unescorted access to secure areas of the nation's transportation system and immediately revoke the individual's permissions.
Within the Department of Homeland Security (DHS), the Transportation Security Administration (TSA) and the U.S. Coast Guard are responsible for implementing and enforcing the TWIC program. In addition, DHS's Screening Coordination Office facilitates coordination among the various DHS components involved in TWIC. This testimony is based on a report we are releasing publicly today on the TWIC program. Like the report, this testimony discusses the extent to which: (1) TWIC processes for enrollment, background checking, and use are designed to provide reasonable assurance that unescorted access to secure areas of MTSA-regulated facilities and vessels is limited to qualified individuals, and (2) DHS has assessed the effectiveness of TWIC, and whether the Coast Guard has effective systems in place to measure compliance. DHS has established a system of TWIC-related processes and controls. However, internal control weaknesses governing the enrollment, background checking, and use of TWIC potentially limit the program's ability to meet its stated mission needs or provide reasonable assurance that access to secure areas of MTSA-regulated facilities is restricted to qualified individuals. Specifically, internal controls in the enrollment and background checking processes are not designed to provide reasonable assurance that (1) only qualified individuals can acquire TWICs; (2) adjudicators follow a process with clear criteria for applying discretionary authority when applicants are found to have extensive criminal convictions; or (3) once issued a TWIC, TWIC holders have maintained their eligibility. To meet the stated program purpose, TSA's focus in designing the TWIC program was on facilitating the issuance of TWICs to maritime workers. However, TSA did not assess the internal controls in place to determine whether they provided reasonable assurance that the program could meet defined mission needs for limiting access to only qualified individuals.
For example, controls that the TWIC program has in place to identify the use of potentially counterfeit identity documents are not used to routinely inform background checking processes. Internal control weaknesses in TWIC enrollment, background checking, and use could have contributed to the breach of selected MTSA-regulated facilities during covert tests conducted by our investigators. During these tests at several selected ports, our investigators were successful in accessing ports using counterfeit TWICs, authentic TWICs acquired through fraudulent means, and false business cases (i.e., reasons for requesting access). DHS asserted in its 2009 and 2010 budget submissions that the absence of the TWIC program would leave America's critical maritime port facilities vulnerable to terrorist activities. However, to date, DHS has not assessed the effectiveness of TWIC at enhancing security or reducing risk for MTSA-regulated facilities and vessels. Further, DHS has not demonstrated that TWIC, as currently implemented and planned with card readers, is more effective than prior approaches used to limit access to ports and facilities, such as using facility-specific identity credentials with business cases. According to TSA and Coast Guard officials, because the program was mandated by Congress as part of MTSA, DHS did not conduct a risk assessment to identify and mitigate program risks prior to implementation. Further, according to these officials, neither the Coast Guard nor TSA analyzed the potential effectiveness of TWIC in reducing or mitigating security risk--either before or after implementation--because they were not required to do so by Congress. However, internal control weaknesses raise questions about the effectiveness of the TWIC program. 
Moreover, as we have previously reported, Congress also needs information on whether and in what respects a program is working well or poorly to support its oversight of agencies and their budgets, and agencies' stakeholders need performance information to accurately judge program effectiveness. Therefore, we recommended in our May 2011 report that the Secretary of Homeland Security conduct an effectiveness assessment that includes addressing internal control weaknesses and, at a minimum, evaluates whether use of TWIC in its present form and planned use with readers would enhance the posture of security beyond efforts already in place given costs and program risks. DHS concurred with our recommendation.
Background
The creation of DHS is an historic opportunity for the federal government to fundamentally transform how the nation will protect itself from terrorism and other threats. Not since the creation of the Department of Defense in 1947 has the federal government undertaken an organizational merger of this magnitude. Enacted on November 25, 2002, the Homeland Security Act established DHS by merging 22 disparate agencies and organizations with multiple missions, values, and cultures. On March 1, 2003, DHS officially began operations as a new department. DHS is now the third largest federal government agency with an anticipated budget of $40.7 billion for fiscal year 2005 and an estimated 180,000 employees. In accordance with section 1502 of the Homeland Security Act, the President provided a DHS reorganization plan to appropriate congressional committees specifying the agencies that would integrate into DHS, along with an overall organizational structure, but the plan did not specify how the integration of these agencies and employees would occur. Section 701 of the Homeland Security Act gave the Under Secretary for Management at DHS the responsibility for the management and administration of the department, including the transition and reorganization process, among other things. As seen in figure 1, the Chief Financial Officer (CFO), the Chief Information Officer (CIO), the Chief Human Capital Officer (CHCO), the Chief Procurement Officer (CPO), and the Chief Administrative Officer (CAO) are all housed within the Management Directorate. Figure 1 shows the organizational structure of the department, as of December 2004.
Selected Key Mergers and Transformation Practices Can Help Guide DHS in Taking a Comprehensive and Sustained Approach to its Management Integration Efforts
If DHS more closely adhered to three selected key practices that we have consistently found at the center of successful mergers, acquisitions, and transformations, it would have the comprehensive and sustained approach to its management integration efforts that it needs over the long term to successfully transform the agency. Otherwise, the department runs the risk of not establishing and maintaining the management infrastructure needed to steer the integration of the department and ultimately to help meet its critical mission of protecting the homeland. We identified these key practices through a forum the Comptroller General convened in September 2002, as DHS was being created, to help DHS merge its various originating components into a unified department. The forum was designed to identify and discuss useful practices and lessons learned from major private and public sector organizational mergers, acquisitions, and transformations. In July 2003, we further identified implementation steps for the nine key practices raised at the forum. These key practices and implementation steps are shown in figure 2. To assess DHS’s progress to date in integrating its management functions, we determined that three of the nine practices were especially important to ensure the agency has the management infrastructure it needs this early in the process to manage and sustain its integration: (1) an overarching integration strategy, with implementation goals and a time line that links its various individual management integration initiatives; (2) a dedicated implementation team with the responsibility and authority to drive the department’s management integration; and (3) committed and sustained leadership. DHS has opportunities to more fully implement each of these practices and increase its ability to successfully integrate.
DHS Has Issued Some Guidance and Plans to Help Its Management Integration Efforts, But Needs an Overarching Strategy to Integrate Across Management Functions and to Identify Critical Interdependencies, Interim Milestones, and Possible Efficiencies
We have reported that a merger or transformation is a substantial commitment that could take years before it is completed, and therefore must be carefully and closely managed and monitored to achieve success. Establishing implementation goals and a time line is critical to ensuring success, as well as pinpointing performance shortfalls and gaps and suggesting midcourse corrections. Such goals and time lines could be contained in an overall integration plan for a merger or transformation effort. It is important to note that such a plan typically goes beyond what is contained in an agency strategic plan, and provides more specific operational and tactical information to manage a sustained effort. For example, as required by the Government Performance and Results Act of 1993, a strategic plan generally contains the high-level goals and mission for an agency based on its statutory requirements, while an integration strategy would provide the activities and time lines needed, along with assigned responsibilities, for accomplishing the goals of an organizational merger or transformation. Finally, another element essential to executing a merger or transformation is to make the implementation goals and time lines public, so that employees, customers, and stakeholders are aware of what is to be accomplished and when. Our prior work shows that DHS needed to carefully plan and manage its integration, and a study commissioned by DHS underscored that the department should use an overall integration strategy to help accomplish this.
For example, prior to the establishment of the department, we identified a number of management challenges that DHS might face as it moved forward in its integration, such as the establishment of a comprehensive planning and management focus and the need for a results-oriented approach to ensure accountability and sustainability. In December 2002, we reported that careful and thorough transition planning would be critical to the successful creation of DHS and that the importance of the transition efforts to implement the new homeland security department could not be overemphasized. Specifically, we recommended to OMB that in developing an effective transition plan for DHS, it should ensure that the plan incorporates the key practices we identified as being found at the center of successful mergers and transformations. In July 2004, we reported on the merger of the Federal Protective Service (FPS) into DHS and recommended that FPS develop an overall transformation strategy for how it will carry out its expanding mission, as well as meet other challenges it faces. DHS agreed with our recommendation. Moreover, in early 2003, DHS recognized the challenges it faced and commissioned a comprehensive management study to help the department create an operating structure that integrates the department’s components and to facilitate a DHS-wide integration plan linked to core missions and capabilities, among other things. This management study also recommended that DHS develop a comprehensive integration plan with major milestones defined, encompassing all of the department’s integration initiatives including functional management and mission integration activities. Early on, the department made some progress in consolidating the processes and systems of each individual function in areas such as information technology, financial management, procurement, and human capital.
For example, according to DHS’s performance and accountability report for fiscal year 2004 and updated information provided by DHS officials, the department has accomplished the following activities as part of its integration efforts:
reduced the number of financial management service centers from
consolidated acquisition support for 22 legacy agencies within 8 major procurement programs,
reduced the number of its payroll systems from 8 to 2, and expects to be using one single payroll system by the beginning of fiscal year 2006,
consolidated 22 different human resource offices to 7,
consolidated 271 processes associated with administrative services
consolidated bank card programs from 27 to 3, and
realigned more than 6,000 support services employees (both government and contractor) from the legacy U.S. Customs Service and the legacy Immigration and Naturalization Service (INS) to support the 68,000 employees of the U.S. Customs and Border Protection (CBP), Immigration and Customs Enforcement (ICE), and Citizenship and Immigration Services (CIS) organizations.
In addition to improving the effectiveness of the department, according to DHS, these consolidation activities are aimed at realizing the efficiencies and economies of scale envisioned by the President and the Congress in creating DHS, by eliminating overlap and redundancies in these processes, systems, and services. The DHS IG reported in December 2004 that while DHS has made notable progress in integrating its many separate components in one department, structural and resource problems continue to inhibit progress in certain support functions. For example, while the department is trying to create integrated and streamlined support service functions, most of the critical support personnel are distributed throughout the various components and are not directly accountable to the management chiefs.
We have also identified areas of concern with some of these efforts and have made a number of recommendations to make these support functions more effective and efficient. (See app. II for a list of GAO reports on these individual consolidation efforts.) For example, we reported that DHS intends to acquire and deploy an integrated financial enterprise solution and reports that it has reduced the number of its legacy financial systems. While DHS has established an office within the Management Directorate to manage its financial enterprise solution project, we concluded that the acquisition is in the early stages and continued focus and follow-through will be necessary for it to be successful. DHS has issued some guidance to help each management function integrate its portion of the disparate processes and functions inherited when the 22 organizations merged into DHS. According to DHS officials, the following plans and documents were helping to provide overall guidance for these functional integration efforts. Strategic Plan: According to several senior DHS officials, the CFO, CIO, and the staff officer to the Deputy Secretary, the agency’s strategic plan, issued in February 2004, was the primary guidance being used for DHS’s management integration. The DHS strategic plan describes the department’s vision, mission, core values, and guiding principles to achieve its mission of protecting the homeland. In addition, one of its seven strategic goals, organizational excellence, acknowledges the need to integrate the systems, processes, and services the department inherited to improve efficiency and effectiveness. Draft Paper on the 21st Century Department: In April 2004, the Under Secretary for Management also developed a draft 21st century paper to provide more details as to how DHS would achieve its strategic goal of organizational excellence.
The draft paper summarizes DHS’s plans for its management integration within three primary areas: (1) human capital, (2) information technology, and (3) business transformation, including the support areas of procurement and acquisition, administrative services, and financial management and budgeting. The draft paper describes key integration initiatives it will take within each key area with short-term milestones, dates, and possible obstacles. For example, the paper discusses DHS plans to implement the Maximizing Results, Rewarding Excellence (MAXHR) initiative, the department’s new human resources management system, and the Electronically Managing Enterprise Resources for Government Efficiency and Effectiveness (eMerge) initiative. The latter uses a consolidated departmentwide solution approach to integrate DHS’s financial and administrative systems, including accounting, acquisition, budgeting, and procurement. Management Directives: At the request of the Secretary and the Deputy Secretary, in October 2004, each of the five DHS management chiefs issued a management directive that, among other things, provides standard definitions of each of their respective roles and responsibilities, as well as a general description of how other directorates and agencies will support them. Specifically, the directives discuss the concept of dual accountability for both mission accomplishment and functional integration as the shared responsibility of the heads of DHS’s individual agencies or components and the management chiefs. Each directive also discusses how the management chief, along with the heads of the directorates, agencies and others, will annually recommend and establish integration milestones for the consolidation of the chief’s function and the development of performance metrics for the respective function. 
While the documents and plans discussed above are being used to help DHS generally guide its management integration and DHS has made some progress in addressing integration concerns within each functional management area, there still is no overarching, comprehensive plan that clearly identifies the critical links that must be made across these functions, the timing necessary to make them, how these critical interrelationships will be carried out, and who will drive and manage them. As previously discussed, an agency’s strategic plan does not serve as a tactical or operational integration strategy and does not include the more detailed blueprints, time lines, and resources needed for accomplishing the department’s management integration. The department’s draft paper also does not have a comprehensive linkage across all of its functional initiatives with goals, time lines, and resources needed that would comprise a departmentwide integration strategy. Nor does it lay out how the integration across these functions must be managed. For example, to successfully implement its human capital system, DHS must coordinate this implementation with IT modernization. In addition, the majority of the various management chiefs and senior officials we interviewed did not indicate to us that this draft paper was being used as an overarching management integration strategy. Finally, the recently issued management directives can be helpful in guiding individual functional integration efforts, as well as increasing departmentwide accountability for achieving its management integration, but the directives do not serve as a departmentwide integration strategy. Some of the plans and directives already issued by DHS could be used as foundations for building this needed integration strategy.
Such a strategy could also help to ensure that the various functional initiatives are prioritized, sequenced, and implemented in a coherent and integrated way, thereby achieving even greater efficiency and cost savings. Based on our prior work on mergers and transformations, as well as results-oriented management, such a comprehensive strategy would involve (1) looking across the initiatives within each of the stove-piped functional units and clearly identifying the critical links that must occur among these initiatives; (2) developing specific departmentwide goals and milestones that would allow DHS to track critical phases and essential activities; (3) identifying tradeoffs and setting priorities; and (4) identifying any potential efficiencies that could be achieved. The institution of a departmentwide management integration strategy could also provide the Congress, DHS’s employees, and other key stakeholders with transparent information on the integration’s goals, needed resources, critical links, cost savings, and status, and a way for these parties to hold DHS accountable for its management integration.
DHS’s Business Transformation Office Could Be Strengthened to Serve as a Dedicated Team to Help Set Priorities and Make Strategic Decisions for Management Integration and to Implement the Comprehensive Integration Strategy
Our research shows that a dedicated team vested with necessary authority and resources to help set priorities, make timely decisions, and move quickly to implement decisions is critical for a successful transformation. In addition, the team ensures that various change initiatives are sequenced and implemented in a coherent and integrated way. Furthermore, the team monitors and reports on the progress of the integration to top leaders and across the organization, enabling those leaders to make any necessary adjustments.
Other networks, including a senior executive council, functional teams, or cross-cutting teams, can be used to help the implementation team manage and coordinate the day-to-day activities of the merger or transformation. The 2003 study commissioned by DHS also recommended that the department should (1) establish a leadership team with implementation responsibility for integration across directorates and be held accountable for departmentwide performance, and (2) create a dedicated program management office responsible for the execution of both mission and management integration efforts. The Under Secretary for Management had acknowledged the need for a dedicated program office to help guide the integration of management functions across the department, but had not created one until October 2004 when funds were appropriated. Specifically, as part of DHS’s fiscal year 2005 appropriation, the conference committee allocated $920,000 for DHS to establish a BTO, which will include a director and four additional staff that will report to the Under Secretary for Management. At the time of our review, DHS was still establishing the office within its Management Directorate and advertising for the director’s position, but had not defined and filled the staff positions. According to the Acting Chief of Staff to the Under Secretary for Management, the department intends that the staff hired for the office will have expertise in program and project management, quality analysis, and performance and data analysis. Based on our discussions with this official, and our analysis of documents describing the role of this office, the purpose of the BTO is to help monitor and look for interdependencies among the department’s discrete management integration efforts. Another purpose of the BTO is to communicate the progress of the functional management initiatives across the department. 
However, the BTO as currently envisioned is not responsible for driving individual initiatives, for example, implementation of eMerge, or for leading and managing the coordination and integration that must occur across functions not only to make these individual initiatives work, but to achieve and sustain overall functional integration at DHS. Without creating a dedicated team to serve in this role, it will be more difficult for DHS to coordinate all integration initiatives across the department and make the tradeoffs necessary to undertake an integration of the magnitude of DHS. As mentioned above, networks, including functional teams, can help the dedicated implementation team ensure that DHS’s efforts are coordinated and integrated. DHS has recently strengthened the role of its functional councils through its management directives to help coordinate integration departmentwide. Early on, each management chief, such as the CIO, CHCO, or CFO, established a functional council to address issues pertaining to the relative function. For example, the CFO established a Council that includes component or agency CFOs across DHS and addresses and coordinates departmentwide financial management issues. The other management chiefs established functional councils with similar membership drawn from their relative personnel in each component or agency. Likewise, the Under Secretary for Management has a respective Management Council that discusses issues of departmentwide importance, such as training and development programs, but this council is not dedicated full-time to managing the integration effort across the agency. According to senior DHS officials in the Office of the Under Secretary for Management, the membership of these functional councils had primarily been serving in an information-sharing role for their particular management function across the department. The councils also have been helpful in gaining feedback and buy-in from their members on function-specific issues of importance across DHS, as well as providing a way to communicate about these issues.
More recently, according to its five management directives, DHS enhanced the role of its functional councils, to include more decision-making responsibilities, rather than just serving in an advisory capacity. In general, the councils are now responsible within each of their individual functional areas for: (1) establishing a strategic plan, (2) balancing priorities on how to best capitalize on the respective management function resources, (3) defining and continuously improving governance structures, processes, and performance, (4) establishing centers of excellence, boards, and working groups tied to relevant council priorities, (5) developing and executing formal communications programs for internal and external stakeholders, and (6) supporting the respective management chief in the design, planning, and implementation of an integration plan for the chief’s individual functional area, among other things. The increased authorities and responsibilities of the functional councils could help DHS further coordinate the integration of each individual function across the department, and the recent establishment of the BTO could also assist DHS in departmentwide integration issues. However, neither the functional councils nor the BTO is currently serving as a dedicated team to help manage the department’s management integration. The BTO is well-positioned to serve as a dedicated team, and the role of the office could be strengthened to provide it with the necessary authority and resources to set priorities and make strategic decisions to drive the overall integration strategy. The BTO could also be responsible for leading the development and implementation of the integration strategy described above and communicating the progress of the integration to top leaders and DHS stakeholders. Such a dedicated team, led by a senior leader as described below, can provide the focused, day-to-day management needed for successful integration.
Continued Monitoring Is Needed to Ensure Senior DHS Leadership Elevates, Integrates, and Institutionalizes Its Management Initiatives We have reported that top leadership that is clearly and personally involved in the merger or transformation represents stability and provides an identifiable source for employees to rally around during the tumultuous times created by such dramatic reorganizations and transformations as DHS’s merger. Leadership must set the direction, pace, and tone for the transformation and provide sustained attention over the long term. As we have previously reported, as DHS and other agencies, such as the Department of Defense, embark on large-scale organizational change initiatives to address 21st century challenges such as national security concerns, there is a compelling need to elevate, integrate, and institutionalize responsibility for key functional management initiatives to help ensure their success. We have reported that creation of a chief operating officer (COO) or chief management officer (CMO) for DHS could help to elevate attention on management issues and transformational change, integrate various key management and transformation efforts, and institutionalize accountability for addressing these issues and leading this change. For example, such an official could provide a single point of contact to manage the integration of functions that operate within their own vertical “stovepipes,” such as information technology, human capital, or financial management, in a comprehensive and ongoing manner. Another potentially important mechanism for such a position is the use of clearly defined, results-oriented performance agreements accompanied by appropriate incentives, rewards, and accountability. To help ensure accountability over the long term, setting a term appointment of not less than 5 years can help provide the continuing focused attention essential to successfully completing multiyear transformations, which can extend beyond the tenure of political leaders. 
The role of the Under Secretary for Management does contain some of the characteristics of a COO/CMO as we have described, such as integrating key management and transformation efforts by providing a single point of contact as the chief integrator of management functions across DHS. Congress anticipated the difficulty of establishing DHS by creating a Management Directorate as one of the five major organizational units of the new department and vesting responsibilities for the transition and reorganization of the department within the Office of the Under Secretary for Management. According to section 701 of the Homeland Security Act, the Under Secretary is responsible for the management and administration of the Department in such functional areas as budget, accounting, finance, procurement, human resources and personnel, information technology, and communications systems. In addition, the Under Secretary is responsible for the transition and reorganization process, to ensure an efficient and orderly transfer of functions and personnel to the Department, including the development of a transition plan. The Under Secretary also told us that she sees one of her roles as integrating the various management functions across the department. Recent initiatives within the Department could help to strengthen the role and responsibilities of the Under Secretary for Management in leading DHS’s management integration efforts. The management directives, issued in October 2004, are intended to clarify accountability for the integration of the functions across the various directorates. The directives create dual accountability relationships between the department-level functional chiefs and similar chiefs within the agencies and components in the four other directorates. 
For example, the department CFO within the Management Directorate is accountable for consolidating and integrating financial systems across the department and must work with the multiple CFOs for the various components within the four other directorates and agencies to do so. To help ensure this collaboration occurs, the department CFO has input to the agency and component CFOs’ daily work and annual performance evaluations, according to the directive on financial management, but these CFOs still report to and take direction from their agency or component heads. In addition, the recently established BTO could help provide the Under Secretary for Management with a team of resources dedicated to monitoring and assisting with the management integration. It is still too early to tell, however, whether these initiatives will provide the Under Secretary for Management with the elevated authority necessary to integrate functions across the department and institutionalize this new structure, as envisioned for a COO, CMO, or similar position. For example, the indirect authority over component and agency chiefs who are critical to integration, and a BTO that primarily has a monitoring role, may not provide the authority the Under Secretary needs to set priorities for, and make trade-off decisions about, resources and investments for integrating these functions. Likewise, without a comprehensive integration plan, the Under Secretary does not have a road map to guide and manage all the players critical to the integration. Furthermore, without additional mechanisms in place to increase accountability and sustainability for achieving the results of the department’s integration, DHS may not be successful in realizing the goals of an improved homeland security function with integrated management support. 
For example, as mentioned previously, at the time of our review, the then Secretary and Deputy Secretary had announced their intention to leave DHS in early 2005, raising questions about the agency’s ability to provide the consistent and sustained senior leadership necessary to achieve integration over the long term. Without a senior leader whose term extends beyond changes in administration, it may be difficult for DHS to successfully achieve its management integration. The Congress should continue to closely monitor whether additional leadership authorities are needed for the Under Secretary, or whether a revised organizational arrangement is needed to fully capture the roles and responsibilities of a COO/CMO position, such as elevating the position, including a performance agreement, and setting a term limit for it. Conclusions Though national needs prompted a rapid reorganization of homeland security functions, such dramatic transitions of agencies and programs, as well as the breadth and scope of management support functions that need to be incorporated into the new department, are likely to take time to achieve. DHS is engaged in a number of individual efforts and initiatives as it works to implement its vision of an integrated, unified department. However, the momentum to create a successful homeland security function generated by the attacks of 9/11 could be lost if DHS does not quickly put in place key merger and transformation practices that support a comprehensive and sustained approach to its management integration. First, without a comprehensive strategy addressing all departmental management integration initiatives, DHS may not be able to establish the critical links, identify tradeoffs, set priorities, and design the efficiencies needed to succeed in integrating the functional management of the department, especially given the long-term fiscal challenges facing the federal government. 
Some of the guidance and plans DHS has already created could be used as a foundation for building such an integrated strategy. Second, a dedicated implementation team, like the planned BTO, vested with the necessary responsibility and authority, can more actively drive the department’s integration across functions. Finally, Congress could continue to monitor DHS’s management integration efforts and whether the current role of the Under Secretary for Management in driving and sustaining these efforts over the long term is effective or needs to be enhanced by creating a senior leadership position, such as a COO/CMO. Without taking these steps, DHS may have difficulty providing a comprehensive approach and sustaining its long-term management integration efforts. Recommendations for Executive Action In order to build the management infrastructure needed to help support the department’s integration and transformation, we are making two recommendations to the Secretary of Homeland Security. We recommend that the Secretary direct the Under Secretary for Management, working with others, to develop an overarching management integration strategy for the department. Such a strategy would, among other things, (1) look across the initiatives within each of the management functional units; (2) clearly identify the critical links that must occur among these initiatives; (3) identify tradeoffs and set priorities; (4) set implementation goals and a time line to monitor the progress of these initiatives to ensure the necessary links occur when needed; and (5) identify potential efficiencies and ensure that they are achieved. 
The department should also use this strategy to clearly communicate a consistent set of goals and the progress achieved, internally to all its employees and externally to key stakeholders, such as the Congress. We also recommend that the Secretary designate the planned BTO within DHS’s Management Directorate as the dedicated implementation team for the department’s management integration and provide it with the requisite authority and responsibility to help set priorities and make strategic decisions to drive the integration across all functions. The BTO would also be responsible for helping to develop and implement the overarching management integration strategy. Matters for Congressional Consideration To help ensure accountability and sustainability for DHS’s management integration over the long term, Congress may wish to continue to monitor the progress of DHS’s management integration, for example, by requiring the department to periodically report on the status of its efforts, especially to determine whether it has implemented a departmentwide management integration strategy and provided the BTO with sufficient authority to serve as a dedicated implementation team to help set priorities and make strategic decisions to drive integration across all functions, and whether the Under Secretary for Management has the authority to elevate attention on management issues and transformational change, integrate various key management and transformation efforts, and institutionalize accountability for addressing these management issues and leading this change. If not, the Congress could reassess whether it needs to statutorily adjust existing positions at DHS, or create a new COO/CMO position, with provisions for a term limit and performance agreement, that has the necessary responsibilities and authorities to more effectively drive the integration. Agency Comments and Our Evaluation In commenting on a draft of this report, DHS generally agreed with the report’s recommendations. 
DHS also provided additional information on the planned responsibilities and role of the BTO in departmental management integration. For example, DHS stated that the BTO is the dedicated resource for providing guidance for the integration of the department’s management process, such as setting project management standards and establishing standardized processes for monitoring and reporting on the progress of DHS’s integration initiatives. In addition, the department commented that the BTO is establishing an integrated project plan/integration strategy and anticipates it will be released by June 2005. However, at the time of our review, agency officials told us that there was not an integration strategy in place to manage the department’s integration. Based on our work on mergers and transformation practices, we also recommended that DHS provide the BTO with the appropriate authority and responsibilities to help set priorities and make strategic decisions for the department’s integration efforts. DHS agreed with our recommendation and noted that the BTO is to serve as the agent for the Under Secretary for Management whose role is to lead the transition and reorganization of the department. The agency stated that the BTO has been vested with the authorities necessary to ensure an integration strategy is in place and will be used to advise management on decisions about, and direction on, integration. We agree that the BTO is well-positioned to serve as a dedicated integration team, but continue to believe that the role of the office could be strengthened to provide it with the necessary authority and resources to set priorities and make strategic decisions to drive the overall integration strategy. DHS’s more detailed written comments are reprinted in appendix II. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. 
At that time, we will send copies of this report to the Ranking Minority Member of the Subcommittee on the Federal Workforce and Agency Organization, House Committee on Government Reform, and to others; the report will also be made publicly available at no charge on GAO’s Web site at http://www.gao.gov. If you have further questions about this report, please contact me or Sarah Veale at (202) 512-6806 or at larencee@gao.gov or veales@gao.gov. Major contributors to this report included John W. Barkhamer, Jr., Carole Cimitile, Dewi Djunaidy, Masha Pastuhov-Pastein, and Amy W. Rosewarne. Scope and Methodology To identify opportunities for DHS to improve its management integration efforts, we assessed these efforts by using three of the nine key practices consistently found at the center of successful mergers, acquisitions, and transformations. We selected three of these practices as criteria for this review because they are especially important to ensuring that DHS has the management infrastructure it needs at this early juncture in its efforts to sustain the integration of the department. The three selected practices are: ensuring top leadership drives the transformation, setting implementation goals and a time line to build momentum and show progress from day one, and dedicating an implementation team to manage the transformation process. We assessed the extent to which DHS is using these selected practices to support its management integration efforts, i.e., the integration of DHS’s varied management processes, systems, and people, in areas such as information technology, financial management, procurement, human capital, and administrative services. We focused our review primarily on the management integration activities of DHS’s Management Directorate because the Homeland Security Act of 2002 establishes that the Under Secretary for Management is responsible for the transition and reorganization process for the department. 
However, we limited the scope of our review to the integration of management functions at this time and did not review mission or program integration efforts of the department primarily because GAO has additional work under way on these efforts. We reviewed and analyzed key DHS documents about the department’s management integration, as well as interviewed key senior leaders in the Management Directorate and operational and program leaders from across the department. Key DHS documents that we used for our review include, but were not limited to, memoranda from the then Secretary and Deputy Secretary, the DHS Strategic Plan, various transition and integration planning and policy documents, materials from offices involved with integration efforts, and Departmental Management Directives that addressed the overall approach that each management chief was taking to the integration of its relevant management area. We also asked key senior DHS officials to describe to us DHS’s approach to its management integration, such as whether DHS had a plan for its integration and if a dedicated team was in place to manage the integration. Within the Management Directorate, we met with the Under Secretary for Management, the Chief Procurement Officer, the Chief Financial Officer, the Chief Administrative Officer, the Chief Information Officer, and the Chief Human Capital Officer. Other officials whom we interviewed included chiefs of staff and/or directors of operations for each of the five directorates, and key senior leaders from the Secret Service, the Coast Guard, the Bureau of Citizenship and Immigration Services, the Office of Public Affairs, and the Office for State and Local Government Coordination and Preparedness. We also reviewed published assessments on the organization of DHS and interviewed the authors of these publications to discuss their views on organizational change at DHS. 
We also examined reports from GAO, DHS’s Inspector General, and others that addressed the integration of departmentwide management functions, such as the development of an integrated departmental financial management system, and information technology, as well as reports that focused on the merger of specific agencies or initiatives within the Department, such as the Federal Protective Service. We conducted our work from April 2004 through February 2005 in accordance with generally accepted government auditing standards. Related GAO Products High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005. For more information on the Department of Homeland Security’s major management challenges, see http://www.gao.gov/pas/2005/dhs.htm. Department of Defense: Further Actions Are Needed to Effectively Address Business Management Problems and Overcome Key Business Transformation Challenges. GAO-05-140T. Washington, D.C.: November 18, 2004. Homeland Security: Management Challenges Remain in Transforming Immigration Programs. GAO-05-81. Washington, D.C.: October 14, 2004. Department of Homeland Security: Formidable Information and Technology Management Challenge Requires Institutional Approach. GAO-04-702. Washington, D.C.: August 27, 2004. Homeland Security: Efforts Under Way to Develop Enterprise Architecture, but Much Work Remains. GAO-04-777. Washington, D.C.: August 6, 2004. Financial Management: Department of Homeland Security Faces Significant Financial Management Challenges. GAO-04-774. Washington, D.C.: July 19, 2004. Homeland Security: Transformation Strategy Needed to Address Challenges Facing the Federal Protective Service. GAO-04-537. Washington, D.C.: July 14, 2004. Department of Homeland Security: Financial Management Challenges. GAO-04-945T. Washington, D.C.: July 8, 2004. Status of Key Recommendations GAO Has Made to DHS and Its Legacy Agencies. GAO-04-865R. Washington, D.C.: July 2, 2004. 
Human Capital: DHS Faces Challenges in Implementing Its New Personnel System. GAO-04-790. Washington, D.C.: June 18, 2004. Transfer of Budgetary Resources to the Department of Homeland Security. GAO-04-329R. Washington, D.C.: April 30, 2004. Human Capital: Preliminary Observations on Proposed DHS Human Capital Regulations. GAO-04-479T. Washington, D.C.: February 25, 2004. Human Capital: DHS Personnel System Design Effort Provides for Collaboration and Employee Participation. GAO-03-1099. Washington, D.C.: September 30, 2003. Results-Oriented Cultures: Implementation Steps to Assist Mergers and Organizational Transformations. GAO-03-669. Washington, D.C.: July 2, 2003. Homeland Security: Information Sharing Responsibilities, Challenges, and Key Management Issues. GAO-03-715T. Washington, D.C.: May 8, 2003. Major Management Challenges and Program Risks: Department of Homeland Security. GAO-03-102. Washington, D.C.: January 2003. Homeland Security: Management Challenges Facing Federal Leadership. GAO-03-260. Washington, D.C.: December 20, 2002. Highlights of a GAO Forum: Mergers and Transformation: Lessons Learned for a Department of Homeland Security and Other Federal Agencies. GAO-03-293SP. Washington, D.C.: November 14, 2002. Highlights of a GAO Roundtable: The Chief Operating Officer Concept: A Potential Strategy to Address Federal Governance Challenges. GAO-03-192SP. Washington, D.C.: October 4, 2002. Homeland Security: Critical Design and Implementation Issues. GAO-02-957T. Washington, D.C.: July 17, 2002. Homeland Security: Proposal for Cabinet Agency Has Merit, But Implementation Will Be Pivotal to Success. GAO-02-886T. Washington, D.C.: June 25, 2002. Executive Guide: Effectively Implementing the Government Performance and Results Act. GAO/GGD-96-118. Washington, D.C.: June 1, 1996. DHS Products Major Management Challenges Facing the Department of Homeland Security. OIG-05-06. DHS Office of the Inspector General. Washington, D.C.: December 2004. 
Review of the Status of Department of Homeland Security Efforts to Address Its Major Management Challenges. OIG-04-21. DHS Office of the Inspector General. Washington, D.C.: March 2004. GAO’s Mission The Government Accountability Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. Obtaining Copies of GAO Reports and Testimony The fastest and easiest way to obtain copies of GAO documents at no cost is through GAO’s Web site (www.gao.gov). Each weekday, GAO posts newly released reports, testimony, and correspondence on its Web site. To have GAO e-mail you a list of newly posted products every afternoon, go to www.gao.gov and select “Subscribe to Updates.”
The creation of the Department of Homeland Security (DHS) represents one of the largest reorganizations of government agencies and operations in recent history. Significant management challenges exist for DHS as it merges the multiple management systems and processes from its 22 originating agencies in functional areas such as human capital and information technology. GAO was asked to identify opportunities for DHS to improve its management integration. GAO found that while DHS has made some progress in its management integration efforts, it has the opportunity to better leverage this progress by implementing a comprehensive and sustained approach to its overall integration efforts. GAO assessed DHS's integration efforts to date against three of nine key practices consistently found to be at the center of successful mergers and transformations: setting implementation goals and a time line to build momentum and show progress, dedicating an implementation team to manage the transformation, and ensuring top leadership drives it. While there are other practices critical to successful mergers and transformations--including using the performance management system to define responsibility and assure accountability for change--GAO selected these three key practices because they are significant to building the infrastructure needed for DHS at this early juncture in its management integration efforts. Establishing implementation goals and a time line is critical to ensuring success and could be contained in an overall integration plan for a merger or transformation. DHS has issued guidance and plans to assist its integration efforts, on a function-by-function basis (information technology and human capital, for example); but it does not have a comprehensive strategy, with overall goals and a time line, to guide the management integration departmentwide. 
GAO's research shows that it is important to dedicate a strong and stable implementation team for the day-to-day management of the transformation. DHS has established a Business Transformation Office (BTO), reporting to the Under Secretary for Management, to help monitor and look for interdependencies among the individual functional integration efforts. However, the role of the BTO could be strengthened so that it has the requisite responsibility and authority to help the Under Secretary set priorities and make strategic decisions for the integration, as well as implement the integration strategy. The current responsibilities of the Under Secretary contain some of the characteristics of a COO/CMO. GAO has reported that such a position could help elevate, integrate, and institutionalize DHS's management initiatives. Recent DHS actions, such as management directives clarifying roles for the integration, can provide the Under Secretary additional support. However, it is still too early to tell whether the Under Secretary will have sufficient authority to direct, and make trade-off decisions for, the integration and to institutionalize it departmentwide. The Congress should continue to monitor whether it needs to provide additional leadership authorities to the Under Secretary, or create a new position that more fully captures the roles and responsibilities of a COO/CMO.
Background FPI is a government corporation that is managed by Justice’s BOP. Congress created FPI to serve as a means for managing, training, and rehabilitating inmates. Operating on a nonappropriated-fund basis, FPI employs federal inmates at its factories, which are located within federal correctional institutions, to produce goods and services that are sold for profit to federal agencies. Under the trade name UNICOR, FPI markets about 150 types of products and services that are produced by 5 major product divisions: (1) electronics, (2) furniture, (3) graphics and services, (4) metals, and (5) textiles. At the end of fiscal year 1997, FPI employed over 18,400 inmates at 96 factories nationwide and had sales totaling $513 million. FPI retains earnings from its sales to fund inmate vocational education programs, provide inmate accident compensation, acquire and maintain plant facilities, and maintain a reserve for future uses. Congress recognized that FPI would compete directly with private sector companies for the federal government’s business. However, Congress did not intend for FPI’s activities to impose a hardship on private industry, force private vendors out of business, or have a significant adverse effect on private sector employment. Aspects of the law aimed at achieving these ends mandate that FPI can sell only to federal agencies, and FPI is required, so far as practicable, to produce a diversified line of products so that no single industry will face an undue burden of competition. Also, as a correctional program, FPI is required to train and employ as many inmates as feasible using labor-intensive methods of operation. FPI Product Prices Should Not Exceed Current Market Prices FPI’s enabling statute (18 U.S.C. 4124) requires federal departments, agencies, and government institutions to purchase products from FPI, at prices not to exceed current market prices, if FPI’s products meet their requirements and are available. 
FPI’s mandatory source status is discussed in FAR Subpart 8.6 relating to acquisition from FPI, which states that federal agencies shall purchase required products from FPI at prices not to exceed current market prices. However, neither FPI’s legislation nor the FAR defines current market price or specifies how such a price is to be established. The FAR also encourages federal agencies to purchase services from FPI to the maximum extent practicable, but FPI is not a mandatory source for services. Over the years, supporters and critics of FPI have debated FPI’s mandatory source status and whether FPI provides products at a fair and reasonable price. In fact, even before FPI was established as a government corporation in 1934, federal agencies were required to buy products from prison industries, and there were challenges to the prices charged by prison industries. For example, our research found that in December 1930, the War Department challenged the prison industries’ price for brushes because it was considerably higher than the prices the Department had previously paid private vendors. Responding to this pricing dispute, the Board of Arbitration for Prison Industries issued a decision in February 1931 that prison industries did not have to set its price at the lowest bid price in order to comply with the current market price requirement. In our 1985 report, we relied upon the Board’s 1931 decision and the corresponding Comptroller General’s Decision (11 Comp. Gen. 75, 77 (1931)), which cited the Board’s decision, in concluding that the law and regulations governing FPI do not specify where in the market price range FPI’s prices should fall. The report concluded that the only limit the law imposes on FPI’s price is that it may not exceed the upper end of the current market price range. A more sweeping opinion on FPI’s status and how it may price products sold to federal agencies was issued by Justice in September 1993. 
In this opinion, Justice found that the mandatory preference granted FPI is an exception to the rules that normally govern the way products are procured by federal agencies, and therefore, procurements from FPI are not covered by the FAR’s standard provisions. The opinion specifically concluded that the provisions of the FAR governing the submission of certified cost or pricing data; the calculation of a reasonable price, other than market price; and the general FAR provisions for resolving pricing disputes do not apply to FPI. Also, the Justice opinion noted that nothing in FPI’s charter, or in the FAR, suggests that governmental entities may ignore the mandatory priority simply because FPI will not accede to all requested contract terms during negotiation. Thus, if FPI does not provide certified pricing data or does not reduce its price to what a federal agency considers a reasonable price, the agency must still abide by the mandatory priority and buy available products from FPI. Finally, the Justice opinion concluded that FPI may use any method that reliably estimates current market prices, subject to dispute by potential customers prior to purchase and arbitration under the applicable law. FPI’s Former Pricing Policy and Procedures Did Not Ensure That Prices Were Within Current Market Price Range FPI’s overall policy is that its products should be sold at prices that (1) will keep the corporation financially self-sufficient and (2) are not in excess of current market prices. Despite recognizing the statutory requirement that its product prices should not exceed current market prices, FPI’s May 1995 policy and procedures that were in effect when we began our review did not adequately define current market price. Nor did the procedures specify how many prices should be checked to constitute a market or how frequently prices should be checked. 
In addition, FPI’s policy did not require the product divisions to document the market surveys or other methods they used when setting prices for FPI’s products. Consequently, FPI management was not in a good position to ensure that its products were priced in accordance with the statutory pricing standard, current market price. FPI’s procedures did provide general guidance for the product divisions to use when establishing prices for products. The procedures instructed the product divisions to determine current market price in the following ways: When a comparable product is found on GSA’s Federal Supply Schedule, the schedule prices should be used to determine the current market price. When a comparable product is not on GSA’s schedule but is generally available from private vendors, a review of private sector prices (market survey) should be the basis for establishing a range of current market prices. When comparable products cannot be identified or FPI has been the sole provider, current market price should be determined using FPI’s cost to manufacture (including applicable overhead and administrative costs) plus a reasonable profit, as determined by FPI management. The procedures also stated that when federal agencies request waivers to purchase products from other sources, a waiver should not ordinarily be issued when FPI’s price does not exceed current market price. However, according to FPI senior officials, their management philosophy is to operate in a more customer-focused fashion, and, thus, they do not require federal agencies to buy their products simply because FPI has mandatory source status. These officials said that, in practice, when they cannot offer prices comparable to private vendors’ prices, they often grant waivers allowing federal agencies to purchase products from private vendors. However, the decision to grant or deny a waiver remains in FPI’s control. 
The FAR outlines the circumstances under which waivers are ordinarily available from FPI and informs federal agencies about where to request a waiver. Specifically, FAR Part 8.604(c) states that when a contracting officer believes that FPI’s price exceeds the market price, the matter may be referred to the appropriate product division or to FPI’s headquarters office in Washington, D.C. FAR Part 8.605(b) states that waivers to purchase products from a vendor other than FPI are not normally authorized simply because the vendor offers a lower price. If the FPI product division rejects the contracting officer’s request for a waiver, FPI procedures allow an appeal of the waiver denial to be made to the FPI Ombudsman in Washington, D.C. Disputes regarding price or other matters that cannot be resolved within FPI are subject to binding arbitration by a board consisting of the Attorney General, the Administrator of General Services, and the President or their representatives. FPI officials said that they did not believe the Board of Arbitration had met very often and that they were unaware of any disputes that had been appealed to the board since the 1960s. We were unable to identify any disputes that had been appealed to the board since the 1930s. We did not assess FPI’s waiver process or complete a comprehensive review of the circumstances under which waivers are requested and granted or denied because this was beyond the scope of our work. We did discuss waivers with FPI officials, including the Ombudsman who decides federal agencies’ appeals of waiver requests that were denied by FPI’s product divisions. The Ombudsman said that in fiscal year 1997, FPI received 11,895 waiver requests for an unknown number of products valued at approximately $302 million. FPI approved waiver requests for products valued at about $251 million, or 83 percent of the dollar value of the products for which waivers had been requested. 
The Ombudsman explained that a single waiver request may include multiple products that federal agencies wish to buy from sources other than FPI. She also said that price disputes do not normally result in a significant number of waiver requests. We queried an FPI database that contains information on waivers and determined that FPI granted or denied waivers on 29,387 products in fiscal year 1997. Waivers for 982, or 3 percent of these products, cited FPI’s price as the reason for the waiver request. Our analysis also showed that FPI approved waivers on 24,304, or about 83 percent, of the products, and 907 of these approvals were granted because FPI could not meet the customers’ price requirements. Thus, FPI data showed that it granted 92 percent of the waivers that were requested based on price. We did not, however, determine the accuracy of the information in FPI’s database. We discussed FPI’s pricing policy and procedures with FPI officials and told them that we were concerned that the policy and procedures in effect at the time of our review had not implemented our 1985 recommendations to define current market price and provide pricing methods that the product divisions could use to help ensure that FPI’s products were priced within current market prices. According to senior FPI officials, FPI’s pricing policy and procedures had been revised since our 1985 recommendations. In fact, FPI’s 1986 policy statement, which was updated in 1991, defined current market price and required that the methods used in setting product prices be documented. However, this policy was rescinded in March 1995, around the time that the pricing policy and procedures that were in effect when we began our review became effective. FPI officials could not explain why this rescission occurred, and they agreed that the policy and procedures that were in effect at the time of our review were vague and needed to be changed.
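As an arithmetic cross-check, the waiver percentages cited above can be reproduced from the product counts in FPI's database. This is a simple illustrative sketch with variable names of our own choosing, not a query of FPI's actual system; every count comes from the figures reported above.

```python
# Fiscal year 1997 waiver counts, as reported above.
total_products = 29_387     # products on which FPI granted or denied waivers
price_cited = 982           # products whose waiver requests cited FPI's price
approved = 24_304           # products for which waivers were approved
approved_on_price = 907     # approvals made because FPI could not meet the price

print(round(100 * price_cited / total_products))      # 3 (percent)
print(round(100 * approved / total_products))         # 83 (percent)
print(round(100 * approved_on_price / price_cited))   # 92 (percent)
```

The rounding matches the whole-number percentages used in the text.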
On February 18, 1998, FPI issued a more comprehensive pricing policy and procedures, which included some of the same guidance that was contained in the 1986 and 1991 policy statements. The revised policy and procedures define current market price for FPI products as the price that could be obtained from competitors for the same or equivalent products or services when a contract is awarded. Senior managers in each product division are assigned responsibility for establishing selling prices for FPI’s products and for ensuring that FPI’s prices do not exceed current market prices, as required by federal statute. When setting prices for products, FPI officials are instructed to consider all comparable products except those products that were priced to undercut normal market conditions. Also, each product division is responsible for establishing pricing files that fully document how prices were established and for reviewing, on a biannual basis, all major products to ensure that prices are within current market ranges. Finally, adherence to FPI’s procedures is subject to regular review by BOP’s Program Review Division. We did not evaluate the extent to which FPI’s product divisions are following the revised pricing policy and procedures, but it appears that if fully and effectively implemented, the new policy and procedures should satisfy our 1985 recommendations. FPI did not have a different policy or set of procedures for the product divisions to follow when establishing prices specifically for services. However, FPI officials told us that because FPI is not a mandatory supplier for services, they clearly recognize that to be awarded contracts to provide services to federal agencies, FPI must offer prices that are comparable with those offered by private vendors. According to FPI officials, the final price for a service contract is arrived at through negotiations, and the customers determine who offers the best service at the lowest reasonable cost. 
We did not determine whether federal agencies generally award contracts to FPI to perform services after competition with private vendors or as the result of noncompetitive negotiations with FPI. About 8 years ago, FPI believed that its mandatory source status included services. However, GSA disagreed and requested a Justice opinion on whether the same mandatory source priority that FPI has for products should be applied to services. In November 1989, Justice issued an opinion that the mandatory source priority given to FPI under 18 U.S.C. 4124 does not apply to services. Many Products We Reviewed Were Priced According to FPI Policy and Procedures FPI’s officials established prices for many of the 20 products we reviewed in accordance with the May 1995 pricing policy and procedures that were in effect at the time the product prices were determined. Specifically, three of the four product divisions we reviewed demonstrated that they followed FPI’s policy and procedures when they set prices for 13 of 20 products reviewed. Our review of the pricing files for three electronic components, seven textile products, and three systems furniture workstations showed that the respective product divisions had sufficiently documented their pricing methodologies to demonstrate that FPI’s pricing policy and procedures had been followed. However, for the seven remaining products—four ergonomic chairs and three pieces of dorm and quarters furniture—there was insufficient documentation to show that the fourth product division—furniture—followed FPI’s pricing policy and procedures when it set prices for these products. We found that in practice, FPI officials sometimes used a combination of the pricing methods outlined in FPI’s procedures. For example, both the textiles and electronics divisions established product prices that were based on a combination of manufacturing cost analysis and price negotiations with the prospective customers. 
Specifically, for the seven textile products and three electronic products, FPI officials first developed a unit cost estimate, which they said documented the direct and indirect costs associated with making each of the products. In developing the unit cost estimates, FPI officials reviewed previous contract files and databases to obtain current prices for the raw materials that were necessary to manufacture the textile and electronic products. After FPI’s costs to manufacture the products had been determined, senior managers in the textiles and electronics divisions added on profit and thereby established FPI’s initial price quotes. After FPI developed and submitted its price quotes in these cases to DLA, it generally entered into price negotiations with DLA on the textile and electronic products. To illustrate this situation, FPI’s price for one textile product, a fragmentation vest, resulted from extended negotiations with DLA. In September 1996, FPI submitted its initial price quote of $402.30, but DLA rejected this price and recommended contract negotiations with FPI to establish a fair and reasonable price. FPI countered with several offers that DLA also rejected. Then, in December 1996, after several rounds of price negotiations, DLA officials decided that its current market price for the fragmentation vest was $349.40. FPI agreed to this price and entered into a contract with DLA that same month. The senior manager from the metals division told us that FPI’s final selling price for systems furniture is determined only after a detailed analysis of the prospective customer’s needs has been completed. Further, the selling price is sometimes negotiated and may include a discount from FPI’s initially offered price. For example, the Social Security Administration, FPI’s largest customer for systems furniture, is currently receiving a 3-percent discount on all purchases of systems furniture. 
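The cost-plus approach described above can be sketched as a unit cost estimate (direct plus indirect costs) marked up by a management-set profit rate, followed by negotiation with the customer. In this hedged sketch, the cost figures and the 10-percent profit rate are illustrative assumptions chosen so the result matches the report's figures; only the $402.30 initial quote and the $349.40 negotiated price for the fragmentation vest come from the report itself.

```python
def initial_quote(direct_costs: float, indirect_costs: float,
                  profit_rate: float) -> float:
    """Unit cost estimate plus profit, rounded to cents."""
    unit_cost = direct_costs + indirect_costs
    return round(unit_cost * (1 + profit_rate), 2)

# Illustrative cost inputs (assumed, not FPI data), chosen to reproduce
# the report's $402.30 initial quote for the fragmentation vest.
quote = initial_quote(direct_costs=280.00, indirect_costs=85.73,
                      profit_rate=0.10)
print(quote)  # 402.3 -- the report's initial quote for the vest

# Negotiation can pull the final price below the initial quote, as it
# did for the fragmentation vest in December 1996.
negotiated = 349.40
reduction = round(100 * (quote - negotiated) / quote, 1)
print(reduction)  # 13.1 -- roughly a 13 percent reduction from the quote
```

The sketch illustrates only the mechanics; in practice, as the report notes, the final price also reflects the customer's own market-price estimate and any discount FPI chooses to offer.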
FPI’s starting point for pricing systems furniture, such as the three workstations included in our review, begins with the price it pays the OEI Division of Krueger International for the component parts necessary to manufacture a specific workstation. According to an FPI official, after determining the cost for parts, FPI adds all direct labor costs and indirect costs associated with making the finished product plus profit to determine FPI’s price for each systems furniture workstation. According to the FPI official, FPI’s price for systems furniture will not exceed the price charged by its vendor for the same product. The official also said that this helps ensure that FPI’s price is within the current market price range. In addition, we reviewed a 1994 FPI market survey of 14 major private sector vendors that competed with FPI to sell systems furniture to federal agencies. This market survey found that FPI’s prices were about halfway between the highest and lowest prices of the 14 private vendors. The senior program manager from FPI’s furniture division told us that when establishing prices for furniture products, he first determines whether comparable products are available on the GSA Federal Supply Schedule or from private vendors. When comparable products are available, he prices FPI’s products within the range of prices charged by private vendors. For the four ergonomic chairs and three pieces of dorm and quarters furniture that we reviewed, the FPI official told us that he and his staff consulted various private sector catalogues and GSA schedules to ensure that FPI’s prices did not exceed the range of prices charged by private vendors for comparable products. However, the officials from FPI’s furniture division did not document the results of these market surveys. Therefore, we could not independently determine that they followed FPI’s pricing policy and procedures when they established the prices for these products. 
Prices for Most Products Reviewed Were Within Current Market Price Range As previously discussed, FPI’s price for a product does not have to be the lowest price available or even in the lower range of market prices to satisfy the statutory requirement that its price not exceed current market price. The only limit the law imposes on FPI’s price is that it may not exceed the upper end of the current market price range. If FPI’s price did not exceed the highest price offered to the government, we concluded that FPI’s price was within the current market range. Our comparison of FPI’s prices for 20 products with private vendors’ catalogue or actual prices for the same or comparable products showed that for 16 of the products, FPI’s prices were not the highest. Therefore, FPI’s prices for these products were within the current market range. As shown in table 1, FPI’s unit price did not exceed the highest price offered or charged by private vendors for most of the products included in our analysis. These products represent, however, only a small sample of the products sold by FPI in fiscal years 1996 and 1997. Therefore, the results of our price comparisons are not necessarily indicative of the overall extent to which FPI’s prices are within the current market price range. Although FPI’s prices for 16 of the products reviewed did not exceed current market prices, prices for 5 of these products were at the high end of the range of prices offered by private vendors. In other words, federal agencies might have been able to purchase these products at lower prices, if they were not required to purchase FPI products that are priced at the high end of the market range. For example, we compared three configurations of systems furniture workstations manufactured by FPI with comparable workstations manufactured by private vendors listed on GSA’s Federal Supply Schedule. 
This comparison showed that FPI’s prices were higher than 81 percent of the prices offered by private vendors for comparable workstations and lower than 19 percent of the prices. More specifically, our analysis showed that FPI’s price of $3,686 for the smallest of the workstations was higher than the prices offered by seven private vendors and lower than the prices offered by two vendors. For another workstation configuration, FPI’s price of $5,174 was higher than the prices of eight private vendors and lower than one vendor’s price. FPI’s price for the supervisory workstation was $6,410, which was higher than the prices of seven private vendors and lower than the prices of two vendors. Figure 1 shows a comparison of FPI’s price with the prices offered by private vendors for each of the three workstations we reviewed. In addition, two of the dorm and quarters furniture pieces that we reviewed were at the high end of the range of prices offered by private vendors for comparable products. FPI’s price of $475 for a wardrobe was higher than the prices offered by seven of the nine private vendors that sold comparable wardrobes on GSA’s Federal Supply Schedule. The private vendors’ prices ranged from $344 to $608. Similarly, FPI’s drop-lid desk was priced higher than five of the nine vendors’ comparable products. FPI’s price for the desk was $520 and the vendors’ prices ranged from $396 to $557. Figure 2 shows a comparison of FPI’s prices for the wardrobe and desk with prices offered for comparable products by private vendors. FPI’s unit prices for the four remaining products reviewed—two cable assemblies, a fragmentation vest used by military combat personnel, and a bunk bed—were higher than the catalogue or actual prices charged or offered by the private vendors that we reviewed for the same or comparable products. However, FPI officials provided various reasons why its prices were higher for these products. 
On the basis of these reasons, the evidence suggests that FPI’s price for one of the cable assemblies was within the current market price range, and prices for the other cable assembly and a fragmentation vest may have been within the current market ranges. For the bunk bed, however, it was not clear whether FPI’s price was within the current market range because FPI and GSA disagreed on whether a higher priced bed was comparable. DLA purchased two cable assemblies, one in 1996 and the other in 1997, in similar quantities from FPI and private vendors. The first cable, which is used on various Department of the Army radios, was purchased in October 1996 from FPI at a unit price of $77.45. Approximately 6 months earlier, DLA had purchased this same cable from a private vendor for $69.90. We discussed this price variance with DLA and FPI officials who said that FPI’s price was within the current market range at the time of contract award. The DLA official explained that the price for the cable had been escalating due to increasing material costs; and, by September 1996, DLA estimated its market price at $76.87. Therefore, FPI’s price quote of $77.45 was accepted because it was within 1 percent of the estimated market price; and it was considerably lower than three other quotes of $111, $285, and $313 that DLA received from private vendors. When these quotes are compared with FPI’s quote, FPI’s price of $77.45 is within the current market range. FPI’s price for the second cable, which is used on Department of the Navy aircraft, was $592.96, but approximately 4 months earlier, DLA had purchased the same cable from a private vendor for $436. We discussed this case with DLA and FPI officials to determine why the price had increased. The DLA official told us that FPI’s price was well below DLA’s estimated market price of $700.60 at the time the contract was awarded. 
This official also said the price for this cable assembly had increased primarily because of engineering changes to the product, higher material costs, and an increasing demand for a limited supply of cables. FPI officials agreed that increasing material costs contributed to FPI’s higher unit price. On the basis of these factors, FPI’s price for this product may have been within the current market range. The third product, the fragmentation vest, was purchased by DLA in December 1996. According to DLA, FPI initially offered a price of $402.30, but a lower unit price of $349.40 was negotiated for a minimum order of 49,000 vests and a maximum of 60,000 vests. Two months after awarding the contract to FPI, DLA awarded a contract to a private vendor to supply a smaller quantity (29,635) of fragmentation vests at a unit price of $332.88. We discussed this case with FPI and DLA officials who said that FPI’s higher price was justified, and it was within current market price. Their opinion was supported by DLA’s price analysis of FPI’s quote, which concluded that a higher unit price was justified primarily because FPI was supplying more extra-large-sized vests and these vests contained more of the expensive material, Kevlar. The officials explained that the contract with FPI calls for a much larger number and higher proportion of extra-large fragmentation vests than does the contract with the private vendor. Thus, FPI’s price for this product may have been within the current market range. Finally, FPI’s catalogue price of $230 for a single bunk bed was 11 percent more than the highest price and 81 percent more than the lowest price offered by the private vendors that we reviewed. We discussed this price difference with officials from GSA and FPI to obtain their views. An official from GSA’s National Furniture Center confirmed that the nine vendors they had originally identified are the only vendors on GSA’s schedule that sell bunk beds comparable to FPI’s bed. 
FPI officials disagreed and told us that they had identified an additional bed on GSA’s schedule they believed was comparable to their own, and that the price for this bed was $274. GSA officials told us, however, that they did not consider this bed comparable to FPI’s bed because it was constructed with more expensive materials and thus should cost more than FPI’s bed. Because of the difference of opinion on product comparability, it was not clear whether FPI’s price for the bunk bed was within the current market range. Also, FPI officials said that dorm and quarters furniture is typically sold in sets as opposed to individual pieces of furniture. Therefore, they suggested that we compare FPI’s total or package price for the bed, wardrobe, and desk with the package prices of the private vendors. We made such a comparison and found that FPI’s package price of $1,225 was, in fact, lower than one private vendor’s price of $1,290.61, but FPI’s price was higher than eight private vendors’ prices. In summary, for the 20 products we reviewed, FPI’s prices for 17 products were not the highest offered to the government, and therefore we concluded that the prices were within the current market price ranges; FPI’s prices for 2 products may have been within the current market ranges; and for 1 product, it was not clear whether FPI’s price was within the current market range. Prices for each of the 20 FPI selected products and the prices charged or offered by private vendors for the same or comparable products are provided in appendix III. Conclusions FPI and the private sector produce comparable products that are sold to the federal government. By law, FPI has a procurement preference over the private sector in selling products to federal agencies. 
Because FPI is a mandatory source supplier, procurements from FPI are exempt from the rules that normally govern the way federal agencies procure products from the private sector, and FPI is exempt from the FAR’s fair and reasonable price standards. Federal agencies must buy FPI products, even if less costly comparable products are available from the private sector, if FPI’s prices do not exceed current market prices. FPI officials say that despite their mandatory source status, they are striving to operate in a more customer-focused manner by negotiating prices with customers and granting waivers when they cannot price FPI’s products within the competitive range offered by private vendors. We noted that in some cases FPI was willing to negotiate prices, and its data showed that it usually granted waivers requested by agencies when FPI believed its prices were too high. However, it is important to recognize that FPI’s willingness to negotiate prices is dependent on its management philosophy and that FPI ultimately controls the waiver approval process. Furthermore, the FAR states that waivers to purchase products from a vendor other than FPI are not normally authorized because the vendor offers a lower price. FPI’s pricing policy and procedures that were in effect when we began our review did not adequately define current market price or prescribe methods by which it could ensure that its product divisions set prices that do not exceed current market prices. Therefore, FPI was not in a good position to ensure that its product prices did not exceed current market prices. However, its February 1998 policy change appears to put FPI in a better position and, if fully and effectively implemented, should satisfy our recommendations made to FPI in 1985. Our analysis of 20 selected products shows that for the most part, FPI officials followed policy and procedures when they set prices for these products. 
Also, 17 of the 20 FPI products reviewed were priced within the current market range because prices for these products did not exceed the upper end of the range; 2 other products may have been within the current market price range; and it was not clear whether the 20th product was within the range. However, FPI generally did not offer federal agencies the lowest prices for products that they purchased. Therefore, if it were not for FPI’s mandatory source status, customer agencies might have decided to purchase comparable products at less cost. Agency Comments and Our Evaluation BOP’s written comments dated July 22, 1998, stated that it appreciated our recognition that FPI’s new pricing policy, dated February 1998, should better enable FPI to ensure that its product prices do not exceed current market price. BOP also said that it understood our methodological reluctance to project this report’s findings to all FPI products, but it expressed the view that 19 out of 20 products were apparently priced within the current market range and this is significant evidence that FPI has complied with the statute. We do not share BOP’s view that this is significant evidence to conclude whether other FPI product prices are in compliance with the statute. As stated in this report, the 20 products that we reviewed represent only a very small sample of the many products that FPI sells each year. Therefore, the results of our price comparisons are not necessarily indicative, which BOP recognized in its comments, of the overall extent to which FPI has complied with the statute requiring that its product prices not exceed current market prices. Further, this report concludes that 17, not 19, of the 20 products were clearly priced within current market ranges. 
BOP expressed its opinion that the evidence in this report supports a finding that FPI’s prices for the two cable assemblies and the fragmentation vest were clearly within current market price ranges, rather than “may have been” within the range as we concluded. BOP also objected to our characterization that FPI’s prices for these three products were the highest of the prices we reviewed. We do not agree with BOP that the prices FPI charged DLA for all three of these products were clearly within current market ranges. As stated in this report and discussed with FPI officials, we compared the actual contract prices DLA paid FPI and private vendors for the two cable assemblies and the fragmentation vest. To make these price comparisons, we first had to identify contracts for similar quantities of the three products that DLA awarded to both FPI and private vendors during the same relative time frames. In all three contracts, FPI’s prices were higher than those paid to private vendors for identical products. This finding is clearly stated in this report. We then made additional inquiries with DLA to determine why FPI’s prices were higher than those charged by private vendors and to determine whether FPI’s prices may have been within current market price ranges. For one of the two cable assemblies, DLA officials provided us with several reasons why FPI’s prices were higher and three price quotes they had received from private vendors before FPI was awarded the contract for the cable. Because all of these quotes were higher than FPI’s quote and higher than the actual price paid for the product, we concluded that FPI’s price was within the current market range. However, for the other two products—another cable assembly and the fragmentation vest—we were not provided price quotes from private vendors. Instead, DLA officials provided general explanations as to why FPI’s prices for these two products were higher than those charged by private vendors.
Without price quotes accompanying these general explanations, we could not conclude that FPI’s prices for these two products were clearly within current market ranges, but the explanations that they provided were sufficient to conclude that the prices may have been within the current market ranges. BOP also took exception to our price analysis of FPI’s bunk bed and the conclusion that it is unclear whether FPI’s price for the bed was within current market range. BOP contends that the proper conclusion is that FPI’s price was within the current market range because it had identified another comparable bed that was sold through GSA’s schedule at a price higher than FPI’s price. According to BOP, the higher priced bunk bed is comparable to the other beds included in our analysis because it has the same basic functional features. Therefore, BOP believes that we should recognize the $274 price for this bed as being higher than FPI’s price of $230 for a comparable bed. It is important to note that as discussed in appendix I—the objective, scope, and methodology section—a part of our methodology for identifying comparable products was that GSA and FPI officials had to agree on the comparability of the products before we did our price analyses. During the course of doing our work, we provided FPI officials with the results of our preliminary price comparisons and specifically discussed the bunk bed with these officials. At that time, FPI officials told us they had not identified another comparable bed that was priced higher than their own. However, after reviewing our draft report, FPI officials provided us with information on the additional bed and told us that they believed that this bed should be included in our analysis. We presented the additional information, including the price of the bed, to GSA. GSA officials told us that they did not consider this bed to be comparable to FPI’s bed because it was constructed with more expensive materials.
Because of this difference of opinion on the comparability of the bunk bed, a key variable of our methodology, we believe it is not clear whether FPI’s price for the bed was within current market range. We modified the text in appendix I to further clarify the criteria used in selecting comparable products. Another BOP issue relates to our conclusion that FPI generally did not offer federal agencies the lowest prices for the products that they purchased. Specifically, BOP said that this conclusion implied that federal customers were overcharged because FPI’s prices were not always the lowest. BOP went on to say that both common sense and sound procurement principles refute such a conclusion because customers distinguish between products on many features other than price, such as dependability, past performance, and ease of procurement. BOP further said that many of FPI’s customers score it high on many of these features and consider FPI products a best value. Our conclusion points out that FPI generally did not offer federal customers the lowest prices for the products that they purchased; and, if it were not for FPI’s mandatory source status, customer agencies might have purchased comparable products at less cost. Although we agree with BOP that agencies would want to obtain the best value and might consider features other than price, we do not know whether the agencies that bought the products covered in our review believe that they did obtain the best values from FPI. We do not dispute that some agencies may believe that FPI products provide the best value. However, if lower cost comparable products that provide all the features and quality the customer wants are available from private vendors, federal agencies would likely save money if they were allowed to buy from these vendors. As this report points out, federal agencies are generally required to buy FPI products because FPI is a mandatory source supplier. 
Thus, federal agencies do not have a choice of buying comparable commercially available products at less cost, unless FPI approves a waiver. Finally, BOP said that our conclusion that FPI’s prices for virtually every product reviewed were within the current market range supports its contention that FPI’s pricing practice remained consistent over the years despite a pricing policy that was not as detailed as the current policy. We believe it is important to reiterate that the 20 products we reviewed represent only a small sample of the products sold by FPI in the 2 years examined, and the results of our price comparisons are not necessarily indicative of the overall extent to which FPI’s product prices are within the current market range for any time frame. Because the results of our work are not generalizable, we cannot comment on BOP’s assertion that FPI’s pricing practices have been consistent over the years, or that all of FPI’s products are priced within the current market ranges. In fact, we concluded that because of its vague May 1995 pricing policy, which remained in effect until February 1998, FPI was not in a good position to ensure that its product prices did not exceed current market prices. As agreed with your offices, unless you publicly announce the contents of this report earlier, we will not distribute it until 30 days from its issue date. At that time, we will send copies of this report to the Chairmen and Ranking Minority Members of committees with jurisdiction over BOP; the Attorney General; the Director of BOP; the Chief Operating Officer of FPI; the Director of the Office of Management and Budget; the Administrator for Federal Procurement Policy; and the heads of the customer agencies we contacted. We will also send copies to interested congressional committees and make copies available to others on request. Major contributors to this report are listed in appendix IV. If you have any questions, please contact me on (202) 512-8387.
Objectives, Scope, and Methodology Our objectives were to (1) describe the laws and regulations governing how Federal Prison Industries (FPI) is to price its products, (2) describe the policies and procedures FPI uses to ensure that its products are priced in accordance with applicable laws and regulations, (3) determine whether FPI followed these policies and procedures when it set prices for selected products, and (4) compare FPI’s prices with those charged by private vendors for selected products. In doing our work, we primarily performed audit work at FPI’s headquarters in Washington, D.C. We also met with officials and performed audit work at FPI’s Product Support Center located in the federal correctional institution in Englewood, CO, and six FPI factories located in federal correctional institutions in Butner, NC, and Fort Worth, TX. In addition, we interviewed officials and obtained product cost information from the General Services Administration’s (GSA) Federal Supply Service in Arlington, VA, and Fort Worth; the Defense Logistics Agency (DLA) headquarters at Fort Belvoir, VA, and three defense supply centers located in Richmond, VA, Columbus, OH, and Philadelphia, PA; several other federal agencies, including the Federal Aviation Administration, Social Security Administration, United States Postal Service, and Department of Veterans Affairs, that purchased products from FPI; and numerous private vendors that sell products to federal agencies, including Knoll, Inc.; Steelcase Inc.; Haworth, Inc.; Tennessee Apparel Corporation; M.J. Soffe Company; D.J. Manufacturing; Carter Industries; Tennier Industries; Nationwide Glove; Illinois Glove; Knoxville Glove; Hawkeye Glove; Golden Manufacturing; Propper International; American Apparel; E.A. Industries; Caribbean Needle Point; Terry Manufacturing; A.V. Technology; and Electronic Associates. To meet the first and second objectives, we first obtained and reviewed 18 U.S.C. 
4124, which requires federal agencies to purchase FPI products at not to exceed current market prices. We also obtained and reviewed applicable sections of the Federal Acquisition Regulation; our decisions and other legal opinions pertaining to FPI’s pricing of its products and services; and our previous reports. We then compared FPI’s May 1995 pricing policies and procedures that were in effect when we began our review with the pricing criteria found in the applicable laws and regulations. Additionally, we discussed FPI’s pricing policy, procedures, and practices with various officials, including the senior program managers of FPI’s five product divisions; managers and cost estimators from the Product Support Center; FPI’s General Counsel; and officials from GSA, DLA, and several other federal agencies. To meet the third objective, we reviewed FPI’s pricing policy and procedures to determine the extent and adequacy of guidance that had been given to the product divisions to use when establishing prices for FPI’s products and services. Specifically, we determined whether FPI management had defined the term current market price and prescribed pricing methods for the product divisions to use that would ensure that FPI’s product prices did not exceed current market prices, as required by law. We also determined whether the product divisions are required to document the market survey or other methods they use in establishing prices for FPI’s products and services. We then reviewed the sample of 20 products we had selected for the price comparisons in objective 4 to determine specifically how these products were priced and whether the product division followed FPI’s policy and procedures when it established prices for these products. For those products for which FPI officials relied upon cost and pricing data in establishing product prices, we completed sufficient analysis to independently validate the age, accuracy, and appropriateness of using these data. 
We did not, however, verify that FPI considered all direct and indirect costs when it priced products using the cost plus profit method. We compared the pricing policy and procedures that were in effect when we began our review with the revised policy and procedures that FPI issued in February 1998 to determine what changes had been made. To meet the fourth objective, we first judgmentally selected a sample of 20 products, including 3 systems furniture workstations, 3 pieces of dorm and quarters furniture, 4 ergonomic chairs, 7 clothing and textile products, and 3 electronic products. We determined the prices FPI charged or offered for these products by reviewing FPI’s catalogues and sales reports/data for fiscal years 1996 and 1997. We then compared FPI’s prices for these products with the prices charged or offered by private vendors for the same or comparable products. If FPI’s price did not exceed the highest price charged or offered by private vendors, we determined that FPI’s price was within the current market price range. We conducted our price comparisons by reviewing either (1) the federal agency’s purchase/contract files to determine the volume and dollar amount of purchases for specific items or (2) the appropriate catalogues and supply schedules maintained by GSA. We compared the actual prices DLA paid for the electronic and textile products included in our sample. Such price comparisons were possible because we determined that two DLA supply centers had purchased identical products from both FPI and one or more private vendors in similar quantities and in the same relative time frames. Most of the electronic and textile products were manufactured according to military specifications and are not generally available to the public. 
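The decision rule used in these price comparisons is simple: a price is counted as within the current market range whenever it does not exceed the highest price charged or offered by private vendors. A minimal sketch of that rule follows; the function name and all prices are hypothetical, not data from this report:

```python
def within_market_range(fpi_price, vendor_prices):
    """Return True if FPI's price does not exceed the highest price
    charged or offered by private vendors for the same or a
    comparable product (hypothetical illustration of the rule)."""
    return fpi_price <= max(vendor_prices)

# Hypothetical prices for one product from three private vendors
vendor_prices = [112.00, 98.50, 105.75]
print(within_market_range(104.00, vendor_prices))  # True: at or below the top price
print(within_market_range(120.00, vendor_prices))  # False: above the range
```

Note that under this rule a price can be within the market range and still be the highest of the group, which is why being "within range" says nothing about being the lowest price.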
For those products that are available on GSA’s Federal Supply Schedule (ergonomic chairs, dorm and quarters furniture, and systems furniture), we provided GSA officials with a list of our sample products along with all pertinent details about each product and requested their assistance in identifying comparable products. The list of comparable products identified by GSA was shared and discussed with FPI officials, and a consensus on comparability of the products was achieved before we completed our price comparisons. In addition to having FPI and GSA agree on what were comparable products, the product had to provide the same basic functional features. We understand that there may be numerous features, some of which may add value to a product, that distinguish one product from another. In making price comparisons for these products, we compared FPI’s catalogue or offered prices with the prices offered by private vendors for comparable products that were available from GSA’s Federal Supply Schedules during fiscal years 1996 and 1997. We were unable to compare actual prices paid by federal agencies for the furniture products included in our sample because of difficulties in identifying contracts under which federal agencies had purchased comparable products from both FPI and private vendors in similar quantities and during the same relative time frames. We recognize that the actual prices paid for some products may be less than FPI’s or GSA’s catalogue prices, depending upon the discounts given by the sellers and the negotiating skills of the procuring officials. In selecting the 20 products to be reviewed, we strived to maximize the number of products that (1) generated high dollar sales for FPI in fiscal years 1996 and 1997 and (2) were purchased by federal agencies in similar quantities from both FPI and private vendors. Also, the selected products included items from four of the five major product divisions within FPI. 
Only FPI’s Graphics and Services Division was excluded from our sample. This exclusion occurred primarily because we could not identify service contracts that federal agencies had awarded to both FPI and private vendors that were similar enough in the types and quantities of services procured to allow fair price comparisons. Further, FPI is not a mandatory source of supply for services; therefore, it must compete with private vendors for contracts to provide services to federal agencies. Most of the products selected for our sample were in the top 50 sales items for FPI during fiscal years 1996 and 1997. We selected systems furniture for review because this product line generated FPI’s highest revenues in both years. From the systems furniture line, we reviewed three typically configured workstations, which include panels, work surfaces, and storage drawers and cabinets. We included the ergonomic chairs and dorm and quarters furniture (a bunk bed, wardrobe, and desk) in our sample because these products were included on FPI’s top 50 sales items in both fiscal years 1996 and 1997. The textile products reviewed were also selected from the list of high dollar sales; and, in addition, DLA had purchased each of these products from both FPI and private vendors. The textile items reviewed included a fragmentation vest, cold weather trousers, leather gloves, physical fitness trunks, battle dress uniform trousers for the military, and two types of coats for the military. In selecting electronic parts for review, we first identified the federal stock classes for two of the highest dollar sales items—cable assemblies and wiring harnesses—from FPI’s Electronics Division. We then requested DLA officials at the Defense Supply Center in Richmond to assist us in identifying electronic products within these two federal stock classes that the Center had purchased from both FPI and private vendors during the same time period and in similar quantities. 
We requested this assistance from DLA’s Supply Center in Richmond because it is one of FPI’s largest buyers of electronic products. With DLA’s assistance, we identified three items—two cable assemblies and one wiring harness—that were suitable for inclusion in our review. We did our work between July 1997 and June 1998, in accordance with generally accepted government auditing standards. The results of our price analysis and product price comparisons cannot be projected to the universe of FPI’s products. Therefore, the price comparisons cannot be viewed as indicative of the overall extent to which FPI’s prices are within current market prices. On July 22, 1998, we received written comments on a draft of this report from the Director, BOP. BOP’s comments are summarized and discussed at the end of the letter and are reprinted in appendix II. FPI officials also provided oral technical comments, which were considered in preparing the final report.

Comments From the Bureau of Prisons

Comparison of Prices for Selected FPI Products With Prices of Private Vendors

Price comparison (in dollars)

[Price comparison tables omitted; the notes to the two tables follow.]

Note 1: The vendor products represented here are identical to the FPI products, and most were manufactured according to military specifications.
Note 2: Vendors A through D do not represent the same vendors from one product to another.
Note 3: Blanks indicate there were no additional contracts with DLA for this product.

Note 1: The products represented here are not identical, but they are functionally comparable products.
Note 2: Vendors A through M do not represent the same vendors from one furniture category to another.
Note 3: “N/A” indicates the vendor does not offer a product on GSA’s Federal Supply Schedule that was comparable to FPI’s product.
Note 4: All figures have been rounded to the nearest dollar.

Major Contributors to This Report

General Government Division, Washington, D.C.
Office of the General Counsel, Washington, D.C.
Dallas Field Office, Dallas, Texas
James Cooksey, Evaluator-in-Charge
Dorothy Tejada, Senior Evaluator
Hugh Reynolds, Evaluator
Pursuant to a congressional request, GAO provided information on the Federal Prison Industries' (FPI) product pricing, focusing on: (1) the laws and regulations governing how FPI is to price its products; (2) the policies and procedures FPI uses to ensure that its products are priced in accordance with the laws and regulations; (3) whether FPI followed these policies and procedures when it set prices for selected products; and (4) a comparison of FPI's prices with those charged by private vendors for selected products. GAO noted that: (1) federal agencies are required by law to purchase FPI products if they are available, meet the agencies' requirements, and do not exceed current market prices; (2) however, neither the law nor the Federal Acquisition Regulation defines current market price or provides guidance on how such a price is to be determined; (3) a 1931 Comptroller General Decision cited a decision by the Board of Arbitration for Prison Industries, which stipulated that FPI is not required to set its prices at the lowest bid price to comply with the current market price requirement; (4) on the basis of these decisions, GAO concluded that the only limitation on FPI's price is that it may not exceed the upper end of the current market price range; (5) a 1993 legal opinion issued by the Department of Justice found that the mandatory preference granted FPI is an exception to the rules that normally govern the way federal agencies procure products; (6) FPI's May 1995 policy and procedures that were in effect when GAO began its review in July 1997 recognized that FPI products were to be sold at prices that did not exceed current market prices; (7) however, the policy and procedures did not specifically define current market price; (8) FPI's procedures did not specify how many prices had to be checked to constitute a market or how frequently prices should be checked; (9) further, FPI policy did not require the product divisions to document the market surveys or
other methods that they used when setting prices for FPI's products; (10) however, on February 18, 1998, FPI issued its revised pricing policy and procedures; (11) GAO's analysis of how FPI established prices for 20 selected products showed that for 13 of the products, the divisions followed FPI's pricing policy and procedures that were in effect when GAO began its review; (12) GAO's comparison of FPI's catalogue or actual prices for 20 products with private vendors' prices for the same or comparable products showed that FPI's prices for 16 of the products were not the highest; and (13) however, FPI's prices for 5 of these 16 products were at the higher end of the price range offered by private vendors.
Background There is a long history of problems with federal credit programs. We reported many of the problems, such as poor recordkeeping and cost data, prior to credit reform. For example, we reported in November 1989 that federal agencies’ long-standing deficiencies in financial management systems and accounting procedures had precluded accurate, comprehensive recording and reporting of the full extent of credit losses. Agencies have had perennial problems tracking loan payments due and loan guarantees made in federal budgets. Also, prior to credit reform, the cash-based budget distorted choices between direct loans and loan guarantees. A direct loan initially looked like a grant since the budget included as a cost the face value of a direct loan, ignoring that at least some part of the loan would be repaid. Conversely, loan guarantees looked free when they were made because the budget ignored the fact that some would result in default costs. The Office of Management and Budget (OMB), GAO, the Congressional Budget Office (CBO), and others reported on the need to change the way credit programs were budgeted. In response to these reports and a growing recognition of federal financial management problems, the Congress enacted a series of laws designed to improve financial management and the quality and use of cost data in decision-making. To address the deficiencies in recognizing the cost of credit programs, the Federal Credit Reform Act of 1990 was enacted as part of the Omnibus Budget Reconciliation Act of 1990. Credit reform was intended to ensure that the full cost of credit programs would be reflected in the budget so that the executive branch and the Congress might consider these costs when making budget decisions. Accounting standards for credit programs were developed to be consistent with the intent of this act.
To address broader problems in federal financial management, the Chief Financial Officers (CFO) Act of 1990 required the development and maintenance of integrated agency accounting and financial management systems, including financial reporting and internal controls, that provide for development and reporting of cost information. The Government Management Reform Act of 1994 expanded the CFO Act to provide for audits of the annual financial statements of the 24 CFO agencies. The largest credit programs are found in these agencies, and audits include a review of agencies’ subsidy estimates and actual loan performance data. Accurate cost information also is key to improvements in the efficiency and effectiveness of federal programs as envisioned by the Government Performance and Results Act of 1993. The Debt Collection Act of 1982 provided for OMB to require agencies to report debt information to OMB and to the Department of the Treasury. Finally, the Debt Collection Improvement Act of 1996 expanded collection tools and authorities available to agencies and called for centralized servicing of some debt. Federal financial management, including credit program management, continues to reap the benefits of these laws. While all of these laws sought to improve federal financial management, a major change for budgeting was the Federal Credit Reform Act included in the Omnibus Budget Reconciliation Act of 1990. This act changed the budgetary treatment of credit programs so that their costs could be compared more appropriately both with each other and with other federal spending. Credit reform requires agencies to estimate the net cost to the government over the full term of the credit of new direct or guaranteed loans to be made in the budget year and to record that cost in the budget on a present-value basis. Unless OMB approves an alternative proposal, agencies are required to reestimate this cost annually as long as any loans in the cohort are outstanding.
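The present-value costing that credit reform requires can be illustrated with a simplified sketch for a direct loan: the subsidy cost is the amount disbursed minus the discounted value of the cash the government expects to receive back. This is a minimal illustration with hypothetical figures and a single flat discount rate; actual estimates use Treasury interest rates and more detailed cash flow conventions under OMB guidance.

```python
def subsidy_cost(disbursement, expected_cash_inflows, discount_rate):
    """Illustrative present-value subsidy cost of a direct loan:
    the amount disbursed minus the discounted value of the cash the
    government expects to receive back (repayments of principal and
    interest, net of expected defaults). Assumes one flat annual
    discount rate; a hypothetical sketch, not OMB's subsidy model."""
    pv_inflows = sum(cash / (1 + discount_rate) ** year
                     for year, cash in enumerate(expected_cash_inflows, start=1))
    return disbursement - pv_inflows

# Hypothetical $100 direct loan repaid over 3 years, with expected
# defaults reducing the final year's inflow. A positive result is a
# cost to the government; the rate is the cost per dollar disbursed.
cost = subsidy_cost(100.0, [36.0, 36.0, 30.0], 0.05)
subsidy_rate = cost / 100.0  # roughly 0.07, i.e., about a 7 percent subsidy
```

A loan guarantee works the same way in principle, except the projected cash flows are fees collected and default claims paid rather than disbursements and repayments.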
The Balanced Budget Act of 1997 amended the Federal Credit Reform Act to simplify and clarify subsidy estimation requirements. OMB also has simplified guidance for credit subsidy estimation. Appendix I contains a more detailed description of credit reform requirements and recent changes. Credit programs may be either discretionary or mandatory as defined in the Budget Enforcement Act of 1990. Appropriations for the subsidy cost of discretionary credit programs are counted under the discretionary spending caps and so must compete with other discretionary programs for the funding available under these limits. Like other mandatory programs, mandatory credit programs receive automatic appropriations for whatever amount of credit is needed to meet the estimated demand for services by beneficiaries. All credit programs automatically receive any additional budget authority that may be needed to fund reestimates. For discretionary programs this means there is a difference in the budget treatment of original subsidy cost estimates and of subsidy cost reestimates. The original estimated subsidy cost is counted under the discretionary caps, but any additional appropriation for upward reestimates of subsidy cost is exempt from the caps. This design could result in a tendency to underestimate the initial subsidy costs of a discretionary program. Portraying a loan program as less costly than it really is when competing for funds under the discretionary caps means more or larger loans or loan guarantees could be made with a given appropriation since the program then could rely on automatic funding for subsequent reestimates to cover any shortfalls. This built-in incentive is one reason to monitor subsidy reestimates. Monitoring reestimates is a key control over tendencies to underestimate costs as well as a barometer of the quality of agencies’ estimation processes. 
The development of credit reform requirements reflects in part decisionmakers’ interest in analyzing the causes of changes in subsidy estimates. Understanding which of the components of subsidy expense—interest, net defaults, fees and other collections, and other subsidy costs—are the key drivers of reestimates can both improve the quality of estimates and yield insights into program operations. OMB developed and provided agencies with a computer model to calculate the total estimated subsidy rate and the components of subsidy expense based on agency-developed cash flow information. In the development of accounting standards for credit programs, the Federal Accounting Standards Advisory Board (FASAB) indicated that these data would be valuable for making credit policy decisions, monitoring portfolio quality, and improving credit performance. Current accounting standards and OMB guidance require agencies to recognize, and disclose in the financial statement footnotes, the four components separately for the fiscal year during which direct or guaranteed loans are disbursed. However, for programs that disburse over more than 1 year, the current disclosure aggregates subsidy component data for the current year with subsidy costs from prior years. In addition, changes in law and program administration often occur. Thus, loans disbursed from programs over multiple years have different program characteristics and the current year’s financial statement disclosures do not represent the program characteristics or expenses of any given year of the program. Because the requirement in its present form does not provide the kind of useful information that was intended, FASAB now is considering revising these standards. Agencies now have prepared eight budgets under credit reform requirements and there should be 6 years of actual data available. 
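Tracing a reestimate to the component that moved most is the kind of analysis the component disclosures were meant to support. A sketch of that comparison, using hypothetical component rates in percentage points rather than any agency's actual data:

```python
def key_driver(original, reestimate):
    """Identify which subsidy component (interest, net defaults, fees
    and other collections, other) changed most between an original
    estimate and a reestimate. Rates are in percentage points; the
    data and function are hypothetical illustrations."""
    changes = {name: reestimate[name] - original[name] for name in original}
    driver = max(changes, key=lambda name: abs(changes[name]))
    return driver, changes

# Hypothetical component rates for one cohort (not agency data)
original = {"interest": 2.1, "net defaults": 5.4, "fees": -1.8, "other": 0.3}
reestimate = {"interest": 2.0, "net defaults": 7.9, "fees": -1.7, "other": 0.3}
driver, changes = key_driver(original, reestimate)
# driver is "net defaults": the upward reestimate traces mainly to defaults
```

In this sketch the total rate rose mostly because expected defaults rose, an insight that aggregated multi-year disclosures of the kind described above would obscure.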
Because of different program requirements, resource and expertise levels, and levels of commitment and interest, agencies have taken different approaches to making subsidy estimates. Preparing subsidy estimates is complex for a number of reasons, including projecting cash flows many years into the future and assessing the effect of economic changes on a particular program and its borrowers. Further, in some—if not all—agencies, budget office staff must integrate information from staff and systems in both the finance and program offices. While the present value-based budgeting of credit reform is a major step forward, its success depends heavily on the quality of these complex subsidy estimates. The independent review of agency records and data in the annual financial audit is an important step in monitoring the subsidy estimates and improving their reliability. When credit reform was enacted, it generally was recognized that agencies did not have the capacity to implement fully the needed changes in their accounting systems in the short run and that the transition to budgeting and accounting on a present-value basis would be difficult. However, policymakers expected that once agencies established a systematic approach to subsidy estimation based on auditable assumptions, present value-based budgeting for credit would provide them with significantly better information than the former cash-based system. Despite the difficulties with implementation, including current data problems, present value-based reporting for credit avoids a number of the problems of cash reporting. Therefore, we believe that making credit reform work is important. 
Objectives, Scope, and Methodology The objectives of our work were to determine (1) whether agencies completed estimates and reestimates of subsidy costs, (2) whether we could readily identify any trends including improvements in subsidy estimates as reported by the agencies, and (3) whether we could readily identify the causes for changes in subsidy estimates. You also asked us whether agencies with discretionary credit programs initially underestimated credit subsidy costs in response to the incentive created by the availability of permanent, indefinite budget authority for credit reestimates. We selected a sample of 10 programs from the five agencies with the largest domestic federal credit programs: the Departments of Agriculture, Education, Housing and Urban Development, and Veterans Affairs, and the Small Business Administration. We generally selected programs that have the most credit outstanding or highest loan levels. Both direct loan and loan guarantee programs are represented. Table 1 in the following discussion of the availability of subsidy estimates and supporting documentation contains a list of the 10 programs we examined. We requested that agencies provide budget data and information for the selected programs for fiscal years 1992 through 1998. The data requested included (1) descriptions of the credit program and highlights of program changes over the years, (2) spreadsheets showing estimated or reestimated cash flows of each cohort, (3) input to and output from OMB’s credit subsidy model, and (4) documentation of agency efforts to revise the subsidy estimation process. For each cohort in fiscal years 1992 through 1998, we extracted the subsidy rate estimates used in the President’s budget request, budget execution, and all reestimates. These data are included in appendix III. Our work reports the subsidy rate data and documentation as provided by the agencies. 
We interviewed staff who prepared the subsidy estimates and obtained written confirmation from each agency that the data in the tables in appendix III were accurate and represented all of the data the agency had. However, we found problems with these data. While we did not independently verify the accuracy of these data, we did compare the budget request subsidy rates confirmed by the agencies to the rates reported in the appropriate Budget Appendix and Budget Credit Supplement. We found that agency-confirmed rates differed from the Budget in nine instances although only three differences were greater than half a percentage point. In two of these three instances, the agencies later provided documentation to support the rates in the Budget. In the third instance, we used the rate produced by the OMB subsidy model because the agency’s cash flow spreadsheets best supported it. In addition, we reviewed recent financial statement audit reports for these credit agencies and programs as one gauge of the reliability of these data. While we examined data for all 10 programs in the 5 credit agencies, specific examples used in our work discuss only those programs or agencies with comparable data. For example, we had comparable data from only seven programs to use in our analysis of the most recent subsidy estimates for the fiscal year 1992 and fiscal year 1998 cohorts. This is because all of VA’s credit programs (including some not examined in this report) were consolidated into two programs for fiscal year 1998 (a direct loan program and a loan guarantee program) and Education’s direct loan program only began in fiscal year 1994. We used the data in appendix III to try to identify trends in subsidy estimates. Appendix II includes graphs of total subsidy rates by cohort for nine programs and graphs profiling the subsidy rates of a given program cohort over time for eight programs. 
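The comparison described above, agency-confirmed rates checked against the rates published in the Budget with attention to differences larger than half a percentage point, amounts to a simple screening step. A sketch with hypothetical program names and rates:

```python
def flag_discrepancies(confirmed, budget, threshold=0.5):
    """Flag programs whose agency-confirmed subsidy rate differs from
    the published Budget rate by more than the threshold, in
    percentage points. Names and rates here are hypothetical."""
    return [program for program in confirmed
            if abs(confirmed[program] - budget[program]) > threshold]

confirmed = {"Program A": 10.2, "Program B": 3.1, "Program C": 6.0}
budget = {"Program A": 10.3, "Program B": 4.0, "Program C": 6.0}
flagged = flag_discrepancies(confirmed, budget)  # ["Program B"]
```

Flagged discrepancies would then be resolved against the supporting documentation, as was done for the three larger differences noted above.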
We did not prepare either graph for SBA’s Disaster Loan Program because SBA had not included subsidy reestimates for any cohort in the President’s Budget prior to fiscal year 1999. We also did not profile a cohort of USDA’s Farm Operating Loans because we did not have enough comparable data for our review. To further understand the causes for changes in subsidy rates, we then analyzed the four components of subsidy expense (interest, net defaults, fees and other collections, and other subsidy costs) required to be reported by SFFAS No. 2 and calculated by OMB’s subsidy model. We also compared the budget execution estimate to the first reestimate for all credit programs and analyzed whether there was a different pattern in the direction of the reestimates for direct loan programs and loan guarantee programs or for mandatory and discretionary programs. Our work was conducted in Washington, D.C., from September 1996 through January 1998 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the following officials or their designees: the Director of the Office of Management and Budget, the Secretary of Agriculture, the Secretary of Education, the Secretary of Housing and Urban Development, the Administrator of the Small Business Administration, and the Acting Secretary of Veterans Affairs. All of the entities provided written comments, which are discussed in the “Agency Comments and Our Evaluation” section and reprinted in appendixes IV through IX. Problems Persist With Agencies’ Estimates of Subsidy Cost In each of the agencies in our study, we found problems with the availability of estimated subsidy rates and supporting documentation and with the reliability of the subsidy rate estimates. In 8 of the 10 programs we examined, agency staff either failed to do reestimates for certain years or completed them too late to be included in the budget formulation or audit cycles. 
While some progress has been made at some agencies, audits of financial statements continue to show that serious problems remain. Effective implementation of credit reform is highly dependent on the availability of accurate data. Subsidy Rate Estimates and Supporting Documentation Were Not Consistently Available Agencies did not have for our review all of the estimated subsidy rates and supporting documentation for the seven budgets they prepared under credit reform for fiscal years 1992 through 1998. Also, the availability of subsidy rate estimates and reestimates generally did not improve over time. All nine programs in existence in 1992 when credit reform became effective had for our review budget request and budget execution subsidy estimates for the first 2 years under credit reform—fiscal years 1992 and 1993. The first reestimates of subsidy cost should have been done for the fiscal year 1994 Budget. Of the 10 programs we examined, 8 either failed to do reestimates for certain years or produced them too late to be included in budget formulation or audit cycles. Only five of the nine programs that should have completed reestimates for the fiscal years 1994 and 1995 Budgets had them for our review. Starting with the fiscal year 1997 Budget, OMB Circular A-11 provided that agencies could forgo completing reestimates under certain circumstances. For fiscal years 1997 and 1998, 5 of the 10 programs made timely reestimates. One of the programs, SBA’s Disaster Loans Program, did not have subsidy reestimates in the Budget until the recently released fiscal year 1999 Budget. The programs that did not make reestimates generally reported staff constraints in completing reestimates while preparing their budget request or difficulties obtaining data on which to base reestimates. All reported that they had received waivers from OMB. We received written copies of OMB waivers to USDA and HUD permitting late completion of estimates. 
While USDA’s waiver was effective for fiscal year 1996 and future budgets, HUD’s waiver was only for fiscal year 1997. HUD staff told us that OMB gave them waivers orally for other years, and OMB did not disagree. SBA also told us that the waivers were approved by OMB orally, and OMB agreed.

On the other hand, for estimates and reestimates that were made, the availability of documentation has improved somewhat. In the early years of credit reform, fiscal years 1992 through 1995, most programs did not have for our review supporting documentation for all completed budget estimates and reestimates. An official at Education characterized early credit program files as “woefully lacking.” HUD staff said that early credit estimates were prepared by OMB staff and that HUD and OMB do not have supporting documentation. Recent credit reform guidance explicitly requires agencies to document subsidy estimation, and in fiscal years 1996 through 1998 most programs did have documentation for the estimates and reestimates that were made.

Table 1 shows, by program and fiscal year, whether agencies had for our review all subsidy estimates for the budget request and budget execution, made subsidy reestimates, and had the supporting documentation for these estimates that we requested for our review. A check indicates that the agency had for our review all estimated rates, rates for reestimates they made, and all supporting documentation contained in output from the OMB subsidy model and agency cash flow spreadsheets. It says nothing about the quality of the data. We discuss in some detail our concerns about data reliability later in this report.
Reliability of Some Subsidy Estimate Data Is Questionable

The reliability of credit data is questionable for a number of reasons, including (1) the poor results of financial statement audits, (2) discrepancies we found between subsidy rates reported in the President’s Budget and the data confirmed to us by the agencies, (3) subsidy rate estimates not always supported by documentation, (4) acknowledgments from some staff that component data were questionable, and (5) staff reports of difficulties with systems support. First, financial statement audits raised questions about data reliability. Three of the largest credit agencies, HUD, USDA, and Education, received disclaimers or qualified opinions on their fiscal year 1996 financial statement audits, in part, because of problems associated with their credit programs. HUD received a qualified opinion because the Federal Housing Administration’s (FHA) credit-related accounts were not reported on the present value basis required by SFFAS No. 2. Consequently, HUD’s Office of the Inspector General (OIG) was unable to audit the credit-related account balances. USDA’s OIG gave a qualified audit opinion on the fiscal year 1996 financial statements of the rural development mission area because it was unable to obtain sufficient support for credit program receivables and estimated losses on loan guarantees and the related credit reform program subsidy and appropriated capital used. USDA’s Farm Service Agency Farm Operating Loans were included as part of the consolidated audit of USDA’s fiscal year 1996 financial statements which received a disclaimer of opinion. The Farm Service Agency does not prepare separate financial statements. Education received a disclaimer of opinion on the department’s fiscal year 1996 financial statements because it was unable to support the reasonableness of the amounts shown for loans receivable and liabilities for loan guarantees.
Education’s OIG also was unable to render an audit opinion due to auditor concerns about the quality of the data supporting subsidy estimates of the Federal Family Education Loan Program. This could reflect problems with historical data since agencies with loan guarantee programs rely on lenders or intermediaries for loan performance data. In some cases, the auditors recommended that agencies establish and document a process for the development of subsidy estimates and make reestimation of subsidy costs a priority. Although VA and SBA both received unqualified audit opinions on their fiscal year 1996 financial statements, the auditors of these agencies reported internal control weaknesses related to estimating credit subsidies. For example, SBA’s auditors reported that the agency used inconsistent and unreliable data to reestimate the Disaster Loan Program. Because of the questionable data, OMB and SBA had not included any reestimate of this program until the fiscal year 1999 Budget or the fiscal year 1997 financial statements. In addition, VA’s auditors reported that the agency does not efficiently and reliably accumulate the financial information needed to comply with credit reform accounting requirements and that significant credit reform-related adjustments were necessary to the financial statements. Second, we found discrepancies between the subsidy rates reported in the President’s Budget and the data provided and confirmed to us by the agencies in about 10 percent of our sample (7 of 68 rates). Some agencies agreed, after we pointed out inconsistencies, that certain data they had provided and certified as accurate in fact were not. Third, agencies also had for our review subsidy rate estimates that were not always supported by the documentation. For example, none of Education’s documentation supported its estimated subsidy rates. 
Also, agencies, such as HUD, had difficulty identifying the fiscal year of available subsidy rates and documentation and whether the rates were budget execution estimates or reestimates. Fourth, staff from seven of the eight programs whose component data we were able to examine acknowledged that the data were questionable. We found that component data were questionable because they were not consistent with program characteristics. For example, VA’s direct loan program showed the net default component as negative because cash flows from loan sales were included with recoveries from defaulted loans. As a result, recoveries exceeded defaults. To avoid erroneously indicating that higher defaults would reduce the subsidy rate, proceeds from loan sales should have been included with the “other cash flow” component. Fifth, and finally, data reliability depends in part on having adequate information systems, and effective top management commitment is vital to ensuring that these are provided. Today, as 4 years ago, staff in most agencies we examined report difficulties with systems support. For example, staff in three of the five agencies we reviewed—HUD, VA, and USDA—reported inadequate actual data on loan performance and computer systems support. These agencies have efforts underway to refine their data and/or improve their estimation processes. Staff at USDA and VA have worked with their offices of the inspector general or OMB to refine their cash flow spreadsheets and reestimate calculations. HUD staff reported efforts underway to develop a new integrated accounting system and a need to re-engineer budget and accounting processes. Staff in the other two agencies—Education and SBA—told us that systems support was not an issue and reported that credit reform implementation has become a high-level priority for their agencies. 
Specifically, SBA administrators had a commitment to meet the requirements of the Federal Credit Reform Act and sought to improve their capacity to make better subsidy cost estimates. Both agencies reported that they had developed new computer systems, significantly refined their historical information stores, and are using contractor support. Although upper-management commitment is necessary, it does not result in instant improvement. Audits continue to report problems at both agencies, both have difficulty obtaining historical information, and SBA and others have identified errors in SBA subsidy estimates. OMB and credit agency staff acknowledge that budget and financial systems for credit programs could have problems with conversions to the year 2000. In a February 1998 report on agencies’ progress in addressing Year 2000 conversion, OMB reported that SBA and VA were demonstrating sufficient progress, USDA and HUD were making some progress but OMB still had concerns, and Education was making insufficient progress. In testimony in September 1997, we reported that the Veterans Benefits Administration, where VA’s housing credit programs are located, has developed an agencywide plan and created a program management organization but will need to strengthen management and oversight of Year 2000-related activities to avert serious disruption of its ability to disseminate benefits. Since our testimony, VA has taken action to address some of our concerns.

Subsidy Rates Generally Fluctuated

Over time, some fluctuation in subsidy rates would be expected within a given group of loans or guarantees (a cohort) and among different cohorts of the same program. Reasons include loans or guarantees made at different interest rates than anticipated; programmatic redesign; better information on technical factors such as defaults, prepayments, and fees from more actual experience; or unanticipated changes in the economy, including interest rate changes.
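To illustrate how sensitive an estimated rate is to a single technical assumption, here is a toy present-value calculation in the spirit of credit reform subsidy estimation. The loan terms, discount rate, and default assumptions are all invented for this sketch and are not drawn from any program in this report; actual estimates rest on far more detailed cash flow projections.

```python
# Toy credit-reform-style subsidy estimate (all figures hypothetical):
# subsidy rate = (amount disbursed - present value of expected collections)
#                / amount disbursed, discounted at an assumed Treasury rate.

def subsidy_rate(principal, loan_rate, treasury_rate, years, annual_default_rate):
    """Direct-loan subsidy cost as a fraction of principal disbursed."""
    # Level annual payment on the loan (standard annuity formula).
    payment = principal * loan_rate / (1 - (1 + loan_rate) ** -years)
    pv_collections = 0.0
    surviving = 1.0  # fraction of the cohort still repaying
    for t in range(1, years + 1):
        surviving *= 1 - annual_default_rate  # defaulted loans drop out...
        expected_cash = payment * surviving   # ...and are assumed total losses
        pv_collections += expected_cash / (1 + treasury_rate) ** t
    return (principal - pv_collections) / principal

print(f"subsidy at 2% annual defaults: {subsidy_rate(100.0, 0.07, 0.06, 5, 0.02):.2%}")
print(f"subsidy at 5% annual defaults: {subsidy_rate(100.0, 0.07, 0.06, 5, 0.05):.2%}")
```

Even in this simplified form, moving the assumed annual default rate from 2 to 5 percent more than triples the estimated subsidy, which is why better default, prepayment, and fee data should shrink the year-to-year technical reestimates discussed below.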
Agencies have had several years to obtain and refine historical data and estimation methodologies. Over time we would expect to find that, for a given cohort, the annual changes in reestimates due to technical factors would be smaller. We believe that agencies can improve their abilities to forecast such factors as defaults, recoveries, prepayments, and fee revenue through better modeling and more and better historical data. As this improvement occurs, the variability in the subsidy rate from year to year for a given cohort caused by these factors (as opposed to economic factors such as interest) should diminish. Because reliable component data were not available, we could not readily determine whether this had occurred. However, total subsidy estimates within a given cohort often varied widely over time. Moreover, estimates for different cohorts within the same program also differed widely. These sharp variations raise questions about the causes for these changes and the reliability of the underlying data. The success of credit reform budgeting relies on reasonable estimates informed by timely, appropriate actual data. We analyzed the data from several perspectives:

Comparing Estimated Subsidy Rates Over Time For a Given Cohort—In the programs we examined, we found that reestimates often were large and changed direction over time—increasing or decreasing estimated subsidy cost. During the period between a given cohort’s budget execution estimate and its most recent reestimate, we found that reestimates generally fluctuated both up and down. A total of 74 percent of the 23 cohorts we analyzed had at least one reestimate that increased the subsidy rate and at least one that decreased it. For example, estimates and reestimates of the fiscal year 1992 cohort of Education’s FFELP program changed direction in each of the 6 years of data; estimated subsidy rates ranged from 26.30 percent to 16.99 percent.
This could result from any number of factors, including changes in assumptions about cash flows as agencies gained experience in estimating subsidy rates or developed better actual loan data, and changes in the timing of loan activity or interest rates. Only VA’s Loan Guaranty Direct Loan Financing Account did not have both increases and decreases in its subsidy estimate, as shown in figure II.17. Graphs of selected cohorts are included in appendix II. Viewed another way, we compared two specific subsidy rate estimates—budget execution and the first reestimate—in eight programs for which we had appropriate data (27 cohorts). We found that the first reestimates of the subsidy rates were higher than the budget execution rates for 13 cohorts and were lower for 14 cohorts. We also found that 15 of the 27 cohorts had changes of at least 20 percent—7 increases and 8 decreases. One cohort in the SBA 7(a) program increased by 126 percent.

Comparing Estimated Subsidy Rates for Different Cohorts of a Given Program—We also analyzed the estimated subsidy rates for a given program by comparing the most recent estimates or reestimates for different cohorts. For example, as shown in figure II.16, the subsidy estimates and reestimates of the fiscal years 1992 through 1997 cohorts in VA’s Loan Guaranty Direct Loan Financing Account changed direction in each of the 6 years of data, and estimated subsidy rates ranged from 4.80 percent to 1.18 percent. We found that the fiscal year 1998 President’s Budget showed that six of eight programs had a lower estimated subsidy rate for their new fiscal year 1998 credit than they reestimated for their fiscal year 1992 credit, the oldest cohorts in our study. Only HUD’s Federal Housing Administration’s Mutual Mortgage Insurance Fund and VA’s Loan Guaranty Direct Loan Financing Account had estimated subsidy rates for fiscal year 1998 credit that were higher than the reestimates of the fiscal year 1992 cohort.
This relatively consistent pattern of lower estimated subsidy rates in fiscal year 1998 may reflect changes in economic conditions such as lower interest rates, data errors, and/or changes by agencies designed to lower the subsidy cost, such as increasing fees, reducing the share of the loan receiving the government guarantee, and improving debt collection. To determine the cause of specific subsidy rate differences would require examining the detailed assumptions used to estimate a program’s cash flows over the full term of the credit. Graphs of the most recently estimated subsidy rates for all cohorts in 9 of the 10 programs in our sample are included in appendix II. SBA’s Disaster Loan Program is not included because the agency had not reestimated the program’s subsidy costs at the time of our review.

Comparison of Estimated Subsidy Rates for Direct Loans to Those for Loan Guarantees—Loan type was not a predictor of whether subsidy rates increased or decreased from the budget execution rate to the first reestimate. For example, of the three direct loan programs and five loan guarantee programs for which we had appropriate data, only one program—VA’s Guaranty and Indemnity Fund guarantee program—did not have at least one cohort with an upward reestimate and at least one cohort with a downward reestimate.

Available Data Not Sufficient to Assess Whether Budgetary Treatment Affected Initial Subsidy Estimates

To obtain some insight on the potential effect of credit reform’s automatic appropriation (unconstrained by discretionary spending limits) for reestimates, we compared the budget execution estimate to the first reestimate for mandatory programs and for discretionary programs. As explained previously, initial appropriations for discretionary programs must compete with other programs for the specified amount of funding available under the discretionary spending limits set in law.
Mandatory credit programs are automatically funded for whatever amount of credit is needed for a given program design and set of program beneficiaries. Both discretionary and mandatory credit programs automatically receive funding for the cost of reestimates without regard to Budget Enforcement Act limits. Thus, agencies with discretionary credit programs could benefit from initially underestimating subsidy rates. If the pattern in the direction of reestimates for discretionary and mandatory programs were the same, it would be an indication that this provision of law was not affecting original estimates. It may be difficult to determine whether agencies intentionally underestimated subsidy costs in initial estimates given data unreliability and the number of other factors (such as changes in interest rates or other economic conditions) that could affect subsidy estimates and reestimates. We do know of one instance in which the issue was raised. SBA, an agency with discretionary credit programs, hired Price Waterhouse to conduct a diagnostic review of SBA’s existing internal controls. This September 1997 study said that “the credit subsidy process is not viewed as a way of assessing the future risk and costs of the program for management purposes. Rather, the rate calculation is perceived [by SBA] to be a tool for gaming the congressional appropriations process.” In commenting on a draft of this report, SBA officials disagreed with this conclusion. SBA officials stated that the Price Waterhouse report was incorrect and, due to its special nature, it was not corrected. In support of their position, SBA officials cited the quality of their data and staff and SBA’s commitment to have accurate and credible subsidy rates. However, we found, as discussed earlier, an error in SBA’s subsidy estimation methodology and that their component data were not always correct. 
Also, when we discussed SBA’s concerns with a Price Waterhouse staff member, he stated that the report was accurate for the period it covered, spring 1997. He described the report as based on interviews and a review of documentation. The available data we were able to obtain were not sufficient to assess whether a credit program’s budgetary treatment affected its initial subsidy estimates. We found somewhat similar patterns when we compared discretionary and mandatory programs. We found that the estimated subsidy rates for 8 of the 15 discretionary cohorts increased in the first reestimate following the initial appropriation, while first reestimates for 7 of the 12 mandatory cohorts decreased. This result is not conclusive: no firm judgment can be drawn from this observation about whether some discretionary programs may have sought to benefit from initially underestimating subsidy costs. Any firm conclusion about the reasons for reestimates would require better data and more in-depth study. Other factors, such as changes in the economy—including interest rates—or more historical data, may have contributed to these reestimates. Further, as audits have demonstrated, much of the data are not reliable. Also, sensitivity analyses and other sources showing the key variables that affect subsidy rates were not consistently available.

Lack of Reliable Component Data Hampers Ability to Determine the Causes of Changes in Subsidy Estimates

Data on the four components of subsidy expense—interest costs, net defaults, fees and other collections, and other subsidy costs—could be used to examine the causes for changes in subsidy rate estimates. Ideally, these data, calculated by the OMB subsidy model as part of the subsidy estimation process, would provide a ready basis to analyze such changes and thus identify possible policy responses.
However, these component data were frequently missing or inaccurate, and thus we were unable to use them for identifying causes of changes in estimates. SFFAS No. 2 provides general definitions of these components and requires agencies to disclose them in financial statements. The Federal Accounting Standards Advisory Board commented on the importance of such data stating that “the cost component information would be valuable for making credit policy decisions, monitoring portfolio quality, and improving credit performance. Information on interest subsidies and fees would help in making decisions on setting interest rates and fee levels. Information on default costs would help in evaluating credit performance.” It also could be useful as a performance measure to comply with the Government Performance and Results Act. With better data, decisionmakers could compare these components across programs and agencies to see the effect of programmatic differences. The potential usefulness of the component data was recently demonstrated by our analysis of SBA component data. Although SBA’s subsidy component data were flawed, they still provided a quick indication to GAO that there was an error in the fiscal year 1997 subsidy estimates for the section 7(a) General Business Loan Program. Correcting this error enabled SBA to guarantee approximately $2.5 billion more in section 7(a) small business loans. OMB and SBA officials acknowledged that better oversight and improved internal controls at both OMB and SBA are needed to prevent similar errors in the future. A model developed by OMB staff to calculate credit subsidies aggregates detailed data on defaults, recoveries, prepayments, and other cash flows to calculate the components of subsidy expense on a present-value basis. 
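The aggregation step can be sketched as follows. The cash flow numbers, component categories’ contents, and single flat discount rate below are invented for illustration only; OMB’s actual model works from much more detailed projections and Treasury rate assumptions.

```python
# Hypothetical sketch: aggregate projected cash flows (per $100 of credit)
# into the four subsidy-expense components on a present-value basis, so the
# components sum exactly to the total subsidy rate.

DISCOUNT = 0.06  # assumed flat Treasury discount rate

def pv(flows, rate=DISCOUNT):
    """Present value of year-end cash flows for years 1, 2, ..."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows, start=1))

# Positive numbers are costs to the government; negative are collections.
cash_flows = {
    "interest": [1.0, 1.0, 0.9, 0.8, 0.7],             # interest supplements
    "net defaults": [0.5, 1.5, 2.0, 1.0, 0.5],         # claims less recoveries
    "fees and other collections": [-2.0, -0.5, -0.5, -0.4, -0.3],
    "other subsidy costs": [0.2, 0.1, 0.1, 0.0, 0.0],
}

components = {name: pv(flows) for name, flows in cash_flows.items()}
total_subsidy = sum(components.values())

for name, value in components.items():
    print(f"{name:28s} {value:6.2f}% of disbursements")
print(f"{'total subsidy rate':28s} {total_subsidy:6.2f}%")
```

Because each component is the present value of an identifiable slice of the cash flows, a change in the total subsidy rate from one estimate to the next can be traced to the component that moved, which is precisely the kind of analysis the missing and inaccurate component data prevented.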
In describing the OMB credit subsidy model, OMB guidance to agencies says “use of a common subsidy model ensures comparability and uniformity among all Federal credit program subsidy estimates.” However, VA, USDA, SBA, and HUD did not distribute their cash flows consistently among the components. OMB Circulars A-11 and A-34 did not provide definitions of the subsidy components. The user’s guide for OMB’s subsidy model did not provide sufficiently clear definitions of the components to ensure that the components could be calculated accurately. Agency staff said they did not have a clear understanding of the definitions and thus were unsure both about where the OMB model allocated detailed data for each of their program’s cash flows in calculating the subsidy expense components and about what the data produced by the model represented. In an earlier report reviewing the credit subsidy model, we recommended that OMB revise the model to specifically identify which data were used by the model in the subsidy calculations. Further, OMB did not require agencies to use the subsidy model at a specific point in their estimation process as they are developing subsidy estimates for their budget submissions. Education officials noted that, with the participation and approval of OMB, Education uses the model as an interim step in subsidy estimation, not at the end as do other agencies. Education uses a methodology that makes minor adjustments to subsidy rates produced by the model. As a result, Education staff did not provide component data for its reestimated subsidy rates. Clear definitions of each subsidy component and specific OMB instructions and assistance to agencies in using its subsidy model could provide better assurance of accurate and comparable component data. Since our inquiries, staff at VA, USDA’s Rural Housing Service (RHS), and HUD worked with OMB staff to clarify requirements and address problems with component data.
Further, staff from five of eight programs whose component data we were able to examine acknowledged that the data were questionable. We found that component data were inaccurate because they were not consistent with program characteristics. RHS staff said that, before fiscal year 1996, the component data were incorrect because they adjusted underlying data in using early versions of the OMB subsidy model that did not accommodate their program characteristics. As discussed earlier, VA’s direct loan program showed the net default component as negative in each of the 7 years that data were available. This erroneously indicates that higher defaults would reduce the subsidy rate. Component data provided by the agencies are shown in the tables in appendix III. This potentially useful information was not understood by agencies, often was unavailable, sometimes was not accurate, and thus was not used to inform program management or budget decision-making. In a letter responding to a recent GAO report on OMB’s subsidy model, OMB acknowledged that it would be useful for the revised subsidy model to have a facility to display, at the option of the user, the calculation of subsidy percentages and components.

Improvements Underway

There have been a number of recent efforts to clarify and simplify implementation of the Credit Reform Act. The Balanced Budget Act of 1997 included some changes to the Credit Reform Act and OMB has changed its guidance as well. OMB is piloting a new reestimation methodology—called the “balances approach”—at HUD that it states will simplify the process. However, this new approach does not calculate the components of subsidy expense (interest, net defaults, fees and other collections, and other subsidy costs) over the entire term of the loans as does the current reestimate methodology.
The data from which to calculate components would remain available with the balances approach, but what is lost is having the component data calculated as a part of the reestimate process. Further, if there is no requirement to report or review the data in that way, agencies would have less incentive than now to make the calculations and use the data. OMB also is formulating a new approach to discounting cash flows that it states will improve accuracy without adding difficulty for agency staff. The Credit Task Force of the Accounting and Auditing Policy Committee, which includes OMB, Treasury, credit agency participants, and GAO, has been studying credit reform implementation in preparation for the first audit of the fiscal year 1997 governmentwide consolidated financial statements. The group has proposed guidance for agencies on methods and documentation for estimating subsidy rates and creating a store of historical data. It also developed draft guidance on preparing and auditing subsidy estimates that will be useful for budget, accounting, and auditing staff. The Credit Reform Committee of the Chief Financial Officers’ Council also has devised ways agencies can simplify implementation of credit reform. Other resources are available to help agencies enhance their capacities to make subsidy estimates. For example, OMB has provided short-term technical assistance with estimation and modeling to VA, SBA, and HUD. As those efforts continue, staff in agencies who report that they lack adequate resources for research or systems development could adapt strategies or data system formats that have been used successfully in other agencies. However, management-level commitment at all of the credit agencies and OMB is critical to continuing these efforts and to ensuring that the implementation of credit reform is an agency priority.
Conclusions

Greater sustained commitment by management at OMB and the credit agencies is needed to produce the data required to fully implement credit reform. Effective implementation requires timely, readily available, accurate estimates that are comparable among credit programs. Although there are indications that some agencies have taken this seriously, problems with the availability and reliability of subsidy estimates continue to permeate all agencies’ efforts at implementation. While agencies are working to improve their subsidy estimation processes, agency staff continue to report that credit reform implementation often is not a priority of top management. This is indicated both by the failure to ask questions about estimates in program and budget reviews and by the reported lack of sufficient computer systems to support the subsidy estimation process. Greater commitment by OMB and the credit agencies is needed to address pervasive problems with the availability and reliability of subsidy estimate data and documentation. Since agencies are most responsive to issues in which there is demonstrated interest, continued oversight would increase the likelihood that credit reform would be implemented as intended. Better and more reliable data are needed to facilitate this oversight. The availability of automatic funding for reestimates of subsidy costs creates an incentive for agencies with discretionary programs to initially underestimate subsidy costs. Whether or not agencies are responding to this incentive is unclear. Because the data generally are not reliable and because other factors, such as economic fluctuations (including changes in interest rates), could have caused changes in reestimates, more in-depth study and better data would be needed to draw a firm conclusion.
Accurate, consistent data on subsidy expense components could be used effectively by program managers and executive and congressional decisionmakers as originally intended—to monitor program implementation, consider program changes, and compare direct loan and loan guarantee programs designed for the same purpose. Currently, data are inaccurate or missing and agency staff said they do not understand the data. Therefore, component data have not been available to inform decision-making. While no single agency is yet successful in all aspects of credit reform implementation, some progress is being made at each of the agencies we studied. Over time, the scrutiny of financial statement audits will continue to bring greater discipline to the estimation process and greater accuracy to the reported subsidy costs.

Recommendations

We recommend that the Secretaries of Agriculture, Education, Housing and Urban Development, and Veterans Affairs, and the Administrator of the Small Business Administration improve oversight of credit reform implementation, including ensuring that (1) estimates are prepared accurately and (2) documentation supporting subsidy estimates included in the budget and financial statements is prepared and retained. So that agency staff can aggregate data from their cash flows into the OMB subsidy model accurately and consistently, we recommend that the OMB Director ensure that OMB staff (1) provide detailed guidance and definitions of the four subsidy components (interest, net defaults, fees and other collections, and other subsidy costs) and (2) revise the OMB subsidy model to provide agencies with the formula for calculating each component. We also recommend that the OMB Director ensure, to the extent possible, that agencies prepare accurate subsidy estimates, use consistent definitions of subsidy components, and have appropriate documentation.
Finally, we recommend that the OMB Director work toward identifying ways OMB can further assist agencies to more rapidly and accurately implement credit reform. These might include providing additional direct assistance to the agencies, developing prototypes for estimating methodologies, and prompting interagency forums for the exchange of information on problems and best practice solutions by working-level staffs.

Agency Comments and Our Evaluation

OMB and each of the five credit agencies we examined commented on a range of implementation problems and progress. OMB officials commented that the report’s focus on subsidy estimation is valuable. However, OMB officials stated that our analytical methodology was questionable because the report did not distinguish between the effect of interest rates on initial subsidy rates and the effect of default and other technical factors. Our methodology was designed to isolate the effects of interest rates by using component data and budget execution rates. It is true that interest rates would change each year even if the default and other technical assumptions remain constant. Unfortunately, we could not isolate these effects because agencies frequently did not provide these data or provided inaccurate data. We used budget execution rates as the starting point for our analyses. This reduced the effect of interest rate changes in the months between budget request and budget execution. OMB also stated that several of the report’s general conclusions about subsidy estimates were not supported by the evidence. We disagree. First, OMB officials noted that unless an adjustment is made for the effect of different discount rates, it is impossible to draw valid conclusions about the accuracy of subsidy rates by observing that they have fluctuated over time. We did not draw conclusions about the accuracy of subsidy rates. However, we did observe that the rates fluctuated over time and that some fluctuation would be expected.
We said in the report that the reliability of credit data is questionable for a number of reasons. Our characterization of the data was based primarily on results of the audits of the fiscal year 1996 financial statements as well as some discrepancies we identified between rates provided to us by agencies and those reported in the President’s Budget. Second, OMB officials said that we could not draw conclusions about a program from the size or direction of subsidy reestimates unless the effect of interest rate reestimates is removed. Our report did not draw conclusions about programs from the size or direction of subsidy reestimates. Third, OMB officials were concerned that our discussion of the timing of reestimates could lead to the incorrect conclusion that the budget formulation process for subsidy rates for new loans is not being informed by the experience on existing loan cohorts. We disagree. Such a conclusion regarding the effect of not performing reestimates is, in fact, correct. Given that we found three of the five agencies received waivers of the reestimate requirement, it would appear that their budget formulation is not being informed by the most recent experience on existing loan cohorts. (See appendix IV.) Agriculture officials stated that the report’s comments and suggestions will improve budget formulation and accounting for programs under credit reform. They further stated that FSA is working to address the concerns noted in the report. (See appendix V.) Education officials strongly agreed with the report’s emphasis on the importance of the Federal Credit Reform Act of 1990. Their comments note that, over the past 5 years, Education has steadily increased staff, contractor, and system resources dedicated to developing accurate and timely credit estimates. 
Education officials also raised a question about their perception that our report implied that "significant shifts in subsidy reestimates over time are necessarily a bad thing." Our report did not imply this. Rather, we said that over time, some fluctuation in subsidy rates would be expected, some estimates varied widely, and these sharp variations raise questions about the causes for these changes and the reliability of the underlying data. Education officials also provided some clarification of their use of the OMB subsidy model. (See appendix VI.) HUD officials acknowledged that the agency has experienced some reporting and estimation problems and stated that the agency has made significant progress since the enactment of the Federal Credit Reform Act of 1990. They noted that the draft report documents the difficulty that virtually every agency is experiencing in dealing with credit reform requirements. HUD officials stated that the problem with cost estimates has less to do with the level of effort devoted to data collection and the timeliness of reestimation than it has to do with inherent limitations of the net present value technique in cost estimation. We agree that credit reform has been a challenge to agencies. However, cash basis reporting for credit programs, while easier to accomplish, does not reflect their costs. When sufficient attention is devoted to net present value estimates of costs as required under the Federal Credit Reform Act, these subsidy costs will be a much better basis for budgeting. (See appendix VII.) SBA officials expressed concern that we quoted a Price Waterhouse report prepared at SBA's request. SBA officials stated that the Price Waterhouse report was incorrect and, due to its special nature, was not corrected. In support of their position, SBA officials cited the quality of their data and staff and SBA's commitment to have accurate and credible subsidy rates.
However, as discussed earlier, we found an error in SBA's subsidy estimation methodology and that their component data were not always correct. Also, when we discussed SBA's concerns with a Price Waterhouse staff member, he stated that its report is accurate for the period it covered, spring 1997, and was based on interviews and a review of documentation. SBA officials said their practices represent the leading edge of compliance with the Federal Credit Reform Act and requested that we acknowledge this if we agreed. Our report recognized that some progress in implementation is being made at each of the agencies we studied, and we specifically acknowledged SBA's commitment and efforts to improve. (See appendix VIII.) VA officials agreed with the report's recommendations and provided their insights regarding the complexities of credit estimation and its evolution over the years at VA. They also provided some technical comments, which we have addressed as appropriate. (See appendix IX.) We are sending copies of this report to the Ranking Minority Member of the Senate Committee on the Budget; the Chairman and Ranking Minority Member of the House Committee on the Budget; the Director, Office of Management and Budget; the Director, Congressional Budget Office; the Secretaries of Agriculture, Education, and Housing and Urban Development, and the Acting Secretary of Veterans Affairs; the Administrator of the Small Business Administration; and interested congressional committees. Copies also will be made available to others upon request. This report was prepared under the direction of Christine Bonham, Assistant Director, Budget Issues, who may be reached at (202) 512-9576. Other major contributors to this report are listed in appendix X. Please contact me at (202) 512-9142 if you or your staff have any questions concerning the report.
Background: Credit Reform The federal government uses direct loans and loan guarantees as tools to achieve numerous program objectives, such as assistance to housing, agriculture, education, small businesses, and foreign governments. At the end of fiscal year 1996, the face value of the government’s direct loans and loan guarantees totaled a reported $973 billion, of which $167 billion was in direct loans and $806 billion was in loan guarantees. After over 20 years of discussion about the shortcomings of using cash budgeting for credit programs and activities, the Federal Credit Reform Act of 1990 was enacted on November 5, 1990, as Title 13B of the Omnibus Budget Reconciliation Act of 1990, Public Law 101-508. The Federal Credit Reform Act changed the budget treatment of credit programs so that their costs can be compared more accurately with each other and with the costs of other federal spending. It also was intended to ensure that the full cost of a credit program over its entire life would be reflected in the budget when the loans were made so that the executive branch and the Congress might consider that cost when making budget decisions. In addition, it was recognized that credit programs had different economic effects than most budget outlays, such as purchases of goods and services, income transfers, and grants. In the case of direct loans, for example, the fact that the loan recipient was obligated to repay the government over time meant that the economic effect of a direct loan disbursement could be much less than a noncredit budget transaction of the same dollar amount. The change in economic behavior resulting from loan guarantees occurred when the loan was made, not when the government’s cost was included in the federal budget. Thus, for both direct loans and loan guarantees the budget did not reflect the change in economic behavior. 
Credit Reform Was Designed to Remove Difficulties Caused by Cash Treatment Before credit reform, it was difficult to make appropriate cost comparisons between direct loan and loan guarantee programs and between credit and noncredit programs. Credit reform requirements were formulated to address the factors that caused this problem. Two key principles of credit reform are (1) the definition of cost in terms of the present value of cash flows over the life of a credit instrument and (2) the inclusion in the budget of the costs of credit programs in the year in which the budget authority is enacted and the direct or guaranteed loans first may be disbursed. Credit Reform Was Designed to Allow Appropriate Cost Comparisons Before credit reform, credit programs—like other programs—were reported in the budget on a cash basis. This cash basis distorted costs and, thus, the comparison of credit program costs with other programs intended to achieve similar purposes, such as grants. It also created a bias in favor of loan guarantees over direct loans regardless of the actual cost to the government. Loan guarantees appeared to be free in the short run while direct loans initially appeared to be as expensive as grants because the budget did not recognize that at least some of the guaranteed loans would default and that some of the direct loans would be repaid. For direct loans, the budget for most discretionary accounts used revolving funds, which showed budget authority and outlays in the amount by which loan disbursements in the current year exceeded repayments received on all past loans in that year. This cash approach overstated direct loan costs in the initial years of a program when loan disbursements were likely to be greater than repayments. Conversely, this treatment understated costs in later years when loan repayments were more likely to be much larger relative to disbursements.
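This timing distortion for direct loans can be sketched with a small numeric example. All figures below are hypothetical and for illustration only; they are not drawn from the report or from any agency's data.

```python
# Hypothetical pre-credit-reform revolving fund (illustrative figures only).
# The cash-basis budget recorded net outlays: current-year loan disbursements
# minus repayments received on all past loans in that year.

disbursements = {1: 100.0, 2: 100.0, 3: 100.0}   # new direct loans each year
repayments = {2: 35.0, 3: 70.0, 4: 105.0, 5: 70.0, 6: 35.0}  # on all past loans

net_outlays = [disbursements.get(y, 0.0) - repayments.get(y, 0.0)
               for y in range(1, 7)]
print(net_outlays)  # [100.0, 65.0, 30.0, -105.0, -70.0, -35.0]
# Early years overstate the cost; later years understate it (the program
# appears to "make money"), even though lifetime net cash is only -15.0.
```

On these assumptions the budget would have shown $100 of cost in year 1, as if the loans were grants, and negative costs in years 4 through 6, illustrating both halves of the distortion described above.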
In contrast, for loan guarantees, the budget did not record any outlays when the guarantees were made (except the negative outlay resulting from any origination fees), even though the program was likely to entail future losses. Budget authority and outlays were recorded only when defaults occurred. Credit reform changed this treatment for direct loans and loan guarantees made on or after October 1, 1991. It required that budget authority to cover the cost to the government of new loans and loan guarantees (or modifications to existing credit instruments) be provided before the loans, guarantees, or modifications are made. Credit reform requirements specified a net cost approach using estimates for future loan repayments and defaults as elements of the cost to be recorded in the budget. This puts direct loans and loan guarantees on an equal footing; it permits the costs of credit programs to be compared with each other and with the costs of non-credit programs when making budget decisions. Credit Reform Identifies the Government’s Cost of Credit Activities Credit reform requirements separate the government’s cost of extending or guaranteeing credit, called the subsidy cost, from administrative and unsubsidized program costs. Administrative expenses receive separate appropriations. They are treated on a cash basis and reported separately in the budget. The unsubsidized portion of a direct loan is that which is expected to be recovered from the borrower. The Federal Credit Reform Act defines the subsidy cost of direct loans as the present value of disbursements—over the loan’s life—by the government (loan disbursements and other payments) minus estimated payments to the government (repayments of principal, payments of interest, other recoveries, and other payments). In making these calculations, agencies must include the cost to the federal government of borrowing the funds. 
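The act's direct-loan subsidy definition can be illustrated with a small sketch. The discount rate and cash flows below are hypothetical assumptions chosen for illustration, not figures from the act or from any program in this report.

```python
# Sketch of the direct-loan subsidy definition, with hypothetical figures.
# Subsidy cost = PV(government disbursements) - PV(estimated payments to the
# government), discounted at the government's assumed borrowing rate.

def pv(flows, rate):
    """Present value of (year, amount) cash flows at a constant discount rate."""
    return sum(amount / (1 + rate) ** year for year, amount in flows)

treasury_rate = 0.06                 # assumed cost of Treasury borrowing
disbursements = [(0, 1000.0)]        # principal paid out when the loan is made
collections = [(1, 380.0), (2, 380.0), (3, 340.0)]  # principal, interest, and
                                                    # other recoveries, net of
                                                    # estimated defaults

subsidy_cost = pv(disbursements, treasury_rate) - pv(collections, treasury_rate)
subsidy_rate = subsidy_cost / 1000.0  # cost per dollar of loans disbursed
print(f"{subsidy_rate:.1%}")          # about 1.8% on these assumptions
```

Note that even though the borrower is expected to repay slightly more than $1,000 in nominal terms, discounting at the government's borrowing rate still yields a positive subsidy cost, which is the amount the program account must pay to the financing account.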
The act defines the subsidy cost of loan guarantees as the present value of cash flows from estimated payments by the government (for defaults and delinquencies, interest rate subsidies, and other payments) minus estimated payments to the government (for loan origination and other fees, penalties, and recoveries). Agencies prepare these cost estimates on a net present value basis as a part of their budget request. For the budget years we reviewed, agencies then recalculated the subsidy rate when they extended credit by updating the approved rate for any changes in interest rates and legislation. Agencies then make direct loans and loan guarantee commitments as permitted under this appropriation. Later, after the end of the fiscal year, agencies reestimate subsidy costs based on actual experience and expected economic changes. Credit Programs Now Use Three Budgetary Accounts The Federal Credit Reform Act set up a special budget accounting system to record the budget information necessary to implement credit reform. It provides for three types of accounts—program, financing, and liquidating—to handle credit transactions. Credit obligations and commitments made on or after October 1, 1991—the effective date of credit reform—use only the program and financing accounts. The program account receives separate appropriations for administrative and subsidy costs of a credit activity and is included in budget totals. When a direct or guaranteed loan is disbursed, the program account pays the associated subsidy cost for that loan to the financing account. The financing account, which is nonbudgetary, is used to record the cash flow associated with direct loans or loan guarantees over their lives. It finances loan disbursements and the payments for loan guarantee defaults with (1) the subsidy cost payment from the program account, (2) borrowing from the Treasury, and (3) collections received by the government. Figure I.1 diagrams this cash flow.
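The act's loan-guarantee subsidy definition can be sketched in the same way. Again, the discount rate, claim amounts, and fees below are hypothetical assumptions for illustration, not figures from any program discussed here.

```python
# Sketch of the loan-guarantee subsidy definition, with hypothetical figures.
# Subsidy cost = PV(estimated payments by the government, e.g., default claims)
# minus PV(estimated payments to the government, e.g., fees and recoveries).

def pv(flows, rate):
    """Present value of (year, amount) cash flows at a constant discount rate."""
    return sum(amount / (1 + rate) ** year for year, amount in flows)

rate = 0.06
guaranteed_principal = 1000.0
payments_by_government = [(2, 40.0), (3, 30.0)]  # estimated default claims paid
                                                 # to lenders in later years
payments_to_government = [(0, 10.0), (3, 12.0)]  # origination fees, recoveries

subsidy_cost = pv(payments_by_government, rate) - pv(payments_to_government, rate)
subsidy_rate = subsidy_cost / guaranteed_principal
print(f"{subsidy_rate:.1%}")  # about 4.1% of guaranteed principal here
```

The sketch shows why the pre-reform cash treatment was misleading for guarantees: at origination the only cash movement is a small fee inflow, yet on these assumptions the guarantee has a positive expected cost that credit reform requires to be budgeted up front.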
If subsidy cost calculations are accurate, the financing account will break even over time as it uses its collections to repay its Treasury borrowing. Direct loans and loan guarantees made before October 1, 1991, are reported on a cash basis in the liquidating account. This account continues the cash budgetary treatment used before credit reform. It has permanent indefinite budget authority to cover any losses. Excess balances are transferred periodically—at least annually—to the Treasury. In addition to the three accounts specified in the Federal Credit Reform Act, OMB has directed that discretionary credit programs or activities with negative subsidies must have special fund receipt accounts. These accounts hold receipts generated when the program or activity shows a profit or when a downward reestimate of subsidy costs indicates that the financing account balance is too high. OMB guidance provides that discretionary programs cannot use these receipts unless they are appropriated, while mandatory programs may use the receipts without appropriation action. OMB and Treasury Provide Implementation Guidance OMB and the Department of the Treasury provide guidance on implementing credit reform. OMB’s written guidance is contained primarily in OMB Circulars A-11, A-34, and A-129. OMB also has issued memorandums to provide additional implementation guidance addressing specific situations. Treasury’s guidance is provided in materials such as Basic Transactions Relating to Guaranteed Loans and Subsidies, which contains a number of illustrative cases developed by its Financial Management Service and distributed to agencies as examples of how to account for credit reform transactions. Accounting guidance, consistent with the intent of the Federal Credit Reform Act, is found in Accounting for Direct Loans and Loan Guarantees, Statement of Recommended Accounting Standards, Number 2. 
This guidance was developed by the Federal Accounting Standards Advisory Board (FASAB) and approved in July 1993 by OMB, the Department of the Treasury, and GAO. Implementation Guidance Has Changed Fiscal year 1998 is the seventh year that credit programs have been required to comply with credit reform. Agencies that operate credit programs and those that provide implementation guidance—OMB and Treasury—have had to address a variety of situations for which the Federal Credit Reform Act does not provide explicit direction. OMB and Treasury have refined their guidance to agencies based on greater experience with the processes and data requirements for implementing credit reform and on more information on agencies' limitations and abilities. The Balanced Budget Act of 1997 amended the Federal Credit Reform Act to clarify and simplify the requirements for subsidy cost estimation. Among other changes, the Federal Credit Reform Act was amended to require agencies to make loans and guarantees using the technical assumptions, such as defaults, recoveries, and fees, included in the President's Budget for the year in which funds are obligated. As a result, the dollar amount of loans approved by the Congress will not be increased or decreased by subsequent changes to technical assumptions. The act also was amended to require agencies to return to the Treasury any excess funds in accounts for pre-1992 credit, and to change budgetary treatment of credit from the Federal Financing Bank. At the same time, OMB has drafted revised guidance to credit agencies. The changes include eliminating the requirement to reestimate annually the change in subsidy cost due to changes in interest rates as disbursements are made for a cohort's loans and loan guarantees. Instead, agencies are required to do only one interest rate reestimate when the cohort is 90 percent disbursed.
Interest rate reestimates before a cohort is fully disbursed are of questionable validity since the discount rate will continue to change. Such reestimates cause large swings in subsidy estimates with no value added to management decision-making or the reliability of budgetary or financial reporting. OMB also has developed an alternative, simplified method for agencies to calculate reestimates—called the "balances approach." This new approach, which is being tested at HUD, looks forward, projecting and discounting remaining cash flows from a cohort and comparing them to the current balance owed to Treasury. The method used to date looks both backward and forward, requiring agencies to revise estimates of all cash flows for a cohort—those that already have occurred and those in the future. However, this new approach does not calculate the components of subsidy expense (interest, net defaults, fees and other collections, and other subsidy costs) for the entire term of the loans as does the current methodology. The data from which to calculate components would remain available with the balances approach, but what is lost is having the component data calculated as a part of the reestimate process. Further, if there is no requirement to report or review the data in that way, agencies would have less incentive than now to make the calculations and use the data. Several interagency groups reviewing agencies' implementation and the current requirements of credit reform also have made recommendations that have been adopted or endorsed by OMB and Treasury. The Credit Reform Committee of the Chief Financial Officers Council has recommended certain actions that would simplify budget execution and accounting. The Credit Task Force of the Accounting and Auditing Policy Committee (formerly the Credit Subgroup of the Government-wide Audited Financial Statements Task Force) has issued three papers.
The first outlines an ideal model for estimating and documenting subsidy rates. The paper recognizes that credit agencies are many years away from being able to implement such a method, but discusses reasonable methods for subsidy rate estimation and the types of actual loan data that might be maintained to support agency subsidy estimates. The second paper provides draft guidance for agencies' budget and accounting staff and auditors for preparing and auditing direct loan and loan guarantee subsidy estimates. The third paper outlines recommended changes in the accounting standards for direct loans and loan guarantees and interpretations of the standards on the display of the components of subsidy expense. Program Descriptions and Graphs of Estimated Subsidy Rates of Selected Programs This appendix provides a brief description of the programs we selected, indicates whether the program is discretionary or mandatory, illustrates changes in the programs' estimated subsidy rates from two perspectives, and includes a brief explanation of the more significant changes in rates as provided by agencies. The first graph for a program shows the most recently estimated total subsidy rates for each year's cohort of credit. These figures provide a snapshot of agencies' most recent estimates of the government's subsidy expense for these credit programs. Differences in the subsidy rate estimates for the different cohorts of a program may be due to improved estimates (perhaps from greater experience) or to changes in program characteristics or economic conditions. The second graph for a program profiles the estimated subsidy rates of a given cohort over time. We graphed the fiscal year 1992 cohorts because they were more likely to have the most extensive subsidy reestimate data. We graphed fiscal year 1994 data for Education's Ford Direct Loan Program because the program began in that year and, therefore, it is the first year for which data were available.
These figures depict changes over time in the agencies' knowledge about the cost of loans funded in a given fiscal year. Estimated subsidy rates for some programs have greater variability, as seen in the different vertical scales of the graphs. We did not prepare either the total estimated subsidy rate or the cohort profile graphs for SBA's Disaster Loan Program. We did not prepare the cohort profile for the USDA/Farm Service Agency (FSA) Farm Operating Loan Program. Neither of the agencies provided sufficient or consistent data. The figures in this appendix are based on data as reported and verified by agencies. We did not independently verify the accuracy of these data. The data points used to create these figures are shown in bold on the tables in appendix III. Farm Service Agency, U.S. Department of Agriculture Farm Operating, Direct Loans, Discretionary Loans are made to family farmers who are unable to obtain credit from private and cooperative sources for farm operating purposes such as purchasing livestock, poultry, and farm and ranch equipment; purchasing feed, seed, or fertilizer; meeting other farm or ranch operating expenses; and paying family living expenses. The use of loan funds is intended to help provide farmers with the opportunity to conduct successful farm operations. Agency budget officials attributed the relatively large drop in subsidy rates between the fiscal year 1997 budget execution estimate and the fiscal year 1998 budget request to the spread between the interest rates charged to borrowers and Treasury interest rates that represent the agency's cost of capital. The spread in interest rates increased nearly 100 percent for fiscal year 1998. Further, a program change for fiscal year 1998 reduced write-offs without acquired property by more than 50 percent. (See figure II.1.)
Because of changes over time in the way the FSA’s Farm Operating estimated subsidy rate data were aggregated, we were not able to graph a profile of an individual cohort over time. Rural Housing Service, U.S. Department of Agriculture Single Family Housing, Direct Loans, Discretionary Single family loans are made to very low- and low-income families who are without adequate housing and cannot obtain credit from other sources. Funds may be used to build, purchase, repair, or refinance homes in rural areas. Borrowers are required to “graduate” from the direct loan program when their incomes are sufficient to afford credit from the private sector. For figure II.2, agency staff attributed the relatively large drop in subsidy rates between “Reestimated FY 1995” and “FY 1996 Execution” primarily to the decrease in Treasury discount interest rates and the increase in borrower interest rates. This increase in borrower interest rates is due to a change in the Rural Housing Service’s (RHS) regulations in fiscal year 1996 which reduced the payment assistance to borrowers. Figure II.3 is influenced by P.L. 102-142, §742, which required execution rates to be at or below the rates published in the President’s fiscal year 1992 Budget. The agency used the rate in the President’s Budget because rates based on actual data would have been higher. In effect, the estimated subsidy rate was limited by law. According to agency staff, this resulted in a large first reestimate of fiscal year 1992 subsidy expense. Federal Family Education Loan Program, Department of Education Stafford, Guaranteed Loans, Mandatory The Federal Family Education Loan Program (FFELP) is intended to encourage private lending to vocational, undergraduate, and graduate students enrolled at eligible postsecondary institutions to help pay for educational expenses. The loans are insured by a state or private nonprofit guaranty agency and reinsured by the federal government. 
Generally, a borrower is not required to make any payments on the principal while still in school. As shown in figure II.4, a relatively large decrease in the estimated subsidy rate occurred between the rate used for fiscal year 1997 budget execution and the rate requested in the President’s fiscal year 1998 Budget. According to Department of Education staff, this decrease was due primarily to (1) decreases in interest rates from when the reestimates were calculated to when the budget request was formulated and (2) legislative proposals included in the budget request. For figure II.5, Education staff attributed the relatively large increase from the third reestimate to the fourth reestimate of the fiscal year 1992 cohort’s subsidy rate to a new reestimate methodology that showed higher defaults. The fifth reestimate used a methodology that showed defaults comparable to those in the third reestimate. Stafford, Direct Loans, Mandatory To help defray costs of education at a participating school, loans are made directly from the federal government to vocational, undergraduate, and graduate students. Generally, a borrower is not required to make any payments on the principal while still in school. For figure II.6, Department of Education staff explained that the drop in estimated subsidy rates between the latest reestimate for fiscal year 1996 and fiscal year 1997 budget execution was due primarily to decreases in interest rates from when the reestimates were calculated to when the budget request was formulated. Because of programmatic design, small changes in interest rates result in relatively larger changes in subsidy rates for direct loans than for loan guarantees. For figure II.7, the subsidy estimate increased in the second reestimate and decreased in the third reestimate. Education staff said they used a different methodology for the second reestimate that showed higher defaults. 
They attributed the decrease between the second and third reestimates of the fiscal year 1994 cohort to using a methodology that showed defaults comparable to those in the first reestimate. Federal Housing Administration (FHA) Mutual Mortgage Insurance Fund, Section 203(b), Guaranteed Loans, Discretionary To help people become homeowners, HUD provides insurance to lenders against losses on mortgage loans. These mortgage loans may be used to finance the purchase of one-to-four family housing that is proposed, under construction, or existing, as well as to refinance indebtedness on existing housing. As shown in figure II.8, there was a relatively large increase in the reestimated subsidy rates (in this case, a smaller negative subsidy) between the fiscal year 1994 and 1995 cohorts. According to HUD officials, this increase was driven primarily by an increase in total claim rates—from 4.95 percent for the fiscal year 1994 cohort in the 1998 Budget to 8.01 percent for the fiscal year 1995 cohort in the 1998 Budget. Housing Programs, Department of Housing and Urban Development FHA General and Special Risk Insurance Fund, Multifamily Refinance 223(f), Guaranteed Loans, Discretionary Lenders are insured against loss on the purchase or refinance of existing multifamily housing projects. Only rental housing projects not requiring substantial rehabilitation are eligible. Figure II.10 shows a sharp decline in estimated subsidy rates for this program between the budget execution rates in fiscal years 1996 and 1997. According to HUD officials, this decline reflects the use of updated assumptions in fiscal year 1997. These new assumptions incorporated an additional 5 years of performance data as well as the initial experience of FHA’s mortgage sales. As shown in figure II.11, there was a sharp drop between the second and third reestimated subsidy rates of the fiscal year 1992 cohort. 
HUD officials attributed this downward shift to a lower estimate for defaults and a higher estimate for fees. Small Business Administration 7(a) General Business, Guaranteed Loans, Discretionary Lenders are guaranteed against loss from loans to small businesses that are unable to obtain financing in the private credit market but can demonstrate the ability to repay loans. Guaranteed loans are made available to low-income business owners or businesses located in areas of high unemployment; nonprofit sheltered workshops and other similar organizations that produce goods or services; and to small businesses being established, acquired, or owned by handicapped individuals. For figure II.12, Small Business Administration (SBA) officials attributed the increase in reestimated subsidy rates between fiscal years 1994 and 1995 primarily to an increase in the assumed purchase rate from 16.85 percent to 17.25 percent. The purchase rate is the percent of remaining principal and interest on defaulted guaranteed loans that SBA expects to pay in claims from lenders. The following year’s decrease, they explained, resulted primarily from the imposition of new and/or modified program fees. For figure II.13, SBA officials explained that the changes in subsidy rates estimated for the fiscal year 1992 cohort were due in part to changes in the discount rate and in part to differences between anticipated and actual purchase activity, fee collections, and recoveries. Small Business Administration Disaster, Direct Loans, Discretionary Loans are made to homeowners, renters, businesses of all sizes, and nonprofit organizations that have suffered uninsured physical property loss as a result of a disaster in an area declared eligible for assistance by the President or SBA. The loans may be used to repair and/or replace property to predisaster conditions. 
SBA did not provide sufficient data on this program to allow us to graph either the total subsidy rate estimates for each cohort or a profile of an individual cohort. Veterans Benefits Administration, Department of Veterans Affairs Guaranty and Indemnity Fund, Guaranteed Loans, Mandatory These loans assist veterans and certain others in obtaining credit for the purchase, construction, or improvement of homes on more favorable terms than are generally available to nonveterans. Lenders are guaranteed partial repayment of loans made to these individuals. As shown in figure II.14, a relatively sharp decline in reestimated subsidy rates occurred from the fiscal year 1993 cohort to the fiscal year 1995 cohort. Department of Veterans Affairs (VA) staff attributed this decline primarily to changes in foreclosure rates, discount rates, and funding fees, as well as the application of actual cohort data. Figure II.15 shows a decline between the third and fourth reestimates of the fiscal year 1992 cohort. VA officials said this decline was a result of increased inflows from recoveries and the effect of having more actual cohort data. Loan Guaranty Direct Loans, Mandatory This program makes home loans on favorable terms to members of the general public—both veterans and nonveterans—purchasing a VA-owned property. These properties include homes that VA has acquired as a result of foreclosures on VA guaranteed loans. Figure II.16 shows that a dramatic drop in reestimated subsidy rates occurred between the fiscal years 1994 and 1995 cohorts. According to VA staff, this decrease is associated primarily with the inclusion of actual cohort data as well as a significant increase in the estimated proceeds from loan sales. Estimated Subsidy Rates of Selected Programs This appendix presents a summary of the completeness of estimated subsidy rate data and supporting documentation provided by each agency. 
The estimated subsidy rates, as reported and confirmed by each agency, are also shown for the 10 programs. We did not examine the quality of the data underlying the subsidy estimates or the agencies' estimation process. We determined whether the budget request estimate, the budget execution estimate, and all reestimates for each fiscal year were supported by output from OMB's credit subsidy model and cash flow spreadsheets. Table 1 summarizes the completeness of these data. Although we did not examine the data quality, some quality issues were apparent. We found 11 instances where data confirmed by agencies did not agree with those reported in the President's Budget or Credit Supplement. When questioned about this, agencies provided additional data supporting the President's Budget in four instances. For the remaining seven cases, we show the estimated subsidy rates that were supported in agency documentation. We also did not evaluate the timeliness or frequency of the reestimates. Currently, agencies do not necessarily reestimate prior year cohorts on the same schedule. According to the June 23, 1997, version of OMB Circular A-11, section 33.5(s), "Reestimates must be made at the beginning of each fiscal year, as long as any loans in the cohort are outstanding, unless a different plan is approved by OMB." In other words, the first reestimate of the fiscal year 1996 cohort generally should be included in the fiscal year 1998 budget request. OMB has permitted some agencies to vary from this schedule. USDA, for example, received a waiver from OMB allowing its reestimates to be prepared in the middle of the fiscal year rather than the beginning. Therefore, updated cost information from USDA's first reestimate of the fiscal year 1996 cohort would not be included in the budget submission until the fiscal year 1999 budget request.
Although we did not independently verify the data provided by agencies, the numerous modified audit opinions these agencies have received on their credit programs, the discrepancies we found between the Budget and the data provided to us by the agencies, and other work we have done raise serious concerns about the quality of the data. For example, USDA’s Office of the Inspector General (OIG) rendered a qualified opinion on the fiscal year 1996 financial statements of the rural development mission area because the OIG was not able to obtain sufficient, competent evidence to support the agency’s credit program receivables and estimated losses on loan guarantees and the related credit program subsidy and appropriated capital used. For fiscal year 1996, Education’s OIG also was unable to render an audit opinion on the department’s credit activities due to auditor concerns about the integrity of the data supporting estimates of the Federal Family Education Loan Program. HUD received a qualified audit opinion from its OIG in fiscal year 1996 because FHA’s credit-related accounts were not converted to a present value basis, as required by the Statement of Federal Financial Accounting Standards No. 2, Accounting for Direct Loans and Loan Guarantees. Consequently, the OIG was unable to audit the credit-related account balances. Tables III.1 through III.10 present the subsidy rate estimates reported and confirmed by each agency. The columns represent each budget year—1992 through 1998—since credit reform was enacted. Each table is horizontally divided into five sections. The top section of each table shows the total subsidy rate estimated and reestimated for each cohort, as calculated at different points in time. The four bottom sections represent the components of subsidy expense—interest, net defaults, fees and other collections, and other subsidy costs. These four components add to the total subsidy rate.
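The present value basis referred to above can be illustrated with a minimal sketch: a subsidy rate is, in essence, the amount disbursed less the present value of expected net cash inflows, expressed as a share of the disbursement. The cash flows and the 5 percent discount rate below are invented for illustration, and the function is a simplification of what OMB's credit subsidy model actually computes.

```python
# Illustrative sketch of a present-value subsidy rate calculation.
# All figures are hypothetical; real estimates come from OMB's credit
# subsidy model and agency cash flow spreadsheets.

def subsidy_rate(disbursement, net_inflows, rate):
    """Subsidy rate (percent): disbursement less the present value of
    expected net cash inflows, as a share of the disbursement."""
    pv = sum(cf / (1 + rate) ** t for t, cf in enumerate(net_inflows, start=1))
    return 100.0 * (disbursement - pv) / disbursement

# A hypothetical $100 direct loan repaid over three years, discounted
# at an assumed 5 percent rate; inflows are already net of defaults.
rate = subsidy_rate(100.0, [30.0, 40.0, 38.0], 0.05)
print(round(rate, 2))  # prints 2.32
```

A positive rate means the credit costs the government money on a present-value basis; a negative rate, as some programs in the appendix show, means expected inflows exceed the disbursement.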
The following is an excerpt of the top section from table III.3, presented to illustrate how subsidy rate data are arrayed. Because the bottom sections organizationally mirror the top section, only the top section is shown in this example. The seven columns labeled “Cohorts” contain the available budget request, budget execution, and reestimated subsidy rates for the credit funded by appropriations in the indicated fiscal year. The left-hand column contains the type of estimate prepared and gives some indication about when the estimate was calculated. Thus, the first row, labeled “Request,” shows the subsidy rate used to prepare the budget year request for that fiscal year. In the sample table above, the fiscal year 1992 budget request used an estimated subsidy rate of 22.60 percent. Reading down the fiscal year 1992 column, the budget execution rate shows the subsidy rate estimated for the fiscal year 1992 cohort after the beginning of fiscal year 1992—after the appropriation was received but before loans were made. Reestimate data are shown in the order in which they were prepared. The shaded cells represent estimates that will be made in the future when future budgets are prepared (e.g., the budget execution estimate for fiscal year 1998 and the first reestimate of fiscal year 1997 should have been done for the fiscal year 1999 budget.) The numbers from these tables that support the graphs in appendix II are shown in bold. For cells that are neither shaded nor filled in with data, the estimated subsidy rate was not provided. The absence of the data might reflect incomplete agency files or indicate that the agency did not prepare estimates. Notes to the tables provide additional explanatory information in some instances when data were not provided. OMB has periodically released updated versions of its credit subsidy model. Several of these versions aggregated the component data differently. 
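The table layout just described can be sketched as a small grid of estimate types by cohort years. Apart from the 22.60 percent fiscal year 1992 request rate taken from the sample table, the row labels are abbreviated and all other cells are placeholders (None marks rates that were not provided).

```python
# A minimal sketch of how the appendix III tables are arrayed: rows are
# estimate types in the order prepared, columns are cohort fiscal years
# 1992 through 1998. Only the 22.60 percent FY 1992 request rate comes
# from the text; None stands in for cells without data.

rows = ["Request", "Execution", "Reestimate 1", "Reestimate 2"]
cohorts = range(1992, 1999)

table = {row: {year: None for year in cohorts} for row in rows}
table["Request"][1992] = 22.60  # rate used to prepare the FY 1992 request

def column(table, year):
    """Read down one cohort's column: its estimate history over time."""
    return [(row, table[row][year]) for row in rows]

print(column(table, 1992)[0])  # ('Request', 22.6)
```

Reading down a column in this way is what the graphs of a single cohort's reestimates over time (for example, figure II.15) are built from.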
For example, early versions of the model combined the two components “Fees and Other Collections” and “Other Subsidy Costs.” We showed all comparable data provided by the agencies. [Tables III.1 through III.10, which present the subsidy rates (in percent) and their interest, net defaults, fees and other collections, and other subsidy cost components by agency and program, are not reproduced here. Recurring notes to the tables state the following: some USDA subsidy rate estimates were divided into four sub-risk categories and others were calculated quarterly, so those data were not comparable to the other reported data and were not included; for several programs, comparable data for the “Fees” and “Other Subsidy Cost” components were not provided; and VA consolidated its programs midway through fiscal year 1997, so that for one program a rate of 1.47 percent was effective from October 1996 through March 1997 and the consolidated fiscal year 1998 rate of 0.49 percent was used from April through September 1997, while for another program the corresponding rates were 1.56 percent and 2.61 percent.] Comments From the Office of Management and Budget The following are GAO’s comments on the Office of Management and Budget’s March 11, 1998, letter. GAO Comments 1. See “Agency Comments and Our Evaluation” section of the report. 2.
We did not draw conclusions about the accuracy of subsidy rates from our observation that subsidy rates have fluctuated over time. Our report stated that the reliability of credit data was questionable for a number of reasons. Our characterization of the data was based primarily on results of the audits of the fiscal year 1996 financial statements as well as some discrepancies we identified between rates provided to us by agencies and those reported in the President’s Budget. 3. The report does not propose a causal relationship between reestimates and subsidy rate fluctuations. Further, changes in assumptions cause the subsidy rate for a cohort of loans to fluctuate from year to year, not reestimates. 4. We did not assess whether any assumptions used in subsidy rate estimation were accurate. Further, we did not draw conclusions about a program from the size or direction of subsidy reestimates. We acknowledged that the fluctuations in subsidy rates could be due to credit extended at different interest rates than anticipated or to unanticipated changes in the economy, as well as other factors. We clarified the report language to state that interest rate changes are an example of such unanticipated economic changes. 5. Report text revised to clarify that the patterns are similar. 6. We compared the subsidy rate estimated for budget execution against the first subsidy reestimate for the following reasons. First, we used budget execution (rather than budget request) because it eliminated the effect of interest rate changes in the months between when the budget request rate was formulated and the time the government is obligated or committed for the loans. Second, credit reform guidance requires agencies to have appropriations of budget authority to cover the full estimated net present value cost of outstanding credit. 
Third, we used the first reestimate because if an agency sought to benefit from initially underestimating subsidy costs and there was oversight by OMB to ensure that agencies have sufficient appropriations of budget authority, the agency would have to obtain budget authority to cover the shortfall as soon as possible—that is, in the first reestimate. 7. Given that three of the five agencies received waivers of the reestimate requirement, it would appear that their budget formulation is not being informed by the most recent experience on existing loan cohorts. The Credit Task Force of the Accounting and Auditing Policy Committee, which includes OMB and GAO representation, proposed that agencies that have OMB approval be allowed to use a combination of actual and projected data as the basis for reestimates. By allowing agencies to begin the reestimation process earlier, this approach should reduce the need for waivers—permitting agencies to use more recent actual data to inform budget formulation and to include them in the President’s Budget and financial statement audits. 8. We agree that this would be useful information. However, we could not obtain sufficient component data from agencies to evaluate whether subsidy estimation has improved over time, as indicated by smaller annual changes in reestimates due to technical factors. Our report does include a section discussing a number of recent efforts to clarify and simplify implementation of the Credit Reform Act. In other ongoing work, we are evaluating data reliability, barriers to credit reform implementation, and agency plans to overcome those barriers. We will also report on notable best practices in credit agencies as appropriate. 9. We do not dispute that OMB staff devote a substantial amount of time to credit issues. 
However, our conclusion that greater sustained commitment is needed reflects our concerns about the availability and reliability of credit data despite the fact that agencies now have prepared eight budgets since credit reform became effective for fiscal year 1992. Further, component data were not used to inform program management or budget decision-making. 10. OMB’s attachment containing specific comments has not been included, but our report has been modified as appropriate to reflect the comments contained in the attachment. Comments From the Department of Agriculture The following are GAO’s comments on the Department of Agriculture’s March 11, 1998, letter. GAO Comments 1. Report text was revised to reflect the agency’s comment. Comments From the Department of Education The following are GAO’s comments on the Department of Education’s March 5, 1998, letter. GAO Comments 1. See “Agency Comments and Our Evaluation” section of the report. 2. The report text was clarified to reflect Education’s comment. 3. Education provided all total subsidy rate estimates we requested but did not provide subsidy components for each of these rates. Moreover, although Education explained why the requested documentation—OMB model output—would not be helpful, it did not provide alternative supporting documentation. Comments From the Department of Housing and Urban Development The following are GAO’s comments on the Department of Housing and Urban Development’s March 4, 1998, letter. GAO Comments 1. An explanation of agencies’ reported need for waivers has been added to the report. 2. The report was clarified to specify that agencies can improve their ability to forecast defaults, recoveries, prepayments, and fee revenue through better modeling and more and better historical data. 3. Our report acknowledges the implementation difficulties involved with credit reform. 
However, once agencies establish a systematic approach to subsidy estimation based on auditable assumptions, present-value-based budgeting for credit will provide significantly better information than the former cash-based system. 4. Our statement that causes for changes in estimates cannot readily be identified was based on the fact that much of the component data that would point to causes for such changes was missing, inaccurate, or inconsistent with other data reported by agencies. 5. We clarified the report to indicate that the patterns were similar and that the data were inconclusive. As the report states, any firm conclusion about the reasons for changes in reestimates would require better data and more in-depth study. 6. HUD’s attachment containing technical comments/corrections has not been included, but our report has been modified as appropriate to reflect the comments contained in the attachment. Comments From the Small Business Administration The following are GAO’s comments on the Small Business Administration’s March 5, 1998, letter. GAO Comments 1. See “Agency Comments and Our Evaluation” section of the report. The report text also was clarified to reflect SBA’s position and our evaluation. 2. See “Agency Comments and Our Evaluation” section of the report. 3. Data specific to individual agencies are shown in tables contained in appendix III. In other ongoing work, we are evaluating data reliability, barriers to credit reform implementation, and agency plans to overcome those barriers. We also will report on notable best practices in credit agencies as appropriate. 4. Throughout our report, we cite changes in economic conditions and/or interest rates as the first item in a list of possible causes of changes in subsidy rates, thus emphasizing their importance. 5. See “Agency Comments and Our Evaluation” section of the report. 6. See comment 3. The results of the February 1998 meetings will be considered in a future report on this work. 7. 
While a comparison of modeling methodologies used by the five credit agencies would be interesting, it is outside the scope of this report. 8. A review of the adequacy of accounting standards and budget guidance was outside the scope of this report. However, in other ongoing work, we are evaluating barriers to credit reform implementation and agency plans to overcome those barriers. The adequacy of accounting standards and budget guidance, if identified as barriers, would be considered in that work. Further, as we noted in our report, changes to accounting standards and budget guidance for credit programs are being considered. 9. A brief discussion of these changes is contained in appendix I. 10. Recognition that SBA reestimated the Disaster Loan Program for the fiscal year 1999 budget was added to the “Results in Brief” section. 11. The report text was revised as suggested. 12. Appendix III includes tables by agency and program that illustrate where component data were unavailable. 13. Our report does address oversight by OMB. The Subgroup on Credit Reform of the Government-wide Audited Financial Statements Task Force (now the Credit Task Force of the Accounting and Auditing Policy Committee) has been working on data collection, analysis, and documentation issues for several years and has proposed guidance. 14. Recent financial statement audit reports cited in our report text have raised questions about data reliability. Our report also describes discrepancies between data in the President’s Budget and data provided and confirmed by agencies. 15. We agree that poor document retention is one cause of differences between agency-confirmed rates and those reported in the President’s Budget. 16. The data provided by SBA do not permit us to prepare graphs similar to those for the other programs. We graphed estimated subsidy rates from two perspectives.
The first graph (for example, see figure II.12) showed the most recently estimated total subsidy rate for each year’s cohort of credit—the rates for all of the outstanding cohorts were calculated and based on historical and economic data updated as of the same point in time. Using the budget execution rates as suggested by SBA would not be a similar approach because the rates were calculated at different points in time. The second graph (for example, see figure II.13) showed the estimated and reestimated subsidy rates for a given cohort over time beginning with budget execution. It would not be possible to do this for the Disaster Loan Program since SBA provided only the initial rate graphed in this analysis. 17. Specific support exists on pp. 10 through 16. Comments From the Department of Veterans Affairs The following are GAO’s comments on the Department of Veterans Affairs’ March 11, 1998, letter. GAO Comments 1. We revised table 1 to show that VA had provided all requested rates and documentation for fiscal years 1992 and 1993. We made the change for fiscal year 1992 because VA stated that the cash flows showing only 5 of the 15 years of the credit maturity represented all of the output created by the OMB/VA model for that year. For fiscal year 1993, VA provided only one of the four quarterly execution cash flows requested. However, we discussed VA’s comments with VA staff who told us that they did not revise the cash flows for each of the four quarters of fiscal year 1993. 2. Report text was revised. 3. Account title was revised. Major Contributors to This Report Accounting and Information Management Division, Washington, D.C. Office of the General Counsel, Washington, D.C. Related GAO Products Veterans Affairs Computer Systems: Action Underway Yet Much Work Remains to Resolve Year 2000 Crisis (GAO/T-AIMD-97-174, September 25, 1997). Credit Reform: Review of OMB’s Credit Subsidy Model (GAO/AIMD-97-145, August 29, 1997). 
Credit Subsidy Estimates for the Sections 7(a) and 504 Business Loan Programs (GAO/T-RCED-97-197, July 15, 1997). Debt Collection: Improved Reporting Needed on Billions of Dollars in Delinquent Debt and Agency Collection Performance (GAO/AIMD-97-48, June 2, 1997). Credit Reform: Case-by-Case Assessment Advisable in Evaluating Coverage, Compliance (GAO/AIMD-94-57, July 28, 1994). Federal Credit Programs: Agencies Had Serious Problems Meeting Credit Reform Accounting Requirements (GAO/AFMD-93-17, January 6, 1993).
Pursuant to a congressional request, GAO reported on the government's measurement of subsidy costs for federal direct loans and loan guarantees, focusing on whether: (1) agencies completed estimates and reestimates of subsidy costs; (2) GAO could readily discern any trends including improvements in subsidy estimates; (3) GAO could readily identify the causes for changes in subsidy estimates; and (4) agencies with discretionary credit programs initially underestimated credit subsidy costs in response to the incentive created by the availability of permanent, indefinite budget authority for credit reestimates. GAO noted that: (1) after over 6 years of experience with credit reform, agencies continue to have problems in estimating the subsidy cost of credit programs; (2) the lack of timely reestimates as well as the frequent absence of documentation and reliable information limit the ability of agency management, the Office of Management and Budget (OMB), and Congress to exercise intended oversight; (3) GAO found problems with the availability and reliability of subsidy estimates, reestimates, and supporting documentation in its cross-cutting review of 10 programs for fiscal years (FY) 1992 through 1998; (4) in the audits of the FY 1996 financial statements, three of the five largest credit agencies received disclaimers or qualified opinions related to their credit programs; (5) auditors were unable to find support for agency data on such items as delinquencies and prepayments for loans receivable and liabilities for loan guarantees; (6) problems with the reliability and validity of the underlying credit data raise questions about the basis for the subsidy estimates included in the budget; (7) GAO would expect to find that, for a given cohort, the annual changes in reestimates due to technical factors would be smaller; (8) because component data were not available, GAO could not determine whether this occurred; (9) however, GAO did note that overall subsidy rates for 
a given cohort varied widely; (10) GAO observed a similar pattern of fluctuations in subsidy estimates at the program level; (11) subsidy rate estimates for any given program continued to fluctuate widely from year to year with no pattern or particular trend; (12) the intersection of credit reform and the Budget Enforcement Act of 1990--that is, the fact that original subsidy appropriations must compete under the discretionary caps while reestimates are outside them--may offer an incentive for agencies with discretionary programs to underestimate subsidy costs initially to permit more loans or loan guarantees within a given appropriation level; (13) however, available data were not sufficient to assess whether a credit program's budgetary treatment affected its initial subsidy estimates; (14) better information on factors underlying changes in subsidy rates is needed to identify and understand why these estimates change; (15) while OMB provides a user's guide and some training on the subsidy model, it has not provided agencies with clear definitions of each component or sufficient guidance on how to use the model; and (16) while no single agency is successful in all aspects of credit reform implementation, some progress is being made at each of the agencies studied.
Background The National Youth Anti-Drug Media Campaign As part of the Treasury and General Government Appropriation Act of 1998, the Drug Free Media Campaign Act of 1998 required, among other things, the Office of National Drug Control Policy to conduct a national media campaign for the purpose of reducing and preventing drug abuse among young people in the United States. The National Youth Anti-Drug Media Campaign may be the most visible federal effort devoted to preventing drug use among the nation’s youth. It aims to educate and enable America’s youth to reject illegal drugs; to prevent youth from initiating use of drugs, especially marijuana and inhalants; and to convince occasional users of these and other drugs to stop using drugs. Administered by ONDCP, and implemented in three phases, the campaign has as its centerpiece a paid advertising effort in which campaign funds were used to purchase media time and space for advertisements that delivered anti-drug messages to the campaign’s target audiences—youth aged 9 to 18 and their parents and adult caregivers—through strategic placement of anti-drug advertisements on television and radio and in print media. The campaign’s first two phases were pilot phases that had as their objectives developing advertising concepts, creating limited advertisements, testing public awareness of the advertisements in 12 metropolitan areas, and eventually extending the pilot program nationwide. Phase III of the campaign, which began in mid-1999, continued the nationwide advertising campaign begun during phase II and integrated the advertising with outreach efforts. In addition to the advertising, the fully integrated phase III campaign included community outreach, work with the entertainment and media industries to encourage the accurate depiction of the consequences of drug use, outreach to faith-based organizations, and work with youth organizations.
During phase III, ONDCP had overall responsibility for developing and implementing the campaign, and to do so, it enlisted the support of nonprofit organizations, trade associations, private businesses, and federal agencies. Appropriated media campaign funds were to be used to cover the costs of actually making the advertisements as well as the costs of planning and purchasing media time and space. The campaign also included public outreach and specialized communications efforts. The purpose of public outreach and communications was to extend the reach and influence of the campaign through nonadvertising forms of marketing communications. Examples of these nonadvertising forms of communication included submitting articles related to key campaign messages such as effective parenting or the effects of marijuana on teen health to newspapers and magazines; building partnerships and alliances, for example, coordinating positive activities for teens with local schools and community groups; creating Web sites and exploring alternative media approaches; and entertainment industry outreach. According to the campaign’s communications strategy, youth aged 9 to 18 were segmented into three school and age risk-level categories: late elementary school adolescents, aged 9 to 11; middle school children, aged 11 to 13; and high school youth, aged 14 to 18. The campaign originally targeted youth aged 9 to 18 with a focus on middle school age adolescents (roughly 11- to 13-year-olds); its secondary focus was on high school-aged youth (approximately 14 to 18 years of age). In 2001, the campaign shifted its creative focus to 11- to 14-year-olds in order to more effectively reach youth at the time they are most at risk for trying drugs. In 2002, the campaign altered its target age group to focus primarily on 14- to 16-year-olds. For all age groups, the communications strategy identified the primary focus of the campaign as at-risk nonusers and occasional users of drugs.
For all groups, it was designed to give consideration to differences arising from gender, race, ethnicity, and regional and population density factors. From fiscal year 1998 through fiscal year 2004, Congress appropriated $1.225 billion to support the campaign (table 1). For fiscal year 2007, the President’s budget requested $120 million for campaign activities. The 2007 request represents an increase of $21 million above the fiscal year 2006 budget authority. The additional resources were requested to help to purchase additional media time and space to increase the reach and frequency of the campaign’s messages. Planning and the Underlying Logic of the Campaign According to ONDCP, its planning for the campaign’s communications strategy included reviews of published studies on the etiology and prevention of adolescent drug use, drug prevention campaigns, other public health campaigns, and general consumer marketing campaigns targeting youth and their parents. ONDCP also supplemented its research evidence with an extensive expert consultation process that included input from over 200 experts in academia, civic and community organizations, government agencies, and the private sector. A campaign design expert panel that included experts in the fields of drug use and prevention, public health communication, advertising, market research, consumer marketing, and public policy met over a 4-day period during the fall of 1997 and played a key role in integrating diverse sources of information and guiding the development of the communications strategy for the campaign. The planning process resulted in a statement of ONDCP’s communications strategy for the campaign, which described the premises of the campaign. 
Among these were the following: First, that the media can influence people in a variety of ways, such as informing and alerting them to important developments and shaping subsequent actions; satisfying leisure time needs, thereby influencing individuals’ views and beliefs about the world; and stimulating interest in commercial goods and services, thereby influencing where and how people shop. Second, that media messages have more potential to reinforce than to alter existing attitudes and beliefs. Third, to the extent that youth attitudes, beliefs, and intentions toward drug use vary with their age, the potential of a media campaign to influence drug use may be directly related to the age of the youth. Fourth, the campaign had to be sustained over time and to have a significant media presence, and its central messages had to be repeated often and in a variety of ways. Citing research showing that attitudinal and behavioral change took time to occur, ONDCP reported that it expected to observe “improvements in anti-drug attitudes that would lead to decreases in youth drug use within three years” of the implementation of phase III of the campaign. Fifth, because parents and adult caregivers play a vital role in youth drug use behaviors, the campaign would also target parents, aiming to affect the nature of their interaction with their children and thereby strengthen their children’s capacity to resist using illicit drugs. The campaign focused on primary prevention—that is, preventing those who did not use drugs from starting to use drugs. According to ONDCP, a media campaign that focused on primary prevention targets the underlying causes of drug use and therefore has the greatest potential to reduce the scope of the problem over the long term.
Further, a primary prevention campaign also has greater potential to affirm and reinforce anti-drug attitudes of nonusers than to persuade experienced users to change their behaviors, and a primary prevention campaign would also, over time, lessen the need for drug treatment services. With a focus on young, non-drug-using adolescents, an expectation underlying the campaign’s potential success was that as these young, non-drug-using adolescents aged, the campaign’s messages would intervene, retard the development of more pro-drug attitudes, and enable adolescents to continue to maintain their preexisting anti-drug attitudes. By maintaining these attitudes, or preventing the development of pro-drug sentiments, the campaign would affect drug use rates by lowering the rate at which youth initiated drug use, particularly the use of marijuana or inhalants. The campaign was designed to have a significant and sustained media presence. During planning, ONDCP acknowledged that the campaign would have to be sustained for a period of time sufficient to bring about a measurable change in the beliefs and behaviors of youth in the target audience. On the basis of the experiences of successful social marketing campaigns, ONDCP reported that it expected that changes in awareness or recall of the campaign would be detectable within a few months of the start of the campaign, that changes in perceptions and attitudes would be detectable within 1 to 2 years of the start of the campaign, and that changes in behavior would be detectable within 2 to 3 years.
Campaign Activities during Phase III From mid-1999, the start of phase III, through June 2004, the end of the phase III evaluation, campaign activities included extensive media dissemination of campaign messages to a national audience of youth and parents; an interactive media component, which involved using content-based Web sites and Internet advertising; use of experienced individuals and organizations with expertise in marketing to teens, advertising and communications, behavior change, and drug prevention to inform the campaign strategy and implementation; multicultural initiatives aimed at ensuring sufficient exposure of campaign messages among African Americans, Asian Americans, Pacific Islanders, Hispanic Americans, American Indians, and Alaska Natives; and the implementation of the integrated social marketing and public health communications campaign through the creation of partnerships with civic, professional, and community groups and outreach to media, entertainment, and sports industries. Through the partner organizations, the campaign attempted to strengthen local anti-drug efforts, and through outreach, it encouraged the news media to run articles that conveyed campaign messages. Youth and parent exposure to campaign messages could come from the direct, paid and donated advertising or from content delivered by news media and entertainment industries through the outreach efforts. Exposure to anti-drug messages could also be enhanced through personal involvement with organizations that became partners as a result of campaign outreach or by interaction with the campaign’s Web site. Further, youth exposure to anti-drug messages could also occur through interactions with friends, peers, parents, or other adults that stemmed from either campaign ads or outreach efforts.
Campaign Themes and Messages Campaign messages for both youth and their parents and caregivers were to focus on common transitions—such as the transition from elementary to secondary school—and common situations—such as the amount of time spent in settings without adult supervision—that were believed to heighten adolescents’ vulnerability to drug use initiation. In addition, messages were to focus on altering mediating variables—such as beliefs and intentions—that were known to have a significant impact on adolescent drug use. Finally, campaign messages were designed to create a “brand identity” in the minds of target audience members and, through that brand identity, to position campaign messages as credible and important. Throughout phase III, themes such as “The Anti-Drug” for parents and “My Anti-Drug” for youth were designed to promote identification and positive associations with the campaign’s messages. While they evolved throughout the campaign, the central strategic messages or themes for youth focused on resistance skills and self-efficacy to refuse drugs, normative education and positive messages, negative consequences of drug use, and early intervention. Resistance skills and self-efficacy advertisements were designed to enhance the personal and social skills of youth that promote lifestyle choices and to help build youth’s confidence that they could resist drugs. Normative education themes attempted to instill the belief that most young people do not use drugs or convey messages that “cool people don’t use drugs,” while positive message themes reinforced the idea of positive uses of time as alternatives to illicit drugs. Negative consequences themes aimed to enhance youth perceptions that drug use is likely to lead to a variety of negatively valued consequences, such as loss of parental approval, reduced performance in school, and negative social, aspirational, and health effects.
Negative consequences themes were the primary focus of the Marijuana Initiative, which was introduced during 2002. An early intervention theme sought to motivate youth to intervene with friends who they perceived as having problems with drugs or alcohol and tried to convince youth of their ability to take action and to give them the tools and skills they needed to intervene. For parents, the campaign’s themes included messages that every child, including their own, was at risk of doing drugs; that they can learn parenting skills to help them help their children avoid drugs; that they need to be aware of the harmful effects of drugs including marijuana and inhalants; and, as part of the Early Intervention Initiative, that it was important that they intervene at the earliest possible opportunity in their child’s life if their child was using drugs or alcohol. Design of the Evaluation, Interim Evaluation Reports, and Redirection of the Campaign ONDCP recognized the need for a separate evaluation of the campaign and for ongoing reporting of evaluation results. The need for a separate evaluation stemmed in part from the limitations of existing national surveys that monitor drug use, such as Monitoring the Future, which provides data on drug use by high school students, the National Household Survey on Drug Abuse, and the Youth Risk Behavior Survey, which addresses health risk behaviors including drug use. These recurring surveys provide very little information with which to evaluate the impact of the campaign, because they were not designed to evaluate it. As ONDCP has written, these surveys contain no questions about target audience exposure and response to the campaign, and as a result, any changes in attitudes, beliefs, and behaviors toward drug use could not be associated directly with the campaign. 
By comparison, ONDCP acknowledged that it was using the Westat evaluation to assess the extent to which changes in anti-drug attitudes and beliefs or drug-using behavior could be attributed to the campaign, as opposed to other socioeconomic factors. In addition, ONDCP indicated that data from Westat’s evaluation would enable ONDCP to assess whether the campaign was working. The primary tool of the Westat evaluation was the National Survey of Parents and Youth (NSPY), a longitudinal panel study of children’s and their parents’ exposure and response to the campaign. The NSPY was designed to collect initial and follow-up data from nationally representative samples of youth aged 9 to 18 and from the parents of these youth. The sample was designed to represent youth living in homes in the United States and their parents. Data collection began in November 1999 and was conducted over four rounds—each about 1 year apart—in nine waves of interviews. An interview wave refers to the fielding of a survey round to a specific subsample in the NSPY. An interview round refers to the completion of interviews with the entire sample. Data for each of the nine waves were collected using a laptop computer and a combination of computer-assisted interview technologies. To collect sensitive data, audio computer-assisted self-interview technology was used, allowing respondents to self-administer the questionnaire in total privacy. The final wave of data collection was completed in June 2004 (fig. 1). Eligible youth and parents were to be interviewed four times. The evaluation aimed to assess whether exposure to the campaign affected the self-reported knowledge, attitudes, beliefs, and drug use of youth.
Because the campaign reached out to all youth nationwide, the evaluators could not assess its effects using experimental methods, in which some subjects are randomly assigned to the intervention and others are randomly assigned to control groups that were not exposed to the intervention. Westat’s evaluation was designed to take into account the variation in self-reported exposure to the campaign messages and to assess how this variation in exposure was correlated with outcomes that the campaign intended to affect. To attribute changes in drug use attitudes and behaviors to the campaign, the evaluation was designed to assess exposure to the campaign and to compare differences in outcomes for groups of persons that were exposed to varying levels of the campaign’s messages, and to use statistical controls to account for individual-level differences among survey respondents. Westat’s evaluation assessed youth self-reported drug use and intermediate outcomes—such as youth and parent attitudes and beliefs toward drug use and parental involvement with their children—that were believed to influence youth drug use. The evaluation of phase III addressed issues related to (1) whether the campaign was reaching its target populations, (2) whether the desired outcomes moved in favorable or unfavorable directions, (3) whether the campaign was influencing changes in the desired outcomes, and (4) what could be learned from the overall evaluation to support ongoing decision making for the campaign. 
These issues led to the five major objectives for the evaluation: to measure changes in drug-related knowledge, attitudes, beliefs, and behavior in youth and their parents; to assess the relationship between changes in drug-related knowledge, attitudes, beliefs, and behavior and self-reported measures of media exposure, including the salience of the measures; to assess the association between parents’ drug-related knowledge, attitudes, beliefs, and behavior and those of their children; to assess changes in the association between parents’ drug-related knowledge, attitudes, beliefs, and behavior and those of their children that may be related to the campaign; and to compare groups of people with high exposure to other groups with low exposure. Westat submitted semiannual and special topic reports to NIDA, as the findings from these interim evaluation reports were to be used to support ongoing decision making for the campaign. Westat submitted the first semiannual report in November 2000. By December 2003, Westat had submitted six additional reports: four were labeled as semiannual reports, and the other two were a special report on historical trends in drug use and a 2003 report of findings. Westat submitted the first draft of its final report to NIDA in February 2005. In addition to its evaluation of the relationship between exposure and outcomes, Westat prepared a report on the environmental context of the campaign. In May 2002, Westat reported findings from this qualitative study of views of representatives from major national organizations and state prevention coordinators about the messages conveyed by the campaign and the role of the campaign as an organizing partner in helping to bolster local substance abuse prevention efforts.
According to Westat, representatives felt that the campaign’s messages reinforced their own messages that encouraged youth to find healthy alternatives to drug use and to raise public awareness of the issue of illicit drugs among youth. Westat also reported that representatives were less enthusiastic about the role of the campaign as an organizational partner in helping with local substance abuse prevention efforts. In its fifth semiannual report, Westat stated: “There is little evidence of direct favorable Campaign effects on youth. There is no statistically significant decline in marijuana use to date, and some evidence for an increase in use from 2000 to 2001. Nor are there improvements in beliefs and attitudes about marijuana use between 2000 and the first half of 2002. Contrarily, there are some unfavorable trends in youth anti-marijuana beliefs. Also there is no tendency for those reporting more exposure to Campaign messages to hold more desirable beliefs.” Westat further reported that there were unfavorable delayed effects of campaign exposure on subsequent intentions to use marijuana and on other beliefs. By delayed effects, Westat meant that exposure to the campaign measured in one survey round had an effect on intentions or beliefs outcomes at a subsequent survey round. For parents, Westat reported that the evidence was consistent with favorable campaign effects, as it found that there were favorable changes for three of five parents’ belief and behavior outcome measures. However, Westat also reported that it found no evidence for favorable indirect effects on youth behavior as the result of their parents’ exposure to the campaign. Congressional appropriators expressed concerns about the findings of Westat’s fifth semiannual report.
In the conference report for fiscal year 2003 omnibus appropriations, the conferees reported that they were “deeply disturbed by the lack of evidence that the National Youth Anti-Drug Media Campaign has had any appreciable impact on youth drug use.” The conferees further acknowledged that while the evaluation conducted under NIDA’s auspices showed “slight and sporadic impact on the attitudes of parents, it has had no significant impact on youth behavior.” The conferees also noted that while other surveys of youth drug use—such as Monitoring the Future, a survey of high school youth—showed recent declines in drug use, “the NIDA study was undertaken to measure the specific impact of the Media Campaign, not simply to gauge general trends,” and the conferees stated that they “intend to rely on the scientifically rigorous NIDA study to gauge the ultimate impact of the campaign” and to reevaluate the use of taxpayer money to support the campaign if the campaign continued to fail to demonstrate its effectiveness. In 2002, the strategy for the campaign was redirected. In the spring, the campaign shifted its target age group from 11- to 13-year-olds to 14- to 16-year-olds—youth who have higher rates of marijuana initiation than younger youth. The shift aimed to allow the campaign to reach youth more effectively during the time when they are most at risk of trying drugs. ONDCP also required more rigorous copy testing of all television advertisements before they were aired, and ONDCP increased its oversight in guiding the development and production of advertisements. In October 2002, ONDCP launched a new initiative called the Marijuana Initiative, which featured more focused advertising to address youth marijuana use.
In a hearing before the House Committee on Government Reform, ONDCP announced that it would reverse the ratio of campaign advertising expenditures directed to adults and youth, respectively. Previously, about 60 percent of expenditures were directed to adults and 40 percent toward youth. Finally, in February 2004, ONDCP expanded the campaign’s communications goals to include the Early Intervention Initiative. The initiative targeted both parents and teens’ friends, and ONDCP intended to use parental and peer pressure to stop drug and alcohol use among teens. Assessment of the Campaign by the Office of Management and Budget and ONDCP’s Current Approach To strengthen the linkages between resources and performance envisioned in the Government Performance and Results Act of 1993 (GPRA), the Office of Management and Budget (OMB) developed the Program Assessment Rating Tool (PART) to bring performance information into the executive budget formulation process. PART is designed to determine the strengths and weaknesses of federal programs by drawing upon available program performance and evaluation data so that the federal government can achieve better results. The PART therefore looks at factors that affect and reflect program performance, including program purpose and design; performance measurement, evaluations, and strategic planning; program management; and program results. Because the PART includes a consistent series of analytical questions, it allows programs to show improvements over time and allows comparisons between similar programs. OMB’s PART rating of the campaign addressed issues related to its purpose and design, strategic planning, program management, and program results and accountability. OMB indicated that the purpose was clear—giving ONDCP a 100 percent score on this factor—and it rated the campaign’s planning and management with scores of 67 percent and 70 percent, respectively.
In its assessment of ONDCP’s strategic planning, OMB noted that in response to its 2002 PART review, ONDCP revised the campaign’s logic model and significantly changed its long-term and annual performance measures. However, OMB’s assessment rating for the campaign was “results not demonstrated.” OMB indicated that its assessment of the campaign’s progress toward both the long-term goals and annual performance goals will be reviewed against the results of the NIDA-managed evaluation. OMB noted that while there is no federal program closely comparable to the campaign, evaluations of other health behavior change efforts found short-term effects after exposure to media. While acknowledging that a final assessment of the effects of the campaign awaited the final report from the NIDA-managed evaluation, OMB also indicated that “outcome data from the evaluation suggest little or no direct positive effect on youth behavior and attitudes attributable to the campaign to date. Perhaps some positive effect on parental attitudes/behavior but that has not yet translated into an effect on youth.” ONDCP has credited the campaign, along with a variety of collective prevention efforts, with contributing to “significant success in reducing teen drug use, as evidenced by the 19 percent decline from 2001 to 2005.” It has introduced a new youth brand approach to connect youth with aspirational themes. ONDCP also has indicated that, while it awaits our formal assessment of the evaluation, it will use existing national surveys to evaluate the campaign and will suspend its request for proposals for a new evaluation contract. Specifically, ONDCP indicated that it would use the Monitoring the Future (MTF) survey to track improvements in perception of the risk of drug use—a predictor of lower drug use by youth—and a special analysis of data on anti-drug messages from the Partnership for a Drug-Free America’s Attitude Tracking Survey (PATS).
According to the 2005 data from MTF, there were no significant 1-year declines in marijuana use for youth in any grade levels, and while gradual declines in the upper grades continued, declines halted for youth in the 8th grade. Additionally, for 8th graders, perceived risk of marijuana use held steady, while for youth in 10th and 12th grade, there was an increase in perceived risk of marijuana use. Recent Research on the Effects of the Campaign in Local Settings Two recently released studies have reported that exposure to the campaign was associated with changes in past-month marijuana use under certain conditions for certain groups of students exposed to the campaign. In one of the studies, 45 South Dakota high schools and their middle-school feeder(s) were randomly assigned to three groups: (1) a basic prevention curriculum, (2) a group given this curriculum with booster lessons, and (3) a control group. All schools were exposed to the campaign during the fall of 1999 and spring of 2000. This permitted the researchers to test for a synergistic effect between exposure to the campaign’s anti-drug messages and participation in the school-based drug prevention curriculum. The approximately 4,100 youth in the sample were asked how often they had seen anti-drug advertisements in recent months in five media outlets that were used by the campaign, and the researchers constructed a measure of campaign exposure indicating whether or not the adolescents reported seeing ads at least one to three times per week in any of the five media outlets. Consistent with Westat’s fifth interim report, the evaluation of the South Dakota drug prevention curriculum found no direct effects of exposure to the campaign on its sample of adolescents’ use of illegal drugs. However, the evaluation also found that marijuana use in the past month was significantly less likely among adolescents who received both the curriculum with booster lessons and weekly exposure to the campaign’s messages.
In other words, neither the enhanced curriculum nor the campaign alone had a substantial effect on marijuana use in the absence of the other. In addition, this evaluation’s measure of exposure was based on weekly exposure, suggesting that the synergistic effect of the campaign observed in these South Dakota schools was based on the delivery of repeated messages. The second study used monthly random samples of 100 youth from the enrollment lists of 4th to 8th graders in the public schools in the spring of 1999 in two moderate-sized communities—Fayette County (Lexington), Kentucky, and Knox County (Knoxville), Tennessee—over 48 months from April 1, 1999, through March 31, 2003. The study period included advertisements aired under the campaign’s Marijuana Initiative. Students in the samples aged over time and were 13 to 17 years of age at the beginning of the Marijuana Initiative. Youth in the samples were measured on marijuana use during the past 30 days, as well as on their attitudes toward marijuana. Exposure to television and radio advertisements was measured by self-reported past-month exposure. The study found that among high-sensation-seeking youth—that is, youth who desire novel, complex, and intense sensations and experiences and who are willing to take social risks to obtain them—exposure to the first 6 months of the campaign’s Marijuana Initiative led to reductions in marijuana use. The study’s authors reasoned that the effects that they found for the Marijuana Initiative were consistent with an approach termed SENTAR (for sensation-seeking targeting), in which high-sensation-seeking youth are targeted with high sensation value messages to prevent risky behaviors. 
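The synergy finding described above is, in statistical terms, an interaction effect: neither intervention shifts the outcome alone, but the combination does. A minimal sketch of the difference-in-differences contrast behind such a finding, using invented cell rates rather than the study’s data:

```python
# Hypothetical past-month marijuana use rates by (curriculum, exposure) cell.
# These numbers are invented to mimic the reported pattern, in which only the
# combination of enhanced curriculum and weekly campaign exposure lowers use.
rates = {
    (0, 0): 0.16,  # no enhanced curriculum, low campaign exposure
    (0, 1): 0.16,  # weekly campaign exposure only
    (1, 0): 0.16,  # enhanced curriculum only
    (1, 1): 0.11,  # both together
}

# Main effects: each intervention alone, relative to neither.
curriculum_effect = rates[(1, 0)] - rates[(0, 0)]
exposure_effect = rates[(0, 1)] - rates[(0, 0)]

# Interaction contrast (difference-in-differences): the additional change
# from combining the interventions beyond what each contributes separately.
interaction = (rates[(1, 1)] - rates[(0, 1)]) - (rates[(1, 0)] - rates[(0, 0)])

print(curriculum_effect, exposure_effect, round(interaction, 2))
```

In an actual evaluation, each contrast would be tested for statistical significance with the school-level clustered design taken into account; the sketch only shows the arithmetic structure of the synergy claim.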
Westat’s Evaluation Design, Use of Generally Accepted and Appropriate Sampling and Analytic Techniques, and Reliable Methods for Measuring Campaign Exposure Produced Credible Evidence to Support Its Findings Westat was able to produce credible evidence to support its findings about the relationship between exposure to campaign advertisements and both drug use and intermediate outcomes by employing a longitudinal panel design—i.e., collecting multiple observations on the same persons over time—using generally accepted and appropriate sampling and analytic techniques and establishing reliable and sufficiently powerful measures of campaign exposure. Westat encountered various challenges and threats to validity that are commonly associated with large-scale longitudinal studies, including lack of an opportunity to use experimental methods, lack of baseline data, and changes in campaign focus that were not timed with data collection; issues with ensuring adequate sample coverage and controlling for sample attrition over time; establishing measures that were sufficient to detect and reliably measure campaign effects; and disentangling causal effects without being able to employ an experimental design where subjects would have been randomly assigned to different levels of exposure. Our review of Westat’s evaluation report and associated documentation leads us to conclude that the design and methodology used in its evaluation responded appropriately to these challenges, resulting in credible findings. Although Elements of the Campaign Limited Choices of Evaluation Designs and Affected Data Collection, Westat’s Design Was Rigorous and Provided a Means to Test for Campaign Effects The nationwide scope of the campaign precluded Westat from using experimental methods or obtaining baseline data, and the timing of the introduction of some new campaign initiatives limited some of the data available to evaluate them. 
However, Westat’s longitudinal panel survey design provided a framework for developing strong evidence of within- respondent changes in outcomes over time as a result of exposure to the campaign. The consensus of a scientific panel convened by NIDA in August 2002 to review the evaluation was that Westat’s use of a national probability sample to study change arising from the campaign was preferable as the “gold standard” to a study based on other alternatives, such as in-depth community-based studies of the mechanisms of change and campaign effects. Additionally, the theoretical underpinnings of behavioral change through advertising, along with statistically significant outcomes in some but not all groups, suggest that the absence of baseline data and introduction of new campaign initiatives did not invalidate the evaluation’s findings. Finally, despite the introduction of new campaign initiatives that were not timed with data collection cycles, Westat was able to assess change in the NSPY data and generate statistically significant findings using these data. Westat’s longitudinal panel design was based on the premise that effects of exposure to the campaign on outcomes could be measured and detected within individuals over time, after controlling for various other factors that could have influenced outcomes. The design called for measuring the same respondents up to four times to assess how the natural variation in exposure to the campaign correlated with campaign outcomes. Westat’s approach—an exposure (or dose)-response model—is based upon a premise that respondents’ recall of advertisements (exposure or dose) is related to outcomes (response). In two recent studies of the effects of the campaign on specific groups of youth in local areas, an exposure-response approach has been shown to be an effective method for detecting effects of the campaign in reducing youth drug use in local settings. 
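The exposure-response premise, together with the use of background questions as statistical controls, can be illustrated with a deliberately simplified sketch. The data are invented, and stratification stands in for the evaluation’s actual multivariate models:

```python
# Each record: (high_exposure, high_risk_background, anti_drug_belief_score).
# Invented data: higher-risk youth hold weaker anti-drug beliefs regardless
# of exposure, so a naive pooled comparison would confound background with
# exposure.
records = [
    (0, 0, 3.0), (0, 0, 3.2), (1, 0, 3.4), (1, 0, 3.6),
    (0, 1, 2.0), (0, 1, 2.2), (1, 1, 2.4), (1, 1, 2.6),
]

def mean(values):
    return sum(values) / len(values)

# Control for background by comparing exposure groups within each stratum,
# then averaging the within-stratum differences (a crude stand-in for
# regression adjustment on respondent attributes).
diffs = []
for stratum in (0, 1):
    exposed = [y for e, s, y in records if s == stratum and e == 1]
    unexposed = [y for e, s, y in records if s == stratum and e == 0]
    diffs.append(mean(exposed) - mean(unexposed))
adjusted_effect = mean(diffs)
print(round(adjusted_effect, 2))
```

Westat’s models used many covariates and a panel structure rather than a single stratifier; the point is only that the exposure-outcome association is estimated after holding respondent differences constant.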
One of the studies reported a synergistic effect of exposure to the campaign and a classroom-based drug prevention curriculum among 9th grade students in 45 South Dakota high schools. The other study reported reductions in drug use during the period of the redirected campaign among high-sensation-seeking youth in schools in Knoxville, Tennessee, and Lexington, Kentucky. To assess the possibility of preexisting differences between groups of exposed youth and parents that might explain both the variation in exposure to the campaign and variation in outcomes, Westat included in the NSPY structured interview many questions on personal and family history, and it used the responses to control statistically for differences in attributes of respondents in order to attempt to isolate the relationship between exposure to the campaign and outcomes. The absence of baseline data—that is, precampaign data on outcomes—was beyond Westat’s control, as phase III of the campaign began before the first wave of data collection for the phase III evaluation began. The lag between the start of phase III of the campaign in mid-1999 and the completion of the evaluation’s first round of data collection—around mid-2001—leaves open the possibility that there were effects of the campaign that occurred very early on, prior to when Westat began data collection. Several factors suggest that the absence of pre-phase III baseline data was not fatal to the evaluation’s findings. First, if there were effects of the campaign that could not be detected because of the absence of pre-phase III baseline data, those effects must have occurred very rapidly and then endured throughout the remainder of the campaign, from 1999 through 2004. However, rapid changes in youth drug use were not observed in MTF data; rather, the overall trend in MTF past year drug use was flat between 1998 and 1999. Second, rapidly occurring effects were not expected by ONDCP in designing the campaign.
As we reported in 2000, and as ONDCP wrote in 2001, ONDCP believed that it would take 2 to 3 years for changes in drug use to be evident as a result of the campaign. Another campaign design factor that affected Westat’s evaluation was the implementation of new campaign initiatives, such as the Marijuana Initiative, which were implemented at times that officials at ONDCP considered to be important, and therefore they may not have coincided with planned data collection for the evaluation, nor should they necessarily have done so. For example, the Marijuana Initiative was implemented in October 2002, and the NSPY data available to evaluate outcomes during it were limited to three complete survey waves. For its longitudinal analysis of change during the Marijuana Initiative, Westat was limited to data from two survey waves. Despite these limitations, the evaluation produced data that enabled Westat to detect effects during the period of the Marijuana Initiative. Sample Coverage Issues Did Not Invalidate Westat’s Assessment of the Effectiveness of Exposure to the Campaign on Intermediate and Drug Use Outcomes During the enrollment phase of the NSPY, Westat experienced sample coverage problems, in that it enrolled—or rostered—a smaller percentage of households with youth in the targeted age range than would be expected based on comparable Current Population Survey (CPS) estimates—the data that Westat used to develop its expectations about the percentage of households having youth in the targeted age ranges. Coverage refers to the extent to which a sample is representative of the intended population on specified characteristics, and it is important because the omission of segments of the intended population from a sample—or undercoverage—can lead to biased results, in that omitted segments may differ in some important respect from those segments included. 
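The bias mechanism just described, in which omitted segments differ from included ones, reduces to a simple mixture calculation. The figures below are hypothetical illustrations, not NSPY or CPS estimates:

```python
# If a survey misses part of the eligible population, its estimate reflects
# only the covered segment; the true value is a coverage-weighted mixture.
coverage = 0.70      # hypothetical share of eligible households rostered
p_covered = 0.08     # hypothetical outcome rate among covered youth
p_omitted = 0.12     # hypothetical rate among omitted youth

true_rate = coverage * p_covered + (1 - coverage) * p_omitted
bias = p_covered - true_rate  # survey estimate minus true value

# Equivalently, bias = -(1 - coverage) * (p_omitted - p_covered): there is
# no bias if the omitted and covered segments have the same rate, however
# large the coverage shortfall.
print(round(true_rate, 3), round(bias, 3))
```

This is why the comparisons to CPS and to other drug use surveys discussed in this section matter: they probe whether the omitted segment plausibly differed from the covered one.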
Westat estimated the extent of undercoverage in the NSPY to be about 30 percent as compared to the CPS estimates, and according to Westat and NIDA, the undercoverage arose during the stage of sampling in which Westat was developing rosters of households that were believed to contain youth in the target age range. At this stage, the survey rostering process required entry into the household, which may have led respondents in potentially eligible households to refuse to participate. Our review of Westat’s documentation leads us to conclude that there was no evidence of biased results due to undercoverage and that the sample was sufficiently reliable both for the purposes of estimating changes over time in individual outcomes and for assessing the effectiveness of exposure to the campaign on outcomes. Westat’s comparisons of the estimated population characteristics of the NSPY—such as race and ethnicity of head of household and race and ethnicity of youth in households—with the estimated population characteristics from the CPS show that they are generally similar. That is, the distributions of characteristics of eligible households with youth included in the NSPY were broadly consistent with a variety of corresponding distributions from the 1999 CPS. These comparisons suggest that the NSPY estimated population by race and ethnicity was similar to that of the CPS. Westat also used multivariate modeling techniques to develop nonresponse weighting adjustments for its sample, and these adjustments were reasonably effective in reducing nonresponse bias. An additional test for bias in a sample is to compare estimates derived from it with estimates on the same variable derived from another sample. If the NSPY results were biased, then one would expect that estimates derived from it would differ from estimates derived from unbiased samples.
For example, if eligible households refused to participate in the NSPY because they contained teens with drug issues and as a result avoided participation at a higher rate than did households containing teens without drug issues, then these higher refusal rates by households containing teens with drug issues would lead to NSPY estimates of the percentage of youth reporting that they used drugs that were lower than those obtained from other, comparable national surveys. According to data provided by NIDA officials and our review of Westat’s final report, estimated self-reported drug use rates from the NSPY are comparable to estimates derived from other major surveys of drug use, such as the National Survey on Drug Use and Health. For example, in the NSPY, rates of past-month marijuana use among 12- to 18-year-olds were 7.2 percent in 2000, 8 percent in 2001, 8.9 percent in 2002, and 7.9 percent in 2003. These rates were similar to those reported for 12- to 17-year-olds in the National Survey on Drug Use and Health (NSDUH) of 7.2 percent in 2000, 8 percent in 2001, 8.2 percent in 2002, and 7.9 percent in 2003. If youth with known drug use problems consistently opted out of both the NSPY and the NSDUH—a hypothesis that is not testable with the available data—then the estimates from both the NSPY and the NSDUH of the true prevalence of youth drug use would be biased underestimates. Sample Attrition across NSPY Interview Rounds Was Sufficiently Low to Allow for Reliable Assessments of the Effect of Campaign Exposure on Outcomes As the NSPY was a longitudinal survey—in which eligible sample respondents were re-interviewed up to three times after their enrollment interviews—attrition was a concern with which Westat had to contend. If comparatively large numbers of sample respondents were not retained across successive rounds of the survey, the capacity of the NSPY to provide data to assess changes in outcomes in response to exposure over time would be greatly diminished. 
Further, if attrition was specific to certain groups, then the NSPY estimates would also be biased. For the purpose of estimating within-respondent changes in outcomes in response to changes in exposure across sample periods—the main use of the NSPY data—Westat achieved follow-up longitudinal response rates of between 82 percent and 94 percent for waves 4 through 9, the follow-up waves to the first three enrollment waves. The longitudinal response rate consists of two elements: (1) the percentage of prior survey respondents that are tracked and for whom eligibility is determined and (2) the percentage of those eligible that actually complete an interview. Across the three follow-up survey rounds, Westat tracked and determined the eligibility to participate in a follow-up survey of between 92 percent and 96 percent of the youth and parents who completed a survey in the prior round. Of these, Westat obtained consent and completed extended interviews with between 94 percent and 96 percent of youth and parents for whom eligibility for a follow-up survey had been determined. In our view, Westat’s follow-up response rates resulted in a sample that was sufficient to provide reliable findings about the effects of exposure on outcomes. In addition, Westat’s nonresponse adjustment methodology compensated for differential response rates across age groups, races and ethnicities, homeowners versus renters, U.S. citizens versus noncitizens, and persons with incomes below the poverty level. The NSPY Data Could Be Used to Detect Reasonably Small Effects, and Westat’s Measures of Exposure and Outcomes Were Valid and Could Detect Effects, if They Occurred The NSPY sample could be used to detect changes in outcomes that were on the order of magnitude of changes expected by ONDCP for the campaign, and its measures of exposure were valid and reliably predicted outcomes. 
In early meetings on the design of the evaluation of the media campaign, ONDCP officials reported that the office had a specific Performance Measures of Effectiveness system and that the campaign was embodied within the first goal of the National Drug Strategy, which was to “educate and enable America’s youth to reject illegal drugs as well as the use of alcohol and tobacco.” Under this goal, ONDCP proposed targets for reducing the prevalence of past-month use of illicit drugs and alcohol among youth from a 1996 base year: by 2002, reduce this prevalence by 20 percent, and by 2007, reduce it by 50 percent. ONDCP officials identified other specific targets, again from the 1996 base year: by 2002, increase to 80 percent the proportion of youth who perceive that regular use of illicit drugs, alcohol, and tobacco is harmful; and by 2002, increase to 95 percent the proportion of youth who disapprove of illicit drug, alcohol, and tobacco use. Achieving the goal of 80 percent of 12th-grade youth perceiving regular marijuana use as harmful would require increasing the 1996 baseline of 60 percent, as measured by MTF, by about 3.3 percentage points per year from 1996 to 2002. Westat’s sample could be used to detect this amount of annual change in youth attitudes. In order to detect changes in outcomes due to exposure to the campaign, it also was necessary that Westat accurately measure and characterize exposure to the campaign. Westat provided evidence that its measure of self-reported exposure was both valid and reliable. To measure exposure to the campaign for both youth and parents, NSPY interviewers asked respondents about their recall of anti-drug advertisements (general exposure) and their recognition of specific current or very recent television and radio advertisements (specific exposure). 
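The annual change implied by ONDCP's 12th-grade perception target described above is simple arithmetic; the following sketch reproduces the figures stated in the text (a 1996 MTF baseline of 60 percent and a 2002 target of 80 percent):

```python
# Arithmetic behind ONDCP's 12th-grade perception target, as described above:
# raise the share perceiving regular marijuana use as harmful from the 1996
# MTF baseline of 60 percent to the 80 percent target by 2002.
baseline_pct = 60.0   # 1996 MTF baseline
target_pct = 80.0     # ONDCP's 2002 target
years = 2002 - 1996

points_per_year = (target_pct - baseline_pct) / years
print(round(points_per_year, 1))   # about 3.3 percentage points per year
```

This is the required rate of change that Westat's sample had to be able to detect.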
To facilitate measures of recall, respondents viewed television advertisements and listened to radio advertisements on laptop computers. Youth and parents were shown or played only advertisements targeted to their respective groups. In addition, both youth and parents were asked some general questions about their recall of advertisements seen or heard in various media, including television, radio, newspapers, magazines, movie theaters, billboards, and the Internet. Westat’s assessments of the validity of its measure of exposure to campaign advertisements confirm that the NSPY data were able to measure exposure. First, Westat examined respondents’ recall of campaign advertisements using “ringer” television advertisements—advertisements that had never actually aired. According to Westat’s analysis of ringer advertisements, youth were more likely to recognize an advertisement as a campaign advertisement when presented with an actual campaign advertisement than with a bogus one. For example, a far lower percentage of respondents (11 percent) claimed to have seen a ringer, or bogus, advertisement than claimed to have seen the broadcast advertisements (45 percent), particularly the advertisements that were delivered with high frequency. The result held for both youth and parents. Second, comparing data on advertisement time purchases with self-reported exposure to these advertisements in the NSPY, Westat found a high correlation between advertising and exposure. Specifically, on the basis of analysis of individual advertisements’ gross rating points (GRP)—a measure of the underlying reach and frequency of each advertisement—and self-reported exposures by respondents, Westat found a high correlation between GRPs purchased by the campaign and self-reported exposure to advertisements among youth. The correlation for parents was somewhat smaller, but was also significant. 
Third, Westat also compared self-reported exposure with recall of the correct brand phrase and found a strong association between self-reported exposure and correct recognition of the brand phrase. This is further evidence for the validity of its measures of self-reported exposure. Westat measured a variety of outcomes for youth and parents and took steps to ensure that the measures were consistent with existing research. The youth questionnaires included numerous questions that were designed to measure exposure to the campaign advertisements and other anti-drug messages. The youth question areas included exposure propensity to media; current and past use of tobacco, alcohol, marijuana, inhalants, and Ecstasy; past discussions with and communication of anti-drug messages from parents and friends; expectations of others about respondent’s drug use; knowledge and beliefs about the positive and negative consequences of drug use; exposure to campaign messages; family and peer factors; personal factors; and demographic information. Westat used separate questionnaires for youth of different ages; one questionnaire was used for children (aged 9 to 11) and another for teens (aged 12 to 18). Westat’s Analytic Methods Aimed to Isolate Causal Effects of the Campaign and Did So Using Sophisticated Techniques That Enhanced the Strength of Its Findings In its analysis, Westat used three types of evidence to draw inferences about the effects of the campaign: (1) trend data—data that describe increases or decreases in drug use and other outcomes over time; (2) cross-section analysis—measures of association between exposure to campaign messages and individual drug use beliefs, intentions, and behaviors at the time data were collected; and (3) longitudinal analysis—measures of association, for youth and parents who were observed at two points in time, between exposure to campaign messages at the earlier time and outcomes at the later time. 
Westat indicated that trends over time, by themselves, could not be used to provide definitive support for campaign effects. Rather, the trends needed to be supported by measures of association. Westat also indicated that measures of association, whether cross-sectional or longitudinal, needed to control for variables that could influence outcomes independently of the campaign or otherwise confound the association between exposure and outcomes. Cross-section association between exposure and outcomes measured at the same time would provide stronger evidence of campaign effects than would trend data alone, provided that controls for other variables were introduced into the associational analyses. However, even if cross-section associations between exposure and outcomes hold after controlling for the effects of other variables, as Westat pointed out, there may remain an alternative explanation for cross-section associations: For example, an outcome—like perceptions of others’ use of drugs—may be the cause of exposure rather than an effect of it. Westat’s longitudinal analysis attempts to address the ambiguities that exist with cross-sectional associations. With longitudinal data, if, after controlling for other confounding variables, exposure measured at an earlier time is associated with an outcome at a later time, the inference made is that the causal direction is from exposure to outcome, since an effect cannot precede a cause in time. As the campaign was implemented nationally and it was therefore not possible to assign youth and their parents randomly to treatment and control groups, a major threat to the validity of the conclusions from the evaluation is that the observed correlations between exposure to the campaign and self-reported attitudes and behaviors could reflect preexisting differences among individuals in their underlying susceptibility to campaign messages. 
The evaluation’s associations between exposure to the campaign and self-reported initiation of marijuana use took into account statistically the individual differences in attributes among youth who were exposed to various levels of campaign messages, and they adjusted for the influence of other variables that could determine marijuana initiation—called confounder variables. As such, Westat’s analyses of the associations between campaign exposure and marijuana initiation accounted for individual differences among youth and can be viewed as comparisons of outcomes for statistically similar individuals. Further, the statistical test Westat used in assessing the relationship between exposure and initiation did not rely upon assumptions of linearity between levels of exposure and initiation. Instead, it tested for an ordered relationship between exposure and an outcome such as marijuana use initiation. Westat used statistical methods to address the possibility that preexisting differences between individuals could have caused both reported levels of exposure and respondent outcomes, and its use of these methods contributed to the validity of its findings about the effects of the campaign on outcomes. If, independently of the campaign, individuals differed in their underlying tendencies to accept and recall campaign messages, and if the individuals who were more likely to recall advertisements also were those who were more likely to respond to advertisements, then, absent efforts to address this confounding factor, the findings of the evaluation would be questionable. This type of bias is often called a selection effect. If selection effects occurred in the campaign, then both exposure and reported changes in attitudes and behaviors could reflect underlying beliefs that were not affected by the campaign, despite the presence of statistical correlations between self-reported exposure and changes in attitudes and behaviors. 
To control for selection effects and the many factors that could have influenced both exposure and outcomes independently of, or in conjunction with, the campaign, Westat used propensity scoring methods. These methods limit the influence of preexisting differences among exposed groups by controlling for a wide range of possible confounding variables. Propensity score methods are used to create comparison groups that are similar on measured and potentially confounding variables but that differ on their levels of treatment. In the evaluation of the campaign, the comparison groups were similar on confounding variables but differed on their level of exposure to campaign messages. Propensity score methods replace a set of confounding variables with a single function of these variables, which is called the propensity score. In Westat’s analysis, an individual’s propensity score is considered to represent an individual’s probability of being assigned to a particular level of exposure to the campaign, conditional upon the individual’s values of the confounding variables. By including relevant, potentially confounding variables and matching individuals on their propensity scores, Westat was able to minimize bias due to selection effects. The comparison groups that Westat created by using propensity score methods can be considered as statistical analogues to randomly assigning individuals to different levels of exposure. After creating these groups, Westat then analyzed outcomes among the groups having different propensities to be exposed to campaign messages. Our assessment of Westat’s methods leads us to conclude that Westat took reasonable steps to develop valid propensity models, and as a result of its models, its analysis identified the effects of the campaign, net of other factors included in its propensity score models. 
First, rather than simply compare individuals who were exposed to campaign messages with those who were not exposed, Westat estimated and compared groups of individuals with different levels of exposure, where exposure was measured alternatively as a three- or four-level variable—e.g., low, medium, or high exposure. Second, for the results of propensity methods to be valid, it is important that the propensity scoring models include all relevant variables that could otherwise explain differences in both exposure and outcomes, as evaluators can adjust only for confounding variables that are observed and measured. If an important variable is omitted from the propensity model, the results of analyses may be affected. Westat’s models included many relevant and potentially confounding variables. For example, the youth propensity score models included measures of demographic attributes, educational attainment and educational aspiration, family and parent background, parental consumption of television and other media, income and employment, reading habits, Internet usage, and location of residence in an urban area, among other variables. Third, for propensity models to remove the effects of confounding variables from the association between exposure and response, it is necessary that the population means of the confounder variables not vary across exposure levels. If a confounder is successfully balanced, then it will have the same theoretical effect across all exposure levels. After estimating models, Westat also assessed and demonstrated the balance of variables in its propensity models. 
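The logic of propensity score adjustment described above can be illustrated with a minimal, self-contained sketch. This is simulated data, not Westat's model or variables: a single hypothetical confounder drives both "exposure" and the outcome, so a naive exposed-versus-unexposed comparison shows a spurious gap, while comparing within strata of the propensity score removes most of it.

```python
import random

# Illustrative sketch only (simulated data; not Westat's model or variables):
# a confounder drives both exposure and the outcome, so the naive comparison
# is biased. Stratifying on the propensity score -- P(exposure | confounder),
# known here because we simulate it -- removes most of the spurious gap.
random.seed(7)
n = 20000
data = []
for _ in range(n):
    x = random.random()                        # confounder (e.g., media use)
    p = 0.2 + 0.6 * x                          # true propensity score
    exposed = random.random() < p
    outcome = random.random() < 0.1 + 0.4 * x  # outcome depends on x only
    data.append((x, p, exposed, outcome))

def rate(rows):
    # proportion of rows with outcome == True
    return sum(1 for r in rows if r[3]) / len(rows)

# Naive comparison: confounded by x, so the gap is spuriously large.
naive_gap = rate([r for r in data if r[2]]) - rate([r for r in data if not r[2]])

# Propensity-stratified comparison: compare exposed vs. unexposed within
# five strata of the propensity score, then take a weighted average.
gaps, weights = [], []
for k in range(5):
    lo, hi = 0.2 + 0.12 * k, 0.2 + 0.12 * (k + 1)
    stratum = [r for r in data if lo <= r[1] < hi]
    e = [r for r in stratum if r[2]]
    u = [r for r in stratum if not r[2]]
    if e and u:
        gaps.append(rate(e) - rate(u))
        weights.append(len(stratum))
adjusted_gap = sum(g * w for g, w in zip(gaps, weights)) / sum(weights)

print(round(naive_gap, 3), round(adjusted_gap, 3))
```

In the simulation the outcome has no true dependence on exposure, so the near-zero adjusted gap, against the sizable naive gap, is the stratification working as intended; Westat's actual models estimated propensity from many measured confounders rather than knowing it by construction.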
The Phase III Evaluation Provided Mixed Evidence of the Campaign’s Effectiveness on Intermediate Outcomes, but It Found No Effect of the Campaign on Parental Monitoring of Youth Westat reported mixed evidence about the effectiveness of the campaign on intermediate outcome measures—such as recall and identification of campaign messages, youth anti-drug attitudes, and parents’ beliefs and behaviors—that were thought to be causal factors influencing youth drug use, the ultimate target of the campaign. Most parents and youth recalled exposure to campaign anti-drug messages, and for both groups, recall increased during the September 1999 to June 2004 period covered by the phase III evaluation. For current, non-drug-using youth—whose resistance to initiating marijuana use the campaign intended to affect—although NSPY data showed some favorable trends in anti-drug attitudes and beliefs and in the proportion of youth who said that they would definitely not try marijuana, there was no evidence that exposure to the campaign influenced these trends. Conversely, among current, non-drug-using youth, evidence suggested that exposure to the campaign had unfavorable effects on their anti-drug norms and perceptions of other youths’ use of marijuana—that is, greater exposure to the campaign was associated with weaker anti-drug norms and increases in the perceptions that others use marijuana. On three of five parent belief and behavior outcome measures—including talking with children about drugs, doing fun activities with children, and beliefs about talking with children—the evidence pointed to a favorable campaign effect on parents. 
However, while there was mixed evidence on the effect of the campaign on parents’ beliefs and attitudes about monitoring children’s behaviors, there was no evidence to support a claim that the campaign actually affected parents’ monitoring behaviors—an area of the campaign’s focus for parents—and there was little evidence for favorable indirect effects on youth behavior or beliefs as the result of parental exposure to the campaign. Youth and Parents’ Recall of Campaign Advertisements Increased over Time, Their Impressions of the Advertisements Were Favorable, and They Could Identify the Campaign Brand According to Westat, the campaign purchased enough advertising time over the 58-month period from September 1999 to June 2004 to achieve an average exposure of 2.5 youth-targeted ads per week for youth and an average of 2.2 parent-targeted advertisements per week for parents. Westat’s estimates include campaign advertisements intended for either all youth or all parents, but they do not include exposure of youth to parent advertisements or parents to youth advertisements, nor do they account for separate advertising targeted to specific race- or ethnicity-defined audiences. Using exposure indexes, Westat measured trends in general and specific exposure to campaign advertisements. The general exposure index was based on questions that asked about exposure to anti-drug messages in recent months through a variety of channels, including movies, television, radio, and billboards, and was not limited to campaign advertisements. The specific exposure index was based on recall of specific advertisements broadcast during the 60 days prior to the respondent’s interview, and was limited to advertisements that targeted the respondent. For example, for youth, only youth advertisements were sampled to measure specific exposure. Youth aged 12½ to 18 and their parents reported increasing levels of recall of specific but not general exposure to campaign advertisements over time. 
For both parents and youth, there was a sharp increase in recall of specific television advertisements over the course of the campaign. Westat speculated that the increase in specific recall may have arisen from better-placed, more memorable, or longer-aired advertisements rather than solely from an overall increase in television advertising. However, recall of general anti-drug advertising was fairly stable over time, as there was no overall detectable change in reported general exposure over the course of the campaign. Beginning in 2001, when the evaluation started to measure brand phrase recall, and continuing through 2004, the evidence indicates that youth, in particular, exhibited increases in brand phrase recall. Advertising campaigns may use a brand phrase to provide a recognizable element, and to the extent that the brand is recognized and positively regarded, its familiarity may lead to a positive response to a new advertisement or increase the perception that each advertisement is part of a larger campaign. The campaign included both a parent and a youth brand. Brand messages may have involved a series of phrases or the portrayal of an activity or lifestyle as positive (e.g., participating in team sports) to set up the brand phrase of “The Anti-Drug.” Westat reported that the evidence from the NSPY shows that the greater the exposure to media campaign advertising, the more likely respondents were to recall the brand phrase. In addition, the more that respondents recalled specific advertisements, the more likely they were to recognize the brand phrase, although over time even those with less exposure had learned the brand phrase. Overall, youth reported favorable impressions of the subset of campaign television advertisements that they were asked to evaluate, and their favorable impressions increased over time. 
Responses to the advertisements—whether they were attention-getting, convincing, or said something important to the respondent—were positive among both youth and their parents. Parents’ evaluations of the advertisements were generally more positive than those of youth, and parents’ positive views also increased over time. In addition to distributing messages directly in media advertisements, the campaign aimed to reach its target audiences indirectly through other institutions and routes, such as community groups, in-school and out-of-school anti-drug education, and discussions among youth and parents, and youth and friends, concerning drug use and the drug advertisements. The NSPY data indicated that the campaign’s messages were not accompanied by similar increases in exposure to messages from other sources. Both youth and parents reported receiving anti-drug messages from other sources, but they did not consistently report increases in exposure to messages from these sources. For example, from the 2000 to 2004 samples, the percentages of youth reporting receiving in-school drug education messages and attending out-of-school drug education both declined. Westat Found That the Campaign Generally Had No Effect on the Attitudes of Youth Not Using Marijuana toward Its Use but That Exposure to the Campaign Was Associated with Unfavorable Effects on Youth Perceptions of Others’ Use of Marijuana Westat generally found no significant effects of campaign exposure on the cognitive outcomes of adolescent nonusers of marijuana—i.e., development of anti-drug attitudes and beliefs. For current nonusers, the evaluation reported on four cognitive measures and a fifth measure of their perceptions of others’ use of marijuana. 
Three of the four measures—attitudes and beliefs about the consequences of marijuana use; perceived social norms or pressures from parents, friends, and peers about infrequent or regular marijuana use; and perceived self-efficacy to avoid using marijuana, or their confidence to turn down use of marijuana under various circumstances—were posited to affect the fourth—youth intentions to use marijuana at all during the next year. The fifth outcome, perceptions of other youths’ use of marijuana, was included to examine whether exposure to the campaign was leading to increased perception among youth that others use marijuana, and whether this perception, in turn, affected their own behaviors. Westat reported that the evidence from the analysis of trend data from 2000 to 2004 for two of the youth cognitive measures—attitudes and beliefs about the consequences of marijuana use and intentions to use marijuana—showed significant increases in youth believing that marijuana use had negative consequences and significant increases in the percentage of youth that reported that they had no intention to use marijuana. However, evidence from both cross-section and longitudinal associations between exposure and these two cognitive outcomes did not substantiate that the favorable trends arose from exposure to the campaign. Specifically, the cross-sectional associations between both general and specific exposure to the campaign and intentions not to use marijuana showed no significant favorable effects of exposure on this outcome. None of the cross-section associations between either general or specific exposure and intention to use marijuana were significant, and none of the longitudinal associations between specific exposure and intentions were significant. Two of the longitudinal associations between general exposure and intentions were significant, but the direction of the effect was unfavorable, in that greater exposure led to declines in intentions not to use marijuana. 
The evidence from the associational analyses between exposure and attitudes and beliefs about the consequences of marijuana use generally did not show an effect of the campaign. While there was one significant cross-section association between general exposure and attitudes and beliefs about consequences during the final two waves of survey data, there were no significant cross-section associations between specific exposure and attitudes and beliefs about consequences, nor were there any significant longitudinal associations with either general or specific exposure. The associational analysis also produced some evidence of unfavorable effects of exposure on social norms—i.e., social pressures from parents, peers, and other important persons about marijuana use. Westat’s cross-section associations showed no significant effects of exposure on social norms, but its longitudinal associations showed that across all survey rounds, there was a significant relationship between specific exposure and weaker social norms. Westat’s analysis of associations between exposure and perceptions of others’ use of marijuana also produced significant results. Cross-section associations between specific exposure and perceptions of others’ use were significant, as were longitudinal associations of this relationship. In other words, among youth who reportedly did not use marijuana at the time of their interview, there was a significant effect of specific exposure on the perception that others used marijuana, and the direction of the effect was unfavorable—that is, those reporting higher exposure to anti-drug ads were more likely to believe that their peers used marijuana regularly. A significant and unfavorable relationship between specific exposure and perceptions of others’ use of marijuana was obtained for the data covering the entire period of the evaluation as well as for the period of the redirected campaign, from 2002 to 2004. 
The Evaluation Reported Favorable Effects of the Campaign on Three Parent Outcomes but Not on Parental Monitoring A theme of the campaign was to encourage parents to engage with their children to protect them against the risk of drug use, and parent skills were a focus of parent advertising almost from the start of the campaign. The campaign encouraged parents to monitor their children’s behavior by knowing where they were and with whom, and to make sure that they had adult supervision. It also encouraged parents to talk with their children about drugs and, to a lesser degree, to engage in fun activities with their children. The evaluation observed five outcomes for parents, and for four of the five found significant and favorable effects of exposure to the campaign. For three outcomes—parent-child conversations about drugs (talking behavior), parents’ beliefs and attitudes about talking with their children about drugs (talking beliefs), and parents’ engagement with their children in in-home and out-of-home activities (fun activities)—both cross-section and longitudinal associations between exposure and outcomes were generally significant and favorable to the campaign. For parents’ beliefs and attitudes toward monitoring their children’s behaviors, Westat reported favorable trend and cross-sectional associations but no significant overall longitudinal effects of either general or specific exposure on this outcome. For the fifth outcome, parent monitoring behaviors—that is, parents’ knowing or having a pretty good idea about what their child was doing or planned to do—the evidence did not support a finding of an effect of the campaign. There were no significant favorable trends in parents’ reports of monitoring behaviors, and there were no significant cross-section or longitudinal associations of either general or specific exposure with monitoring behaviors. 
No Evidence of Favorable Effects of the Campaign on Youth Outcomes through Campaign Effects on Parental Outcomes Despite evidence of some favorable parental outcomes for the campaign, Westat found no significant evidence for the overall evaluation that these favorable parent outcomes affected youth attitudes and behaviors toward drug use. Specifically, for the entire period covered by the evaluation, Westat found no evidence of overall, indirect campaign effects on parents leading to changes in marijuana use, intentions to use marijuana, social norms, self-efficacy, or cognitions among youth who were not marijuana users. Westat found that there were some significant indirect effects of parental specific exposure on some youth outcomes for some subgroups. For example, parental specific exposure was favorably associated with intentions to use marijuana for 14- to 18-year-olds and for boys, and it was also associated favorably with attitudes and beliefs about the consequences of marijuana use for Hispanics. Westat also found significant but unfavorable indirect effects of parents’ general exposure on subgroups of youth in other youth outcomes. For example, parental general exposure was unfavorably associated with youth social norms for 14- to 16-year-olds and for girls. The Phase III Evaluation Found No Significant Effects of Exposure to the Campaign on Youth Drug Use Outcomes Other than Limited Unfavorable Effects on Marijuana Initiation Westat reported that the NSPY data showed some declines in self-reported lifetime and past-month use of marijuana by youth over the period from 2002 to 2004, and these trends in NSPY were consistent with trends in other national surveys of drug use over these years. Westat also reported that the NSPY data showed declining trends in youth reports of offers to use marijuana. 
However, Westat cautioned that because trends do not account for the relationship between campaign exposure and changes in self-reported drug use, drug use trends alone should not be taken as definitive evidence that the campaign was responsible for the declines. On the basis of the analysis of the relationship between exposure to campaign advertisements and youth self-reported drug use in the NSPY data—assessments that used statistical methods to adjust for individual differences and control for other factors that could explain changes in self-reported drug use—for the entire period covered by its evaluation, Westat found no significant effects of exposure to the campaign on initiation of marijuana by prior nonusing youth. The only significant effect indicated in Westat’s analysis of the relationship between campaign exposure and self-reported drug use was an unfavorable effect of exposure on marijuana initiation—that is, a relationship between campaign exposure and higher rates of initiation—for one round of NSPY data and similar unfavorable effects of campaign exposure on marijuana initiation among certain subgroups of the sample (e.g., 12½- to 13-year-olds and girls). Westat found no effects of campaign exposure on rates of quitting or use by prior users of marijuana. Westat Tracked Trends in Marijuana Use from Several Sources and Reported That the Trend Data by Themselves Were Insufficient to Demonstrate Effects of Exposure to the Campaign Westat tracked trends in self-reported use of marijuana by youth and trends in youth reports of offers to use marijuana for the period from 2000 to the first half of 2004 to determine if there were significant declines. Westat also assessed these trend data for changes occurring since 2002, or during the period of the redirected campaign. Westat’s trend analysis was designed to provide supportive but not definitive evidence for campaign effects. 
In its trend analysis, Westat compared trends in self-reported drug use—lifetime, past-year, and past-month—in the NSPY with trend data on self-reported drug use from three other nationally representative surveys of drug use—Monitoring the Future, the Youth Risk Behavior Surveillance System (YRBSS), and the National Survey on Drug Use and Health. Both MTF and YRBSS are school-based surveys, and NSDUH is a household survey that provides estimates of drug use by the civilian, noninstitutionalized population of the United States aged 12 years and older. Methodological differences between the school-based surveys—MTF and YRBSS—and the household surveys—NSPY and NSDUH—have been shown to account for some of the differences in estimates of marijuana use. According to Westat’s analysis, the surveys of self-reported marijuana use show some similarities and differences in trends depending upon the measure, age group, or subperiod covered within the longer 2000 to 2004 period. For example, the MTF data generally show declines in lifetime, past-year, and past-month self-reported drug use for 8th, 10th, and 12th graders over the years from 2000 to 2004, although only some of the year-to-year differences in the MTF self-reported drug use data were statistically significant. Nonetheless, for the subperiod from 2002 to 2004, MTF data show statistically significant declines in past-year and past-month use for 8th graders and past-year use for 10th graders, and the NSPY data also show statistically significant declines in past-month use from 2002 to 2004 for youth aged 12½ to 18 and for 14- to 18-year-olds. On the other hand, the MTF data suggest a decline in past-year and past-month use by 10th graders from 2000 to 2002, but the NSPY data suggest an increase in past-month marijuana use during this period. 
Further, the data from NSDUH for 2000 and 2001 also show statistically significant increases in lifetime, past-year, and past-month marijuana use among youth aged 12 to 17, statistically significant increases in lifetime and past-year marijuana use for youth aged 16 to 17, and a statistically significant increase in past-year use for youth aged 14 to 15. The pattern of increase in NSDUH data from 2000 to 2001 is consistent with the 2000 to 2002 increases in past-month use in NSPY, but it differs from the MTF trends over this period. All four surveys generally show declines in marijuana use beginning in 2002, but not all of the declines are statistically significant. Both MTF and NSPY show some statistically significant declines since 2002, and while NSDUH and YRBSS show declines, the declines were not statistically significant. These declines starting in 2002 coincide with the redirected campaign and the introduction of the Marijuana Initiative.

“They provide policy makers with broad indicators of the success of policy…However, they will not be able to answer the critical question of whether these changes were the result of the Media Campaign. These surveys do not ask respondents about their exposure and reactions to the messages of the Media Campaign that can then be linked to their drug-related attitudes and behavior.”

Westat Reported That Trends in Marijuana Offers Declined over Time, but Factors Other than the Campaign Contributed to Changes in Offers

Westat assessed trends in youth reports of receiving offers of marijuana—whether anyone had ever offered youth marijuana and the frequency of offers within the past 30 days. Marijuana offers are closely related to marijuana use, and the campaign aired messages that encouraged resistance to offers of marijuana.
Over the 2000 to 2004 period, Westat found significant increases in the percentage of youth reporting that they had never received offers, and it also found significant decreases in the percentage of youth reporting that they had received offers in the prior month. Westat also found significant changes in offers over 2002 to 2004, during the period of the redirected campaign, and these changes were generally consistent with the trends for the overall 2000 to 2004 period. Further, on the basis of longitudinal analysis of the relationship between offers in one period and marijuana use in the subsequent period among youth who were nonusers in an initial survey round—an analysis that assesses whether offers precede use or are simply a correlate of it—Westat found that youth who reported having received a marijuana offer at one period were much more likely—between three and seven times more likely, depending upon age group—to have initiated marijuana use at a following period than nonusing youth who reported never having received such an offer. However, as Westat reported, while the findings on offers are favorable, they cannot be ascribed to the campaign because they may be caused by other factors, as the analysis of the relationship between offers and use did not take into account other factors that could affect use.

On the Basis of Its Analysis of the Association between Exposure and Drug Use Outcomes, Westat Found No Evidence That Exposure to the Campaign Affected Initiation or Cessation of Marijuana Use

From its longitudinal analysis of associations between exposure and initiation of marijuana use, Westat found no evidence that increased exposure to the campaign reduced youth’s initiation of marijuana use. Westat’s longitudinal analysis assessed the effects of exposure at one survey wave on marijuana initiation at a subsequent survey wave, controlling for potential confounding variables that could affect the exposure-initiation relationship.
Westat assessed the effects of two types of exposure on initiation of marijuana use—general exposure and specific exposure. General exposure represents the sum of recalled exposure to anti-marijuana advertising in four types of sources of advertisements—television and radio, movies and videos, print media including newspapers and magazines, and outdoor media. Specific exposure represents the sum of recalled exposure to youth-targeted individual campaign television advertisements that had been aired in the 60 days prior to an interview. Westat found no significant effects of the level of general exposure on marijuana use initiation, either over the entire period of the campaign or between subperiods as defined by survey rounds. Westat also found no overall effects of levels of specific exposure on marijuana initiation during the entire period of the campaign, but it found one significant association between specific exposure and marijuana use initiation that occurred in the data from wave 7 and its wave 9 follow-up, or during the period of the Marijuana Initiative. Wave 7 was the first complete survey wave covering exposure to the Marijuana Initiative. The significant association from this analysis was that higher levels of specific exposure were associated with higher levels of initiation of marijuana use among previously nonusing youth. Westat also examined the longitudinal relationships between exposure and initiation for nine subgroups of youth (two sexes, three race/ethnicity groups, two risk groups, and two nonoverlapping age groups). For several subgroups, it found significant associations between specific exposure and marijuana initiation. These associations were in a direction that was unfavorable to the campaign, in that greater specific exposure was associated with higher levels of initiation. The subgroups for which these unfavorable associations were most pronounced included 12½- to 13-year-olds, girls, African Americans, and lower risk youth.
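The kind of confounder-adjusted longitudinal analysis described above can be illustrated with a small simulation. This is not Westat's model or data; the variables, effect sizes, and estimator below are all hypothetical, chosen only to show why a naive exposure-initiation comparison can differ from one that controls for a confounder such as sensation seeking.

```python
# Illustrative only: NOT Westat's model or data. A confounder
# ("sensation seeking") drives both recalled ad exposure and later
# marijuana initiation; exposure itself has no true effect. A naive
# logistic regression shows a spurious positive exposure coefficient,
# while adding the confounder as a control shrinks it toward zero.
import math
import random

random.seed(1)

rows = []
for _ in range(1500):
    sensation = random.gauss(0.0, 1.0)  # hypothetical confounder
    # Higher-sensation youth are more likely to recall campaign ads.
    exposed = 1.0 if random.random() < 1 / (1 + math.exp(-sensation)) else 0.0
    # Initiation depends only on the confounder, not on exposure.
    p_init = 1 / (1 + math.exp(-(-2.0 + 1.2 * sensation)))
    rows.append((exposed, sensation, 1.0 if random.random() < p_init else 0.0))

def logit_fit(xs, ys, steps=800, lr=0.5):
    """Logistic regression by plain gradient ascent on the log-likelihood."""
    beta = [0.0] * len(xs[0])
    for _ in range(steps):
        grad = [0.0] * len(beta)
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-sum(b * xi for b, xi in zip(beta, x))))
            for j, xj in enumerate(x):
                grad[j] += (y - p) * xj
        beta = [b + lr * g / len(xs) for b, g in zip(beta, grad)]
    return beta

ys = [r[2] for r in rows]
naive = logit_fit([(1.0, r[0]) for r in rows], ys)           # intercept, exposure
adjusted = logit_fit([(1.0, r[0], r[1]) for r in rows], ys)  # + confounder

print(f"naive exposure coefficient:    {naive[1]:+.2f}")
print(f"adjusted exposure coefficient: {adjusted[1]:+.2f}")
```

The gap between the two coefficients is the point: without the control, exposure appears harmful (or helpful) purely because of who chooses to watch. Westat's actual analysis used propensity score models and many more covariates for the same purpose.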
On cessation and reduction of marijuana use, Westat assessed two outcomes among current marijuana users: the rate at which they quit using marijuana and their frequency of use. The frequency of use measure allowed for campaign effects to be observed if users did not quit but reduced their use of marijuana. Westat estimated that the quit rate among prior-year users of marijuana—the percentage of prior-year users reporting that they no longer used marijuana—was 24.8 percent. However, it found no statistically significant association between general exposure and quitting or between specific exposure and quitting. It also found that among adolescent marijuana users, the frequency of use—increase, decrease, or no change—was not affected by exposure to the campaign.

Conclusions

A well-designed and executed multiyear study of the impact of the ONDCP anti-drug media campaign on teen initiation of drug use, or cessation of drug use, shows disappointing results for the campaign. The study provides no evidence that the campaign had a positive effect in relation to teen drug use, and shows some indications of a negative impact. Some intermediate outcomes, such as parents talking with children about drugs and doing fun activities with their children, showed positive results in that the media campaign encouraged parents to adopt these behaviors. However, other intermediate outcomes, such as parents’ monitoring of their children’s behavior, were not shown to be affected by the campaign. Moreover, the evaluation did not provide evidence that intermediate outcomes that showed positive results translated into greater resistance to drugs among the teenage target population. Unfavorable preliminary findings from the evaluation were reported by Westat in 2002. Beginning in 2002, ONDCP took a number of steps that were intended to strengthen the power of the campaign to achieve positive results.
These steps included more rigorous ad copy testing and a concentration on anti-marijuana messages. However, the post-2002 results yielded no evidence of positive impacts and some evidence of negative and unintended consequences in relation to marijuana use. Specifically, exposure to advertisements during the redirected campaign was associated with higher rates of marijuana use initiation among youth who were prior nonusers of marijuana. Most parents and youth recalled exposure to the campaign messages and, further, they recognized the campaign brand. Thus, the failure of the campaign to show positive results cannot be attributed to a lack of recognition of the messages themselves. This raises concerns about the ability of messages such as these to influence teen drug attitudes and behaviors. It raises questions concerning the understanding of the factors that are most salient to teens’ decision making about drugs and how they can be used to foster anti-drug decisions. Westat’s evaluation is centered on this particular configuration of a media campaign as it was presented from 1999 to 2004, and its results pertain to the campaign nationwide. It cannot be construed to mean that a media campaign that is configured differently from this one cannot work. Nor should its results be construed to mean that in some locations, for some groups of youth, the campaign did not have an effect on drug use. However, substantial effort and expertise were brought to the task of designing the advertisements from the outset, and the 2002 redirection of the campaign placed even greater emphasis on copy testing and enhanced ONDCP oversight. This casts some doubt on the notion that a better media campaign can lead to positive results.
It is also important to note that two recent smaller studies in three locations have provided evidence of a limited effect of the campaign for some youth, and it is quite possible that additional analyses of the NSPY data using different methods or measures may find other effects of the campaign, at least for some adolescents, than have been produced by Westat’s evaluation team. The data from the evaluation have only recently been made available to academic and other researchers, and while the analyses undertaken by Westat are, as we have noted elsewhere, appropriate and thorough, they are not exhaustive. It is heartening that surveys intended to measure teen drug use, such as Monitoring the Future, are showing declines in marijuana use in recent years. Indeed, NSPY also shows some evidence of a decline in drug use among teens. However, Monitoring the Future and other surveys of teens concerning drug use are not linked to exposure to the media campaign, and NSPY shows no relationship between anti-drug media campaign exposure and favorable drug outcomes for teens. This seems to indicate that unidentified factors other than the anti-drug media campaign are affecting drug use decisions among teens. Although ONDCP has pointed to declines in teen drug use and credited the campaign along with other prevention efforts as contributing to significant success in reducing teen drug use, trend data derived from the Monitoring the Future survey that show declines in teen marijuana use from 2001 to 2005 do not explicitly take into account exposure to the campaign, and therefore, by themselves, cannot be used as evidence of effectiveness. ONDCP has indicated in the past, and we concur, that because these surveys cannot link their results with the media campaign, they do not measure campaign effectiveness. The evaluation of the media campaign reinforces the lack of linkage between the media campaign and teen drug use behavior.
It is important to note that virtually all social science research is imperfect. Attempting to systematically observe and document human behavior in real-world settings is a daunting task given the extremely wide variation in both humans and settings. We believe that the evaluation of the ONDCP media campaign is credible in that it was well designed given the circumstances of the campaign, and appropriately executed.

Matter for Congressional Consideration

In light of the fact that the phase III evaluation of the media campaign yielded no evidence of a positive outcome in relation to teen drug use and congressional conferees’ indications of their intentions to rely on the Westat study, Congress should consider limiting appropriations for the National Youth Anti-Drug Media Campaign beginning in the fiscal 2007 budget year until ONDCP is able to provide credible evidence of the effectiveness of exposure to the campaign on youth drug use outcomes or provide other credible options for a media campaign approach. In this regard we believe that an independent evaluation of the new campaign should be considered as a means to help inform both ONDCP and congressional decision making.

Agency Comments and Our Evaluation

We provided a draft of this report to the Director of the Office of National Drug Control Policy for comment on July 31, 2006. ONDCP provided us with written technical comments on the report, which we incorporated where appropriate. In addition, ONDCP provided written comments about our report in which it raised a question about our matter for congressional consideration and outlined a number of concerns that it had with our report on Westat’s findings. These written comments are reproduced in appendix II. In our evaluation of ONDCP’s written comments, we address each of these concerns in the order ONDCP presented them.
Westat Evaluation’s Role in Judging the Impact of the Advertising Campaign

“ONDCP, on the other hand, is measuring the impact of the Media Campaign with a thorough, rigorous, and independent evaluation. The nationally representative evaluation is being conducted for ONDCP by the National Institute on Drug Abuse (NIDA).…The evaluation is a 4-year longitudinal study of parents’ and their children’s exposure and response to the Media Campaign.…ONDCP will be able to assess the extent to which changes in anti-drug attitudes and beliefs or drug using behavior can be attributed to the Media Campaign.”

ONDCP officials had opportunities during the evaluation to raise concerns about Westat’s design and its efforts to establish a link between exposure to the campaign and outcomes, but we are not aware of their having done so. However, we are aware of ONDCP’s participation in a NIDA-sponsored expert panel review of Westat’s evaluation that was held in August 2002. Our review of the minutes of that meeting reveals that while an ONDCP official raised concerns about issues such as assessing the nonadvertising components of the campaign and the number of interim reports, ONDCP officials did not at that time raise concerns that the evaluation was fundamentally flawed. The consensus of the expert panel was that Westat’s evaluation was “pretty impressive” given the challenges presented by the absence of baseline data and of an experimental design. Panel members also termed Westat’s use of propensity score models to isolate the effects of the campaign both “sensible” and “state-of-the-art.” ONDCP further states that major advertisers evaluate the success of their campaigns by rigorously testing advertisements prior to airing and by developing correlations between messages and consumer attitudes and behavior.
While we do not dispute whether this is a commonly used approach among major advertisers, we believe that in assessing the expenditure of public funds researchers should attempt, where feasible, to establish causal relationships or use research designs that attempt to isolate the effects of federally funded interventions. While we acknowledge that establishing causal relationships is difficult, we maintain that Westat used sophisticated and appropriate statistical methods that aimed to isolate the effects of recalled exposure to the campaign on youth drug use. Further, adopting a methodology that relies upon correlations between advertising messages and an outcome, such as reductions in youth drug use, without attempting to take into account many of the other factors that could affect drug use allows for too many post hoc explanations of findings. Westat’s analysis included socioeconomic factors, parent characteristics, television viewing habits, risk of using drugs, and sensation-seeking tendencies in order to determine whether exposure was related to drug use net of the influences of these factors. We conclude, on the basis of our assessment of Westat’s methods, that exposure to campaign messages generally did not influence youth drug use net of these other influences. ONDCP notes that correlational findings have been used to assess anti-tobacco advertising campaign results. We have not reviewed the anti-tobacco campaign and cannot comment on its relationship to youth smoking prevalence. We notice, however, that in ONDCP’s comments on “Consequences of Further Budget Cuts,” it appears to contradict its statements about establishing causal relationships to determine the effect of advertising campaigns.
ONDCP writes, “Previous studies have established a relationship between exposure to anti-tobacco messages and smoking rates among teens.” ONDCP goes on to draw an analogy between anti-smoking messages and anti-drug messages to write, “We should expect similar results for illicit drug use if anti-drug messages decline.” These statements emphasize very directly the same kind of causal relationships that ONDCP cites as not appropriate in its opening comments. We also note that ONDCP indicates in its comments that it has made multiple refinements to the media campaign on the basis of earlier findings from the Westat study. This seems to be inconsistent with a position of major concerns with the fundamental soundness of the study. Finally, the three research papers that ONDCP cites on page 2 of its comments on “Conflicting Evidence from Other Research” all use exposure-response methodologies that are analogous to Westat’s and all attempt to isolate the causal effects of exposure either to ONDCP’s campaign or to other media campaigns. Thus, it would seem that ONDCP’s comment that efforts to isolate causal effects of media campaigns are fundamentally flawed would also apply to these three studies.

ONDCP Made Campaign Changes as a Result of Westat Interim Findings

“These surveys [MTF, the National Household Survey on Drug Use, and the Youth Risk Behavior Survey] will permit the determination of whether drug use behavior and related attitudes and beliefs changed after the launching of Phase III of the Media Campaign in mid-1999. However, they will not be able to answer the critical question of whether these changes were the result of the Media Campaign. These surveys do not ask respondents about their exposure and reactions to the messages of the Media Campaign that can then be linked to their drug-related attitudes and behavior.”

More recently, in late 2005, ONDCP launched a newly designed campaign.
The impact of this campaign is not known and should be independently evaluated.

Other Youth Drug Use Findings

ONDCP believes we did not provide adequate discussion of studies that report findings contrary to those of Westat. Our report mentions two of the three studies that ONDCP identifies—the Longshore and Palmgreen studies. Our report does not mention the third study, Slater, because it focused on a different anti-drug media campaign approach and not on the ONDCP media campaign. Overall, these studies’ findings are not necessarily “contrary” to Westat’s findings. Rather, they assess small slices of the youth population or particular circumstances (such as other programs that could reinforce an anti-drug message) and find some positive results. The Westat national findings do not preclude the findings of positive results for some subpopulations of youth. The Palmgreen study, for example, found a positive effect for the media campaign on high-sensation-seeking youth, but did not find an effect on non-high-sensation-seeking youth in the two moderate-size communities in which the study was conducted. The distribution of these youth in the nationwide population could be consistent with both studies being correct. Our objective was to assess the Westat study as a national evaluation of the impact of the national campaign. In the Slater study, after being trained in the use of campaign media materials, leaders in each of eight communities that received a media campaign were allowed to develop their own media strategies and were able to use whatever materials they chose or developed on their own. This approach emphasized the flexibility to adopt different media strategies deemed appropriate by individual communities and not the use of a single national strategy. ONDCP expressed concern that we had not discussed Westat’s hypothesis concerning why the campaign might have contributed to youth experimentation with marijuana.
We are unable to draw a conclusion about this hypothesis based on Westat’s report, nor do we have additional information upon which to base an assessment. ONDCP also faults our report for not discussing other potential competing explanations for the substantial downturn in teen drug use and increase in anti-drug attitudes. Although this is beyond the objectives of this report, we note that multiple other indicators of youth responsibility also seem to be trending in a positive direction at the same time that MTF reports declines in youth drug use. For example, from 1991 through 1999, the teen pregnancy rate declined by 27 percent, and from 1991 through 2002, the teen birth rate fell 30 percent. Similarly, the number of juvenile homicides declined by 44 percent from 1993 to 2002, and the juvenile violent crime arrest rate fell by more than 40 percent from 1994 to 2003. All of these trends—including declines in drug use—could be related to broader environmental, familial, or other influences. The coincidence of these trends with drug use trends indicates that factors other than the campaign could be responsible for the decline in drug use and points to the necessity of trying to isolate the effects of the campaign, rather than relying upon simple correlations.
Steps Taken to Remedy Potential Problems

ONDCP states that it has taken extensive “due diligence” steps that are briefly acknowledged in our report, but that our report “fails to acknowledge the thoroughness of our actions to identify, assess, and attenuate any possible negative consequences of the campaign once Westat reported the possibility of such an effect.” Apart from the actions described in Westat’s evaluation reports, a full discussion of the steps that ONDCP took in response to Westat’s interim evaluation reports—which highlighted the possibility of unintended negative consequences of exposure to the campaign on youth initiation of marijuana—was not salient to our assessment of whether Westat took appropriate steps to address the evaluation implementation challenges that it faced. However, Westat’s findings for the period from 2002 to 2004 showed that the campaign also was not effective after ONDCP took these steps.

ONDCP Cites Major Changes in Campaign

ONDCP states that the campaign is substantially different from what it was when the last data were collected by Westat more than 2 years ago. We are not in a position to comment on ONDCP’s new campaign (“Above the Influence”), launched in November 2005, as these current efforts are beyond the scope of our report and outside the time frame of the Westat data collection. At this time, neither we nor ONDCP have empirical information with which to assess this revised campaign. However, Westat’s evaluation showed that neither the campaign as initially implemented nor the redirected campaign implemented after 2002 was effective. Hence, although a new and improved campaign may be effective, Westat’s findings raise concerns about whether any campaign can affect youth drug use, especially since the lack of effect does not seem to be related to recognition of campaign ads, but rather to subsequent impact on attitudes and behaviors.
Finally, ONDCP cites the receipt of awards from both the advertising and communications industry for its newest campaign. While laudable, these awards are not evidence that the new campaign will change youth drug attitudes and behavior. Only an independent evaluation can assess the current campaign’s effectiveness.

ONDCP Offers an Alternative Explanation for Counterintuitive Results

ONDCP stated that there is growing research evidence showing that asking people a question about their future behavior influences the subsequent performance of the behavior in question. ONDCP then indicates that the use of a panel design for the Westat study with repeated interviews of youth concerning drug attitudes and behaviors might, itself, have resulted in increased perceptions that drug use is widely pervasive among youth. If, during the course of the Westat study, ONDCP and NIDA, which acted as monitor for the study, felt that the study itself—that is, repeated interviews of youth by Westat concerning the campaign and drug attitudes and behavior—was resulting in a negative effect, it would have been appropriate for them to discontinue the study to avoid potential harm to subjects. Although ONDCP raised this issue in its comments to us, neither ONDCP nor NIDA mentioned this issue in any of our previous meetings specific to this engagement.

ONDCP Takes Issue with the Timing of Our Review

ONDCP said that the “long delay” in receiving our assessment of the Westat report has prevented it from making progress on the next round of evaluation. We note that Westat’s draft final report was not made available to us until spring 2005 (not 2 years ago as seems to be indicated in ONDCP’s comments). The volume of reports from the 4½-year study and the complexity of the review required a great deal of time from our most skilled social scientists and statisticians. Time was required to ensure that our review of the Westat study was both comprehensive and correct.
Points Concerning Our Matter for Congressional Consideration

ONDCP said that our matter for congressional consideration—that Congress consider limiting appropriations until ONDCP is able to provide credible evidence of the effectiveness of exposure to the campaign on youth drug use outcomes—offers insufficient detail concerning how to demonstrate satisfactory evidence of progress and that it was puzzled by our lack of recommendations to ONDCP for improving the campaign. Our mandate was to assess Westat’s evaluation and to draw conclusions about the reliability of its findings so that Congress could make decisions about funding for the campaign; developing suggestions for improvements to the media campaign itself was beyond our scope. In so doing, we focused on Westat’s methods and efforts to address challenges in implementing the evaluation. Our matter for congressional consideration was intended to allow ONDCP to explore a number of approaches to providing credible evidence of campaign effectiveness to Congress. Our report clearly indicates that one approach is the one applied in the Westat evaluation, which is the focus of this report, but we do not want to rule out other approaches. At the same time, we acknowledge that providing such evidence is not easy.

ONDCP Posits Consequences of Further Budget Cuts

ONDCP states that further budget cuts to the campaign could have far-reaching and unfavorable consequences for youth drug use. Given that the Westat findings show that the campaign was not having a positive impact, we found no evidence that a reduction in campaign advertisements would have a negative impact. ONDCP cites the 2005 MTF as an indicator of media campaign effectiveness by indicating that the reduction in anti-drug messages has resulted in a flattening of 8th graders’ perception of risk.
Again, as ONDCP has indicated, the relationship cannot be assessed with MTF because it does not ask respondents about their exposure and reactions to the messages of the media campaign that can then be linked to their drug-related attitudes and behaviors. Failure to continue the media campaign’s efforts, according to ONDCP, is “raising a white flag to those who favor drug legalization, with the expectation that youth drug use soon would begin to rise, reversing years of hard-earned positive news.” In our view, on the other hand, continuation of programs that have been demonstrated not to work diverts scarce resources from programs that may be more effective. We are sending copies of this report to other interested congressional committees and the Director of the Office of National Drug Control Policy. We will make copies of the report available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact either Nancy Kingsbury at 202-512-2700 or by e-mail at KingsburyN@gao.gov or Laurie Ekstrand at 202-512-8777 or by e-mail at EkstrandL@gao.gov. Contact points from our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are listed in appendix III.

Appendix I: Westat’s Methods for Addressing Evaluation Implementation Issues

This appendix provides additional details about how Westat addressed evaluation implementation issues related to the coverage of the National Survey of Parents and Youth (NSPY), sample attrition, and its analytic methods.

Coverage in the NSPY

The NSPY was a nationwide household survey of youth aged 9 to 18 and their parents. Westat used a dual-frame sampling design—a sampling frame being the list of the members of the population from which the sample was ultimately selected.
One frame—the area frame—consisted of housing units that had been built by late 1991; the second frame—the building permit frame—consisted of building permits issued between January 1990 and December 1998 for new housing. Combined, these frames constituted an estimated 98 percent of dwelling units nationwide that existed by the end of 1998. A household had to meet two criteria in order to be eligible to be included in the NSPY sample: It had to (1) contain children within a specified age group and (2) be a housing unit that was built before April 1, 1990, was a mobile home, or was selected from a roster of building permits for new housing units issued between January 1990 and December 1998. To identify households that met these conditions, Westat drew a sample of dwelling units, and from this sample it screened households to determine their eligibility for inclusion in the NSPY, that is, whether a household contained children in a specified age group, where the specified age groups were children aged 9 through 13, 12 and 13, or 9 through 18. According to estimates provided by Westat, after completing enrollment in the NSPY—which occurred during waves 1 through 3—the NSPY sample covered more than an estimated 95 percent of occupied dwelling units (households) nationwide. From its sample of occupied dwelling units, Westat developed rosters of households that were believed to contain youth in the target age range. At this second stage of sample enumeration, Westat experienced a drop-off in the coverage of households that were believed to be eligible for inclusion in the sample. The number of eligible households enumerated in the NSPY was 30 percent smaller than the number expected from the 1999 Current Population Survey (CPS) data.
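The two-part eligibility rule just described can be sketched as a simple predicate. The function and field names below are hypothetical illustrations of the criteria as paraphrased in this appendix, not Westat's actual screening instrument.

```python
# Sketch of the NSPY two-part eligibility screen described above.
# Field names and example values are hypothetical.
from datetime import date

def dwelling_in_frame(built: date, is_mobile_home: bool, on_permit_roster: bool) -> bool:
    """Criterion 2: unit built before April 1, 1990, a mobile home, or on
    the January 1990 - December 1998 building-permit roster."""
    return built < date(1990, 4, 1) or is_mobile_home or on_permit_roster

def household_eligible(child_ages, age_group, built, is_mobile_home,
                       on_permit_roster) -> bool:
    """Criteria 1 and 2 combined: at least one child in the specified age
    range AND a dwelling covered by one of the two sampling frames."""
    lo, hi = age_group  # e.g., (9, 13), (12, 13), or (9, 18) per screener group
    has_eligible_child = any(lo <= a <= hi for a in child_ages)
    return has_eligible_child and dwelling_in_frame(
        built, is_mobile_home, on_permit_roster)

# A household with a 12-year-old in a pre-1990 unit, screened for ages 9-18:
print(household_eligible([12], (9, 18), date(1985, 6, 1), False, False))  # True
```

Either criterion failing excludes the household, which is why losses at the doorstep screening stage translate directly into undercoverage of the eligible population.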
According to Westat, coverage losses in the NSPY could have occurred for several reasons: (1) because an interviewer may have decided to classify a household as an ineligible household rather than as a nonresponding household, (2) because the household respondent took cues from the screening questions to avoid selection into the sample by giving an incorrect answer, or (3) because the doorstep enumeration process was considered to be intrusive. Westat reported that it could not conclusively rule out the first explanation for coverage losses. However, it undertook sample validation procedures that examined whether ineligible households in the recruitment waves were misclassified, and it found no misclassified households. Neither Westat nor the National Institute on Drug Abuse reported that undercoverage was primarily due to respondents taking cues from the screening questions and giving incorrect answers as a way to avoid selection into the sample. Overall, Westat reported that the main reason for undercoverage was the rostering component of the survey, which required actual entry into the home and led to “a great many respondents” asking the interviewer to come back at a later date, only to repeat the request when the interviewer reappeared. Westat inferred that this represented passive refusal to participate. Therefore, according to Westat, most of the coverage losses occurred during the doorstep screening process in which simple, focused screening questions about the composition of the household were used to identify households from which to sample eligible youth.

NSPY and CPS Comparisons of Distributions on Analyzed Variables

In response to questions from us, Westat provided data that indicated that the coverage losses in the NSPY did not result in differences in the estimated distributions of population characteristics from the NSPY as compared with those estimated from the CPS data.
In other words, the distributions of characteristics of eligible households with youth included in the NSPY were broadly consistent with a variety of corresponding distributions from the 1999 Current Population Survey. The comparisons of NSPY-estimated populations to CPS-estimated populations were based on weighted NSPY estimates, where the weights adjusted for nonresponse at the doorstep and household enumeration (roster) stages and also reflected the differential probabilities of retaining a household for the NSPY depending on the screener group to which it was applied. These weights were calculated prior to Westat’s poststratification calibration techniques, which brought the estimated NSPY population totals into line with the estimated CPS population totals. Hence, if, using the weights based only on the probability of selection and nonresponse adjustments, the population characteristics in the NSPY differed widely from those derived from the CPS, this would constitute evidence of potential bias in the NSPY sample due to undercoverage. Westat compared NSPY and CPS distributions for each of the three enrollment waves of the NSPY (waves 1 through 3) on several variables, including the race/ethnicity of the householder and the presence of males 28 years of age or older, the distribution of eligible households by the age of the youth in the household, the age and gender distributions of youth, and the age distributions of youth by race and ethnicity. Each of these comparisons involved discrete subgroups within the focused subpopulation of the NSPY. The largest differences between the NSPY and CPS estimates arose in the comparison of the distributions by race/ethnicity of the householder and the presence of a male 28 years of age or older in the household. Some of these differences could also arise from sampling variance, as both the NSPY and CPS estimates are based on samples that are subject to sampling errors.
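Whether a given NSPY-CPS gap exceeds sampling error can be checked with a standard two-sample comparison of proportions. The sketch below uses a simple z statistic with hypothetical proportions and subgroup sample sizes; a real comparison would also need the surveys' design effects, which this simplification ignores.

```python
import math

def two_sample_z(p1, n1, p2, n2):
    """z statistic for the difference between two estimated proportions,
    using a simple-random-sampling variance (no design effects)."""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) / se

# Hypothetical figures: a 3-point gap between a survey estimate and a
# benchmark estimate can fall within sampling error when the subgroup
# samples are modest.
z = two_sample_z(0.32, 800, 0.29, 1500)
print(abs(z) < 1.96)  # not significant at the 5 percent level
```

The point of the sketch is only that an observed difference between two sample-based distributions is not, by itself, evidence of coverage bias until it is weighed against both surveys' sampling errors.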
Although Westat did not provide sampling errors with the estimates that it provided to us, some of the differences in distributions could be apparent, rather than real, differences in statistical terms. Undercoverage in the NSPY and Other Widely Known and Used Longitudinal Surveys Coverage issues are not an uncommon problem with surveys that focus on relatively small subpopulations within a larger population, such as the NSPY’s focus on youth aged 9 to 18. The NSPY’s target population of households with youth aged 9 to 18 focused on a subpopulation that, according to 1999 CPS data, constituted about 25 percent of the roughly 104 million households in the United States. The estimated extent of undercoverage of eligible youth in the NSPY was comparable to the extent of undercoverage in other well-known and widely used longitudinal surveys. Both the National Longitudinal Survey of Youth (NLSY)—sponsored by the Bureau of Labor Statistics—and the National Immunization Survey of Children (NIS)—sponsored by the National Immunization Program (NIP) and conducted jointly by the NIP and the National Center for Health Statistics of the Centers for Disease Control and Prevention—focus on specific subpopulations, and both experienced undercoverage that was comparable to that of the NSPY. The 1979 NLSY is a nationally representative sample of men and women born in the years 1957 to 1964 who were ages 14 to 22 when first interviewed in 1979. It had a coverage rate of 68 percent. The 1979 NLSY has been widely used and cited to examine a wide variety of policy issues. As documented in the National Longitudinal Surveys’ annotated bibliography, about 3,100 journal articles, working papers, monographs, and other research documents have been catalogued as having used the 1979 NLSY data.
The target population for the NIS is children between the ages of 19 and 35 months living in the United States at the time of the interview, and it has been conducted annually since 1994. The survey involves the selection of a quarterly probability sample of telephone numbers, and its coverage has been about 20 percent lower than that estimated by two other benchmark surveys. Survey data are used primarily to monitor immunization coverage in the preschool population in the nation and to provide national, state, and selected urban area estimates of vaccination coverage rates for these children. Sample Attrition across NSPY Interview Rounds In the NSPY, respondents initially recruited into the sample were to be tracked for three additional survey rounds that covered about a 3-year period following the recruitment round. By the final survey round of the NSPY, the cumulative response rate—the percentage of youth or parents in eligible households that completed all four interviews—reached between 50 percent and 55 percent. These cumulative response rates after four survey rounds were determined largely by the response rates during the enrollment waves, because, after enrollment, Westat was able to track, contact, determine eligibility for reinterview, and complete interviews for between 82 percent and 94 percent of previously interviewed respondents between two successive interview waves. The response rates achieved for the first three survey waves—the enrollment waves—were generally similar. Specifically, about 74 percent to 75 percent of the dwelling units determined to be eligible for the survey in waves 1 through 3 completed the household enumeration (or rostering of youth). After obtaining consent to conduct interviews from parents and youth, interviewers completed extended interviews—that is, completed the full NSPY questionnaire—with about 91 percent of the sampled youth in each of waves 1 through 3.
Among sampled parents, about 88 percent gave consent and completed extended interviews in the enrollment waves. (See table 2.) Across the three follow-up rounds of the NSPY, Westat achieved between an 82 percent and a 94 percent longitudinal response rate. Follow-up required that respondents be tracked over time and across places, as persons enrolled in the sample could move, and their eligibility for a follow-up interview had to be determined. For example, youth who turned 19 years of age between survey rounds would no longer be eligible for reinterview, as they were beyond the target age of the campaign. Efforts to track individuals prior to the second survey round included verifying address change information with the U.S. Postal Service and obtaining location information from a national database company. Westat obtained updated location information from these sources, and telephone interviewers placed calls to these households to verify the identity of respondents. According to Westat, a high proportion of the households that moved were contacted and respondents verified their new addresses. During the third and fourth survey rounds, Westat used procedures to track and verify addresses that were similar to those used to track respondents from the first to second survey rounds, although Westat modified these procedures as necessary. The key eligibility requirement for a follow-up youth interview was that the youth had to be 18 years of age or younger at the time of the interview. For the first follow-up round—waves 4 and 5—Westat located individuals and determined eligibility for 92 percent of the youth and 92 percent of the parents who completed an initial interview during the first round of the survey—that is, in waves 1, 2, and 3. Of these youth who were still eligible, 94 percent completed an interview. Among parents from the first round who were tracked and determined to be eligible in the second round, 92 percent completed a second round interview.
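The cumulative response rates described above compound multiplicatively across survey stages. A minimal sketch of that arithmetic, with purely illustrative stage-level rates (not Westat's exact figures):

```python
# Illustrative, hypothetical stage-level rates chosen only to show the
# compounding arithmetic; they are not Westat's exact figures.
enrollment = 0.75 * 0.91          # rostering rate x extended-interview rate
per_round = [0.87, 0.92, 0.92]    # longitudinal rates for rounds 2 through 4

cumulative = enrollment
for rate in per_round:
    cumulative *= rate
print(round(cumulative, 2))  # -> 0.5, within the reported 50-55 percent band
```

The sketch shows why even follow-up rates above 85 percent per round, applied to a roughly 68 percent enrollment-stage completion rate, drive the four-round cumulative rate down to the 50 to 55 percent range.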
In the third and fourth survey rounds of the NSPY, between 96 percent and 97 percent of the youth and parents who had completed prior round surveys were tracked and determined to be eligible, and of these, the youth response rates were 96 percent and the parent rates were 95 percent. Comparisons of Respondents and Nonrespondents across NSPY Survey Waves Even with the relatively high follow-up response rates that Westat achieved, it is possible that respondents could differ from nonrespondents in follow-up rounds, and if so, the NSPY estimates of the effects of exposure on outcomes would be biased. Westat provided data that compared nonrespondents to respondents across the three enrollment waves, indicating that, with some differences, nonrespondents were generally similar to respondents with respect to characteristics that might affect survey outcomes. Nonrespondents were compared to respondents on gender, age at interview, whether both parents were in the household, the number of youth in the household, the type of household dwelling, and the type of area in which the household was located. Apart from the following three differences, nonrespondents and respondents were similar in characteristics across survey waves: In the three enrollment waves, nonrespondents were proportionately older youth than respondents; in waves 2 and 3, there were proportionately more youth living in cities among nonrespondents than respondents; and in wave 1, there were proportionately more youth in the building permit sample among nonrespondents than respondents. Differences in Sampling Methodologies between NSPY and MTF Westat compared estimates of drug-use prevalence from the NSPY data with those obtained from other national surveys such as Monitoring the Future (MTF).
While the NSPY estimates of marijuana use prevalence differ over some periods covered by the NSPY from those derived from the MTF survey of youth in school, differences between the two surveys’ sampling frames and methodologies mean that direct comparisons between the two surveys must be made with caution and must take the methodological differences into account. Specifically, MTF showed a decline in marijuana use for some teenage groups during the 2000 to 2002 period, while the NSPY showed the increases reported above. However, the difference in drug use rates reported from the two surveys could plausibly arise from differences in the sampling frames. The MTF sampling frame covers only youth who are in school and not those who drop out of school, who are truant on the survey day, or who are 17- and 18-year-olds who have graduated from high school. To the extent that high school dropouts and truants have more involvement with drugs than those who stay in school, the MTF estimates of drug use may underrepresent drug use among all youth of high school age. By comparison, the NSPY household survey includes youth who are not enrolled in school in its sampling frame. To the extent that dropping out of high school is correlated with drug use, differences in drug use between MTF and NSPY could reflect the fact that youth enrolled in high school reported drug use at different rates from all youth in the general population covered by the NSPY, which includes dropouts who may be at higher risk of using drugs.
The Capacity of the NSPY to Detect Reasonably Small Effects One challenge in designing surveys to evaluate changes in outcomes as the result of an intervention lies in selecting a sample with sufficient power to detect differences between groups—including the same individuals at two points in time—or significant associations among variables, such as between levels of exposure to the campaign and outcomes. Sample size is a major factor determining a study’s power to detect differences: larger samples generally allow researchers to detect smaller differences over time, but as the size and power of a sample to detect changes increase, so generally does its cost. In consultation with the National Institute on Drug Abuse (NIDA), Westat chose to compute power for analyses of annual change in a prevalence statistic—that is, change in the percentage of a population that reported an outcome. For purposes of its power analysis, Westat chose to assume different baseline prevalences for parents and for youth of all ages and to assume that the study should be able to reliably detect declines of specified sizes. For example, for youth of all ages, Westat assumed a baseline prevalence of 10 percent and determined the power of its sample for detecting a minimum downswing in an outcome—such as past-month drug use—of 2.3 percentage points over a year. The power of the sample to detect this difference was well within conventional power criteria. As reported above, the sizes of differences that Westat’s sample could detect were consistent with the Office of National Drug Control Policy’s (ONDCP) goals for the campaign.
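A power calculation of the kind described above can be sketched with the normal approximation for two proportions. The per-group sample size below is hypothetical, and the calculation ignores the design effects of a complex survey, which would reduce effective sample size; Westat's actual computations accounted for its design.

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_two_props(p1, p2, n, z_alpha=1.96):
    """Approximate power to distinguish prevalences p1 and p2 with n
    observations per group, two-sided 5 percent test, normal
    approximation (no survey design effects)."""
    se = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    return norm_cdf(abs(p1 - p2) / se - z_alpha)

# Baseline prevalence of 10 percent versus a 2.3-percentage-point
# decline; n = 3,000 per group is hypothetical.
print(round(power_two_props(0.10, 0.077, 3000), 2))
```

With these illustrative inputs the approximate power exceeds the conventional 0.80 criterion, which is the sense in which a sample can be said to reliably detect a decline of a specified size.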
In early meetings on the design of the evaluation of the media campaign, ONDCP officials reported that ONDCP had a specific Performance Measures of Effectiveness (PME) system and that the campaign was embodied within the first goal of the National Drug Strategy, which was to “educate and enable America’s youth to reject illegal drugs as well as the use of alcohol and tobacco.” Under this goal, ONDCP’s PME proposed targets for reducing the prevalence of past-month use of illicit drugs and alcohol among youth from a 1996 base year: by 2002, reduce this prevalence by 20 percent, and by 2007, reduce it by 50 percent. ONDCP officials further identified specific targets for the media campaign, again with respect to a base year of 1996: by 2002, increase to 80 the percentage of youth who perceive that regular use of illicit drugs, alcohol, and tobacco is harmful; and by 2002, increase to 95 the percentage of youth who disapprove of illicit drug, alcohol, and tobacco use. Achieving the goal of 80 percent of youth perceiving that regular use of marijuana is harmful would require increasing the baseline percentage from 60 percent, as measured by MTF in 1996, to 80 percent, or by about 3.3 percentage points per year from 1996 to 2002. Westat’s sample had sufficient power to detect this amount of annual change in youth attitudes. The power of the NSPY to detect changes in outcomes due to exposure to the campaign also presumes that it was possible to accurately measure and characterize exposure to the campaign by the reported number of advertisements recalled by respondents. While the general question of how exposure to advertisements affected respondents was beyond the scope of the evaluation, if by exposure is meant a recognition-based task—or encoded exposure—then the NSPY measures of exposure can be viewed as valid.
According to communications researchers, often what is of interest to campaign planners and evaluators is whether the presentation of campaign content generates at least a memory trace in individuals. At this point, a potential audience member can be said to have engaged the campaign’s presentation in a meaningful sense, and this is what is meant by encoded exposure. To measure exposure to the campaign for both youth and parents, NSPY interviewers asked respondents about their recall of specific current or very recent television and radio advertisements. There was variation in recall of advertisements by both youth and parent respondents, and this type of variation is needed in order to examine associations between levels of exposure and outcomes. For example, for the entire campaign, youth reported a median of 12 exposures per month, and 76.7 percent reported 4 or more exposures per month. Comparatively few youth—about 6 percent—reported less than 1 exposure per month. Youth recall of specific exposure also varied, as 41.2 percent of youth reported 12 or more television exposures per month throughout the campaign while reporting a median of 4.4 exposures to television advertisements. Additionally, Westat’s measures of exposure and outcomes demonstrated sensitivity to detect favorable campaign effects among parents. Westat’s test for associations between exposure and outcomes—the gamma coefficient—was an ordinal test statistic for whether two variables (e.g., exposure and marijuana use initiation) have a monotonic, but not necessarily a linear, relationship. Therefore, had there been nonlinear but monotonic relationships, its test would have allowed for them. Finally, nonrandom measurement error in the measure of exposure is unlikely to have biased estimates of campaign effects: if the nonrandom measurement error were constant, it would not affect measures of association, and if it were not constant, it would be addressed by Westat’s statistical methods.
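The gamma coefficient described above can be computed directly from a contingency table of exposure level by outcome level. The table below is hypothetical; the sketch shows only the pair-counting definition of the statistic, not Westat's full estimation with survey weights.

```python
def goodman_kruskal_gamma(table):
    """Goodman-Kruskal gamma for an r x c table of two ordinal variables
    (rows = exposure level, columns = outcome level).
    gamma = (C - D) / (C + D), where C and D count concordant and
    discordant pairs of observations."""
    r, c = len(table), len(table[0])
    concordant = discordant = 0
    for i in range(r):
        for j in range(c):
            for k in range(r):
                for m in range(c):
                    if i < k and j < m:
                        concordant += table[i][j] * table[k][m]
                    elif i < k and j > m:
                        discordant += table[i][j] * table[k][m]
    return (concordant - discordant) / (concordant + discordant)

# Hypothetical 3x2 table: exposure (low/medium/high) by outcome (no/yes).
table = [[40, 10],
         [30, 20],
         [20, 30]]
print(round(goodman_kruskal_gamma(table), 2))  # -> 0.53
```

Because gamma depends only on the ordering of pairs, it registers any monotonic association between exposure and outcome, whether or not the relationship is linear, which is the property the text attributes to Westat's test.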
Westat Methods to Measure Outcomes Westat measured a variety of outcomes for youth and parents and took steps to ensure that the measures were consistent with existing research. The youth questionnaires included numerous questions that were designed to measure exposure to the campaign advertisements and other anti-drug messages. The youth question domains included exposure propensity to media; current and past use of tobacco, alcohol, marijuana, inhalants, and Ecstasy; past discussions with and communication of anti-drug messages from parents and friends; expectations of others about respondent’s drug use; knowledge and beliefs about the positive and negative consequences of drug use; exposure to campaign messages; family and peer factors; personal factors; and demographic information. Westat used two separate questionnaires for youth of different ages; one questionnaire was used for children (aged 9 to 11) and another one was used for teens (aged 12 to 18). The NSPY parent questionnaire also included numerous questions that were intended to measure parents’ exposure to the campaign’s messages and other anti-drug messages. The question domains for parents included media consumption; past discussions with child about drug attitudes and avoidance strategies; past child monitoring behaviors; self-efficacy of discussing drugs with child and monitoring of child’s actions; belief that the child is at risk of drug use; belief that drug use has bad consequences; exposure to the campaign’s advertising, including brand recognition; parent’s own current and past use of tobacco, alcohol, and drugs; and demographic information. Westat followed generally accepted procedures in developing the survey instruments for the NSPY by using information from a prototype prepared by NIDA and using information from other surveys that addressed youth drug use and prevention. 
Prior to the phase III evaluation, and in preparation for the NSPY, NIDA convened an expert panel to assist in the development of the youth and parent questionnaires. The panel, which consisted of experts in adolescent drug use prevention and parenting behaviors, drafted NSPY survey questionnaires for children, teens, and parents, and NIDA shared these prototypes with Westat at the beginning of Westat’s evaluation contract. In developing the final NSPY questionnaires, Westat created a questionnaire development team consisting of evaluation experts, and this team reviewed NIDA’s prototype and other surveys. Westat measured youth drug use by self-reported data on use. We have previously cautioned about limitations associated with self-reported data on youth drug use. The National Research Council (NRC) of the National Academy of Sciences has also pointed out limitations associated with self-reported drug use in national surveys such as the National Survey of Drug Use and Health (NSDUH) and MTF. As NRC has pointed out, while self-reported data on drug use may have limitations for estimating the actual levels of use at a particular point in time, they may not suffer from these same limitations when they are used to assess changes in use over time, unless attitudes about drug use, or the stigma attached to reporting it, change in ways that affect respondents’ willingness to honestly report drug use. Specifically, if there is a stigma associated with self-reporting drug use, that stigma may affect the levels of use reported, as some have argued that the propensity of respondents to give valid responses may be affected by social pressures. In particular, the incentive to give false negative reports may increase over time if drug use becomes increasingly perceived as harmful or socially unacceptable.
Using data from NSDUH and MTF, NRC showed an inverse relationship between self-reported drug use and the percentages of respondents who either disapproved of illegal drug consumption or perceived it to be harmful. Thus, as stigma increased, self-reported drug use decreased. As NRC cautioned, one could interpret this relationship as indicating that changes in stigma are associated with changes in invalid reporting (as stigma increases, false negative reports increase), rather than necessarily indicating that as stigma increases, drug use decreases. The NRC analysis leads to two inferences: First, if social stigma remains constant over time, changes in the propensity to give valid responses would be unaffected and estimates of change in self-reported drug use would not be biased by social stigma. For the evaluation results, this would imply that its measures of changes in self-reported drug use would provide valid measures of changes in use, so long as factors other than stigma did not affect the propensity to self-report use. Second, if the social stigma associated with reporting drug use is inversely related to disapproval of illicit drug use or increased perceptions that it is harmful, then the estimates of self-reported drug use are likely to decrease as a result of the stigma. According to results from the evaluation, trends in youth attitudes and beliefs about illicit drugs changed significantly over the entire campaign in a direction that was favorable to the campaign. Specifically, the trends in youth attitudes and beliefs about illicit drug use meant that youth were more likely to believe, as the campaign went on, that use of illicit drugs was likely to have negative consequences. In other words, the social stigma associated with drug use increased over time.
If the relationship between stigma and reporting that NRC found held and applied to the data in the evaluation of the campaign, this would imply that the increased stigma associated with drug use would lead to decreases in self-reports of drug use over time. Westat’s Analytic Methods To control for the many factors that could have influenced both exposure and outcomes independently of, or in conjunction with, the campaign, Westat used propensity scoring methods to match individuals based on numerous measured attributes and to create groups of individuals who differed on their underlying propensity to be exposed to different levels of campaign advertisements. A propensity score is a weighted sum of the individual effects of variables in a model that predicts the likelihood of exposure to campaign messages. Westat’s propensity scoring methods resulted in the creation of groups of individuals who were statistically similar on exposure propensities. These groups can be considered statistical analogues of groups created by randomly assigning individuals to different levels of exposure. After creating these groups, Westat then analyzed outcomes between the groups having different propensities to be exposed to campaign messages. Westat used ordinal logit models to estimate the chances of being exposed, where exposure was measured alternatively as a three- or four-level variable—e.g., low, medium, or high exposure. Westat used numerous variables to predict exposure levels in both the youth and parent models. For example, the youth propensity score models included measures of demographic attributes, educational attainment and educational aspiration, family and parent background, parent consumption of television and other media, income and employment, reading habits, Internet usage, and location of residence in urban areas, among other variables. After estimating models, Westat also assessed the balance of variables in its propensity models.
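The scoring-and-stratification logic described above can be sketched as follows. The covariates, logistic coefficients, and data are hypothetical stand-ins: Westat estimated ordinal logit models from many more survey variables, whereas this sketch uses a fixed binary-logistic score simply to show how units are grouped into strata of similar exposure propensity.

```python
import math
import random

random.seed(0)

def propensity(age, tv_hours, b0=-2.0, b_age=0.05, b_tv=0.4):
    """Logistic propensity to be exposed; coefficients are made up
    for illustration, not estimated from NSPY data."""
    z = b0 + b_age * age + b_tv * tv_hours
    return 1 / (1 + math.exp(-z))

# Hypothetical units with two covariates.
units = [{"age": random.randint(9, 18), "tv": random.uniform(0, 6)}
         for _ in range(1000)]
for u in units:
    u["score"] = propensity(u["age"], u["tv"])

# Stratify into quintiles of the propensity score; within each stratum,
# units are statistically similar on their propensity to be exposed, so
# outcome comparisons across exposure groups are less confounded.
scores = sorted(u["score"] for u in units)
cuts = [scores[i * len(scores) // 5] for i in range(1, 5)]
for u in units:
    u["stratum"] = sum(u["score"] >= c for c in cuts)

counts = [sum(u["stratum"] == s for u in units) for s in range(5)]
print(counts)  # five strata of roughly equal size
```

In an analysis like Westat's, outcomes would then be compared across exposure levels within each stratum, and balance would be checked by confirming that covariate means do not vary across exposure levels within strata.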
For propensity models to remove the effects of confounding variables from the association between exposure and response, it is necessary that the population means of the confounder variables not vary across exposure levels. If a confounder is successfully balanced, then it will have the same theoretical effect across all exposure levels. The net result of the propensity scoring models is to provide each individual with a score that reflects the individual’s propensity to recall advertisements based upon a weighted sum of all of the variables in the model. Therefore, while two individuals may differ on the likelihood that a particular variable affects their chances of being exposed to messages or on their levels of a certain variable—such as age or education—they could be similar in their overall propensity to be exposed to campaign messages if the differential effects of the individual variables sum to the same total propensity. For the results of propensity methods to be valid, it is important that the propensity scoring models include all relevant variables that could otherwise explain differences in both exposure and outcomes. Propensity score models can adjust only for confounding variables that are observed and measured. In other words, they are built upon the assumption that all relevant variables are measured and controlled for. If an important variable is omitted from the propensity model, the results of analyses may be affected. Westat made reasonable attempts to identify a variety of potential confounding variables, include them in its models, and thereby reduce bias. Appendix II: Comments from the Office of National Drug Control Policy Appendix III: GAO Contacts and Staff Acknowledgments In addition to the contacts named above, contributors to this report included David P. Alexander, Billy Commons, James Fields, Kathryn Godfrey, Mary Catherine Hult, Jean McSween, Karen V. O’Conor, Mark Ramage, William J. Sabol, Barry J. Seltser, and Douglas Sloane.
Between 1998 and 2004, Congress appropriated over $1.2 billion to the Office of National Drug Control Policy (ONDCP) for the National Youth Anti-Drug Media Campaign. The campaign aimed to prevent the initiation of or curtail the use of drugs among the nation's youth. In 2005, Westat, Inc., completed a multiyear national evaluation of the campaign. GAO has been mandated to review various aspects of the campaign, including Westat's evaluation, which is the subject of this report. Applying generally accepted social science research standards, GAO assessed (1) how Westat provided credible support for its findings, as well as Westat's findings about (2) attitudes, beliefs, and behaviors of youth and parents toward drug use and (3) youth self-reported drug use. GAO's review of Westat's evaluation reports and associated documentation leads to the conclusion that the evaluation provides credible evidence that the campaign was not effective in reducing youth drug use, either during the entire period of the campaign or during the period from 2002 to 2004 when the campaign was redirected and focused on marijuana use. By collecting longitudinal data--i.e., multiple observations on the same persons over time--using generally accepted and appropriate sampling and analytic techniques, and establishing reliable methods for measuring campaign exposure, Westat was able to produce credible evidence to support its findings about the relationship between exposure to campaign advertisements and both drug use and intermediate outcomes.
In particular, Westat was able to demonstrate that its sample was not biased despite sample coverage losses, maintained high follow-up response rates of sampled individuals to provide for robust longitudinal analysis, established measures of exposure that could detect changes in outcomes on the order of magnitude that ONDCP expected for the campaign and that could reliably measure outcomes, and used sophisticated statistical methods to isolate causal effects of the campaign. Westat's findings on the effects of exposure on intermediate outcomes--theorized precursors of drug use--were mixed. Specifically, although sampled youth and parents' recall of campaign advertisements increased over time, they had good impressions of the advertisements, and they could identify the specific campaign messages, exposure to the advertisements generally did not lead youth to disapprove of using drugs and may have promoted perceptions among exposed youth that others' drug use was normal. Parents' exposure to the campaign led to changes in beliefs about talking about drug use with their children and the extent to which they had these conversations with their children. However, exposure did not appear to lead to increased monitoring of youth. Moreover, the evaluation was unable to demonstrate that changes in parental attitudes led to changes in youth attitudes or behaviors toward drug use. Westat's evaluation indicates that exposure to the campaign did not prevent initiation of marijuana use and had no effect on curtailing current users' marijuana use, despite youth recall of and favorable assessments of advertisements. Although general trend data derived from the Monitoring the Future survey and the Westat study show declines in the percentage of youth reportedly using marijuana from 2002 to 2004, the trend data do not explicitly take into account exposure to the campaign, and therefore, by themselves, cannot be used as evidence of effectiveness. 
In Westat's evaluation of relationships between exposure and marijuana initiation, the only significant finding was of small unfavorable effects of campaign exposure on marijuana initiation during some periods of data collection and in some subgroups.
Background The Department of the Interior’s Bureau of Reclamation constructs and operates water resource projects to provide water to arid lands in 17 western states. Construction, operation, and maintenance of these projects are financed primarily with federal funds. The Bureau provides water for irrigation purposes to state-established water and irrigation districts that obtain the water under contracts and distribute it to farmers. The Bureau also provides water for municipal and industrial (M&I), fish and wildlife habitat, recreation, and power generation purposes. Through water service or repayment contracts, the Bureau, over time, recoups a portion of the federal government’s costs of providing the water. These contracts provide for both the repayment of construction costs and the operation and maintenance (O&M) costs allocated to water supply. The Bureau’s costs of constructing, operating, and maintaining its projects are classified as either reimbursable or nonreimbursable. Reimbursable costs are recovered from customers; nonreimbursable costs are borne by the federal government. Generally, costs associated with supplying water for agriculture, M&I, and hydroelectric generation purposes are reimbursable. Costs related to flood control, recreation, and fish and wildlife enhancement generally are nonreimbursable. While the Bureau recovers costs from water customers through rates, cost recovery for power generation is done by the Department of Energy’s Power Marketing Administrations (PMAs). Water marketing costs are charged to customers as part of the Bureau’s total O&M costs. The Bureau defined the O&M activities associated with its projects in a 1998 report to Congress by describing its complex mission and by providing examples of the widely varied activities it conducts.
The report stated that the Bureau is “responsible for the O&M of an extensive infrastructure of constructed facilities, including diversion and storage dams, pumping plants, powerplants, canals and laterals, pipelines, and drains.” Further, it said that the Bureau is “also responsible for management of the federally owned lands on which these facilities are located and for the natural and cultural resources of those lands.” The water marketing component of the Bureau’s O&M reflects this diversity. Water marketing costs are the result of many distinct activities. The Central Valley Project (CVP), located in California’s Central Valley Basin, is one of the Bureau’s largest multiple purpose water projects. The CVP is technologically complex, consisting of numerous dams, reservoirs, canals, and pumping and power-generating facilities. Historically, the CVP has provided about 6 million acre-feet of irrigation water each year to approximately 3.8 million acres of cropland. Water used for irrigation purposes represents about 85 percent of the total water available through the CVP; the remainder is used for M&I, fish and wildlife, and recreation. CVP water is also used to generate power. The Bureau enters into contracts to provide water to irrigation and M&I customers. These contracts establish a rate for each acre-foot of water with the intent to recover the CVP construction costs allocated to water supply as well as the annual O&M costs. The Bureau annually sets rates for both M&I and irrigation water customers. The rate depends upon the extent and type of services provided to the customer by the Bureau and consists of a number of rate components (or cost pools) that correspond to the water services provided.
The rate components generally are water storage: the cost of facilities (primarily dams and reservoirs) associated with the collection and storage of water; and water marketing: the costs incurred for monitoring, administering, and negotiating water service contracts; record-keeping and accounting; developing annual water rates; conducting environmental studies; and performing other related activities. Other costs involved in delivering water to customers include conveyance (associated with facilities such as canals used for transporting water), conveyance pumping (associated with facilities such as pumping plants used to pump water through the project to more than one customer), and direct pumping (associated with plants that pump water exclusively for specific customers). “Project Use Energy” costs are primarily associated with conveyance pumping and direct pumping. Direct pumping costs are paid by customers through the direct pumping rate component charged by the Bureau. Since fiscal year 1998, conveyance and conveyance pumping costs typically have not been charged to customers through O&M rates but instead have been billed directly to the customers using these services. As a result, these costs are not included in our analyses of water marketing costs and their impact on O&M costs. The Bureau sets rates based on projected water deliveries and estimated costs. Typically, the Bureau uses its budget request data in determining its estimated costs. After allocating the total estimated costs to irrigation, M&I, and the other uses, the Bureau calculates the O&M rate for each component by dividing the component’s estimated total annual cost by the applicable estimated water deliveries. After the actual water deliveries and the Bureau’s actual costs have been determined, the customers’ payments are applied against the Bureau’s costs as specified by the CVP ratesetting policies.
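The per-component rate arithmetic described above can be sketched as follows. This is an illustrative simplification with hypothetical dollar and delivery figures, not the Bureau’s actual ratesetting model:

```python
# Illustrative sketch of the CVP O&M per-component rate calculation
# described above: estimated annual cost divided by estimated deliveries.
# The figures below are hypothetical, chosen only for illustration.

def omn_rate_per_acre_foot(estimated_annual_cost: float,
                           estimated_deliveries_af: float) -> float:
    """Return the O&M rate (dollars per acre-foot) for one rate component."""
    return estimated_annual_cost / estimated_deliveries_af

# Hypothetical: $13.8 million estimated water marketing cost spread over
# 2.0 million acre-feet of projected deliveries.
rate = omn_rate_per_acre_foot(13_800_000, 2_000_000)
print(f"${rate:.2f} per acre-foot")  # prints "$6.90 per acre-foot"
```

Because the rate is built from estimates, the eventual settlement of each customer’s account against actual costs and deliveries happens later, as the ratesetting policies described above provide.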
The Bureau’s budget, ratesetting process, and application of payments are graphically depicted in appendix II. In recent years, some Bureau irrigation and M&I water customers have raised concerns about O&M rates charged for CVP water. The rates have been increasing faster than inflation, and the customers have expressed concern about the impact of these costs on their businesses. The rate component causing much of their concern relates to activities categorized by the Bureau as “water marketing.” The water marketing component of rates has increased significantly over the past decade and CVP customers have voiced concern, saying that they do not understand the underlying reasons for the increases. Objectives, Scope, And Methodology For each of the four objectives of this report, we were asked to address specific issues. Regarding the activities and costs that comprise water marketing, we were asked to (1) determine the general types of activities that were historically categorized as water marketing, compared to the types of activities categorized as water marketing today; and (2) provide detailed information on water marketing activities and costs for fiscal years 1998 and 1999 and obtain the projections for future years, if available. Regarding the trend in water marketing costs, we were asked to determine annual increases or decreases in the water marketing component of rates charged under water service contracts for fiscal years 1990 through 2000, and identify reasons for any significant trends. Regarding the legal basis for these costs, we were asked to determine the Bureau’s legal basis for charging the costs of water marketing activities to water customers. Regarding the information provided to water customers, we were asked to determine generally what information is provided to customers about water marketing activities and costs, including information pertaining to fiscal year 2000 and any projections for future years. 
We were also asked to identify options that would enable the Bureau to provide customers with better information. To determine the activities and costs that comprise water marketing and determine the trend in water marketing costs (objectives 1 and 2), we reviewed and analyzed various Bureau documents. We analyzed historical water ratesetting books showing budgeted amounts for fiscal years 1990-2001, actual water marketing amounts for fiscal years 1992-1999, ratesetting policy manuals, and Mid-Pacific Region files pertaining to water marketing activities at CVP. We also interviewed Mid-Pacific Region officials. To determine the legal basis for charging the costs of water marketing activities to water customers (objective 3), we analyzed Reclamation law, project-specific legislation, and authorizing legislation applicable to selected activities. We also reviewed Office of Management and Budget (OMB) Circular A-25 and federal accounting standards for federal guidance as to what constitutes the full cost of an agency’s provision of goods and services. To determine the information provided to water customers, and identify options for improving it (objective 4), we analyzed examples of documents provided to customers and reports prepared by customer groups. We also interviewed Bureau officials and representatives of CVP water customers. We conducted our review from June 2000 to March 2001 in accordance with generally accepted government auditing standards. We performed audit work at the Bureau’s Mid-Pacific Region, which manages and operates CVP. We also interviewed officials from the CVP Water Association, which represents customers of CVP. We did not verify the accuracy of all the data we obtained and used in our analyses. Most of the data presented in this report relate to budget projections; it was beyond the scope of the assignment to verify linkage of budget data to the audited financial statements.
We have considered written comments from the Department of the Interior and revised our report, as appropriate. The department’s comments are reproduced as appendix III. The department also provided technical comments, which we incorporated as appropriate but did not reproduce. Appendix I describes our objectives, scope, and methodology in detail. Water Marketing Includes Two General Categories Of Activities Our analysis concluded that CVP water marketing costs stem from two general categories of activities: system-wide activities and contract administration activities. Water marketing traditionally consisted of contract administration activities but the category has grown to include a variety of others, as represented by the system-wide category. System-wide activities are categorized as “water marketing” in order to be “spread” to all customers for repayment. Some system-wide costs arise as new mandates are passed that the Bureau must implement, such as costs stemming from the CVP Improvement Act (CVPIA), which was passed in 1992. Other system-wide costs, such as those classified as general expense, have been reclassified as water marketing to assure that each CVP customer pays a proportionate share of the cost involved. In addition, some water conservation costs, which the Bureau had previously treated as nonreimbursable, have been reclassified as water marketing and charged to water customers. As shown later, when comparing the amount paid for water marketing in fiscal year 1996 to the amount paid in fiscal year 2001, the general expense reclassification accounted for about 12 percent of the total increase for irrigation customers and about 15 percent of the increase for M&I customers. Similarly, the reclassification of water conservation costs accounted for about 11 percent of the total increase in the water marketing rate component for irrigation customers and about 12 percent of the increase for M&I customers. 
Contract administration activities, according to Bureau officials, can be considered to be “traditional” water marketing activities. These include the recurring activities involved in managing the terms of the Bureau’s contracts with its customers, such as costs incurred for monitoring, administering, and negotiating water service contracts, maintaining water delivery and payment records, accounting for the annual financial results of CVP water operations, and developing annual water rates. Since 1995, the Bureau has also been incurring significant contract renewal costs as the long-term contracts held with customers expire. The Bureau allocates the costs of system-wide and contract administration water marketing activities into several sub-categories. The sub-categories relate to: programs, such as water quality monitoring, water conservation, and hazardous materials management; legislation, such as the National Environmental Policy Act (NEPA), Endangered Species Act (ESA), and CVPIA; CVP divisions or offices, such as water and power operations, water and land resources management, and general contract administration and contract renewal; and project-wide activities, such as those included in the miscellaneous project, CVP-wide, and general expense sub-categories. Table 1 shows examples of specific activities associated with each category and sub-category. Water Marketing O&M Rate Component Has Increased Significantly The water marketing component of O&M rates has been increasing steadily since 1989 for both irrigation and M&I customers, with significant increases since 1995. The increases are due to rising costs in both the system-wide and contract administration categories. As a result, water marketing costs currently make up the largest share of CVP’s O&M rates. Figure 1 shows that the water marketing component of rates has been increasing for both irrigation and M&I customers. 
As the figure shows, the rate for irrigation customers increased from $.20 per acre-foot in 1989 to $6.91 per acre-foot in 2001. M&I customers experienced a similar increase—from $.21 per acre-foot in 1989 to $7.00 per acre-foot in 2001. Since fiscal year 1998, the total O&M rate has typically consisted of the water marketing and storage rate components. Figure 2 compares the water marketing and storage rate components for irrigation. As figure 2 shows, while both water marketing and storage rate components have increased for irrigation customers since 1989, water marketing has overtaken storage to become the largest rate component. Water marketing comprised about 11 percent of the combined total of the water marketing and storage rate components in 1989 and about 62 percent in 2001. For M&I customers, water marketing comprised about 12 percent of the combined total of the water marketing and storage rate components in 1989 and 61 percent in 2001. The trend in inflation-adjusted dollars is similar to that in nominal dollars. As shown in figure 3, water marketing and storage costs have increased and water marketing has become the largest rate component. Converting the nominal dollars into constant year 2000 dollars shows that the irrigation water marketing rate ranged from $.26 in 1989 to $6.17 in 2000, and the storage rate from $2.17 per acre-foot in 1989 to $4.27 per acre-foot in 2000. The total inflation-adjusted irrigation O&M rate (water marketing plus storage) would have ranged from $2.43 per acre-foot in 1989 to $10.44 in 2000. To demonstrate more specifically which categories and sub-categories of water marketing contributed most to cost increases, we reviewed in detail the costs for irrigation and M&I users for fiscal years 1996 and 2001. Tables 2 and 3 show the costs of irrigation and M&I-related water marketing activities for fiscal years 1996 and 2001 and the amount of change over that period of time.
As shown in tables 2 and 3, both categories of costs—system-wide and contract administration—have contributed to the increase in irrigation and M&I water marketing costs since 1996. The costs related to system-wide activities have been the primary driver of the water marketing cost increases, rising from $5.3 million in 1996 to $16.6 million in 2001 for irrigation, and from $.9 million in 1996 to $2.3 million in 2001 for M&I. A wide range of activities has contributed to the overall increase in water marketing costs over time, and the mix of activities responsible for the increase differs from year to year. However, certain activities resulted in significant costs over the 1996-2001 time period. These include costs within both the system-wide and contract administration categories, such as the costs of new technology; costs associated with contract renewals; costs of environmental assessments; and costs stemming from new or increased emphasis on initiatives such as water conservation and water quality monitoring. Costs classified as general expenses have also contributed to the water marketing increase. As discussed previously, general expenses involve a variety of project-wide costs that, until fiscal year 2000, had been allocated to other rate components such as water storage. Beginning in fiscal year 2000, general expenses have been included in the water marketing rate calculation and recovered from all customers. The effect of this reclassification of general expenses to water marketing is shown in tables 2 and 3. When comparing the amount paid for water marketing in fiscal year 1996 to the amount paid in fiscal year 2001, the general expense reclassification accounted for $1.65 million (about 12 percent) of the $13.66 million total increase in water marketing costs for irrigation customers. Similarly, the general expense reclassification accounted for $234,000 (about 15 percent) of the $1.52 million total increase in water marketing costs for M&I customers.
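The general expense share-of-increase percentages cited above can be verified with simple arithmetic (dollar amounts taken from the fiscal year 1996 to 2001 comparison; the calculation itself is our illustration, not the Bureau’s):

```python
# Verifying the general expense reclassification's share of the
# FY 1996-2001 water marketing increase, using the figures cited above.
irrigation_total_increase = 13.66   # $ millions, irrigation
irrigation_general_expense = 1.65   # $ millions, irrigation

mi_total_increase = 1.52            # $ millions, M&I
mi_general_expense = 0.234          # $ millions, M&I

irr_share = irrigation_general_expense / irrigation_total_increase * 100
mi_share = mi_general_expense / mi_total_increase * 100
print(round(irr_share))  # prints 12 (about 12 percent)
print(round(mi_share))   # prints 15 (about 15 percent)
```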
In addition, in fiscal year 1999 the Bureau reclassified water conservation costs to water marketing and began to recover them through rates charged to water customers. These costs had previously been classified as nonreimbursable and therefore had not been recovered. The effect of this reclassification of water conservation to water marketing is shown in tables 2 and 3. When comparing the amount paid for water marketing in fiscal year 1996 to the amount paid in fiscal year 2001, the water conservation reclassification accounted for $1.50 million of the $13.66 million (about 11 percent) increase in water marketing costs for irrigation customers. Similarly, the water conservation reclassification accounted for $181,000 of the $1.52 million (about 12 percent) increase in water marketing costs for M&I customers. Combined, the reclassification of general expense and water conservation accounted for $3.15 million of the $13.66 million (about 23 percent) increase in water marketing costs for irrigation customers when comparing fiscal year 1996 to fiscal year 2001. The reclassification of general expense and water conservation accounted for $415,000 of the $1.52 million (about 27 percent) increase in water marketing costs for M&I customers. The following list provides examples of activities responsible for water marketing budget increases during fiscal years 1996 to 2001, as described in the Bureau’s explanations to its customers.

Water & power operations (increase: irrigation $2.96 million; M&I $.37 million):
- Replacement of the Centralized Water and Power System Control over a 5-year period with new equipment and software that controls the dams and other facilities. Both the old and new systems were run in parallel during the conversion period.
- Increase in the Hydromet System (which provides precipitation, temperature, and snow water content data) for new equipment and to fully fund the Bureau’s contract with the State of California for snow surveys.
- Developing a new CVP Operating Criteria and Plans for consultation under Section 7 of the ESA, requiring technical assistance.
- Required conversion of the CVP radio systems to a new system due to a change in operating frequencies.

NEPA/ESA compliance (increase: irrigation $2.26 million; M&I $.30 million):
- Implementation of biological opinions in conjunction with contract renewals.
- Implementation of endangered species conservation plans.

Water & land resources management (increase: irrigation $1.70 million; M&I $.22 million):
- Increased resources for activities including review of proposals for joint use of lands, wetlands development, and ground-water recharge.
- Extensive coordination with water users; other federal, state, and local agencies; and special interest groups.
- Increased demand for services such as irrigation and drainage technical assistance, land classification, crop reports, and emergency response planning.

Water conservation (increase: irrigation $1.50 million; M&I $.18 million):
- Establishment of a water conservation office whose staff provides technical support and services for water conservation measures.

Other CVP-wide programs (increase: irrigation $1.36 million; M&I $.19 million):
- Replacement of narrow band radios and CO2 fire suppression systems.
- Increased coordination costs of the CVP and State Water Project models and data gathering.
- Increased demand for Geographical Information Systems in support of water management and development activities.

Budgeted Water Marketing Costs Differ from Actual Costs In some years the Bureau’s projected water marketing costs, which are the basis for customers’ rates, have varied significantly from actual costs. Actual costs are affected by unanticipated O&M needs. Depending on the customers’ water needs, actual water deliveries can vary from projected deliveries.
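Table 4 reports the differences between projected and actual costs as a percentage of the projected amount. A minimal sketch of that computation, using hypothetical figures rather than the Bureau’s actual data:

```python
# Sketch of the projected-vs-actual variance measure used in table 4:
# the difference expressed as a percentage of the projected amount.
# The example figures are hypothetical.

def variance_from_projected(projected: float, actual: float) -> float:
    """Percent by which actual costs differ from projected costs.
    Positive values indicate the costs were overestimated."""
    return (projected - actual) / projected * 100

# Hypothetical: $9.3 million projected vs. $5.8 million actual.
print(round(variance_from_projected(9.3, 5.8), 1))  # prints 37.6
```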
The uncertainty of estimating O&M costs in advance of the year’s actual events will result, as shown in table 4, in projected costs varying from actual costs. As shown in table 4, actual water marketing costs differed from projected costs for each of fiscal years 1992 through 1999. For irrigation, the differences ranged from about $80,000 in 1995 (2.0 percent from the projected amount) to almost $3.5 million in 1996 (37.7 percent from the projected amount). The difference between the Bureau’s projected and actual costs has ranged from underestimating by 9.4 percent to overestimating by 37.7 percent. For M&I, the difference has ranged from about $4,000 in 1992 (1.2 percent from the projected amount) to $598,000 in 1997 (39.3 percent from the projected amount). The difference has ranged from underestimating by 1.2 percent to overestimating by 39.3 percent. Actual costs are allocated to each customer’s account based on actual water usage. The customers’ payments (which are paid in advance of water deliveries) are to be applied against the Bureau’s costs as specified by CVP ratesetting policies. If actual costs allocated to a customer exceed the customer’s payments, the deficit is recovered through adjustments to subsequent years’ rates. If payments exceed allocated costs, the overpayment is applied to any balances the customer has in the following accounts: unpaid prior year O&M costs; interest on unpaid prior year O&M costs; and construction costs. The order in which a customer’s payments are to be applied to these balances is different for irrigation and M&I customers and is shown in appendix II. Water Marketing Costs Are Recoverable As Normal Operations And Maintenance Costs Water marketing costs are charged to customers as part of the Bureau’s total O&M costs. The Bureau’s ability to determine which O&M costs to charge to customers is governed by reclamation law, project-specific legislation, and specific provisions of contracts between the Bureau and water users.
Within these constraints, the Bureau has broad discretion to decide which O&M costs to charge to customers. Federal courts have confirmed the Secretary of the Interior’s broad discretion to define what can properly be assessed as an O&M expense. During the course of our review, we did not identify any activities that did not appear to be O&M related to CVP, or any costs that were precluded by law from recovery. OMB guidance and federal accounting standards also guide the Secretary. This guidance indicates that the full costs incurred by the federal government in providing services should be recovered from beneficiaries of those services, unless such cost recovery is precluded by law. For example, OMB Circular A-25 provides guidance for federal agencies to use in setting fees to recover the full costs of providing goods or services. OMB Circular A-25 defines full costs as all direct and indirect costs of providing the goods or services. This definition is consistent with that contained in federal accounting standards. The federal accounting standards define the full cost of an entity’s output as “the sum of (1) the costs of resources consumed by the segment that directly or indirectly contribute to the output, and (2) the costs of identifiable supporting services provided by other responsibility segments within the reporting entity, and by other reporting entities.” Applying the definitions of “full cost” used in OMB Circular A-25 and federal accounting standards indicates that the full cost of the water supplied by the Bureau includes all direct and indirect costs incurred in providing these services and that these costs should be recovered, except where precluded by law. We reviewed the Bureau’s exercise of its discretion to classify water marketing costs as O&M that is charged to customers for reimbursement.
We analyzed selected fiscal year 2001 Budget Activity Plans, and determined that the activities described in the plans could reasonably be defined as O&M related to CVP. We further analyzed several of these plans and determined that the described activities were not specifically excluded from recovery by law. Thus, our review of these documents did not identify any activities that were not O&M activities related to CVP or any costs that were precluded by law from recovery. However, in the course of our work we noted that the Mid-Pacific Region is not recovering certain costs associated with employee postretirement health benefits and Civil Service Retirement System employee pension costs related to reimbursable project purposes that, on May 31, 2000, we recommended the Bureau recover. These costs were estimated by the Mid-Pacific Region to total $709,000 for fiscal year 1999. As we previously recommended, these costs should be recovered as part of the Bureau’s normal O&M costs. Our recommendation is still under consideration by the Bureau. Customers Receive Substantial Information, But The Information Could Be More Complete The Mid-Pacific Region has made substantial progress in involving customers in the budget process. The region provides many documents to customers, responds to numerous questions about the budget and rates, and holds several budget update meetings annually. However, providing some additional information at the beginning of the budget process would help customers understand the Bureau’s planned activities and improve their ability to review and provide input on those activities.
The House Committee on Appropriations, reporting on the Energy and Water Development Appropriations Bill of 1998, stated that “The Committee strongly encourages the Bureau of Reclamation to create new opportunities for water and power customers to participate in the review and development of O&M budget priorities for their respective Bureau of Reclamation projects.” In response to this committee’s concerns, the Bureau’s Commissioner issued a directive dated September 24, 1998, that required Bureau regions to involve customers in the budget process. The directive states, in part, that the Bureau is to provide customers with “the opportunity to assist in formulating O&M activities and cost estimates, and setting priorities in which the customers share in the responsibility or pay a portion of the O&M cost with Reclamation.” The Mid-Pacific Region has involved customers in the budget process in a variety of ways. In 1999 and 2000, the region held meetings that discussed budget priorities and reviewed budget execution. The Bureau also provided information to customers about its ratesetting process, during which budget data are converted to water rates. Examples of budget and ratesetting information provided to customers are provided below. Budget priority information. The Bureau provides detailed descriptions of activities planned for the upcoming budget year to customers. This information is based upon Budget Activity Plans, which are submitted by program managers at both regional and area office levels. The major informational categories in the Budget Activity Plans include: the name of the activity; whether the activity is new or ongoing; which appropriation will fund the activity; a description of the activity and justification for undertaking it; laws authorizing the activity; and estimated costs for future years. At the outset of the budget process, the Bureau asks the customers to prioritize the activities described in Budget Activity Plans.
In preparing the final budget, the Bureau considers customer input and other factors such as Bureau-wide priorities and the amount of funding available for the year. We analyzed the fiscal year 2001 Budget Activity Plans the Bureau identified as involving water marketing activities, and found that the preparers had generally provided the information called for by the various categories in the document. However, we found that although the plans call for budget estimates for future years, no information was given regarding previously budgeted costs. As a result, customers cannot determine whether the estimated amounts represent increases in previous estimates. In addition, it was not possible to determine whether the described activity was reimbursable—that is, whether the irrigation or M&I customer would have to pay for the costs of the activity. Budget execution information. The Bureau provides information that compares budgeted costs to actual costs during the year. This provides customers with information regarding whether the Bureau is providing services within budgeted amounts. For example, the information provided to customers at the region’s March 2000 meeting included:
- Water marketing budgeted costs compared to expenditures and obligations through February 2000 (42 percent of the year complete). The information was categorized by type of activity and by the CVP office responsible for the program.
- Budgeted costs compared to expenditures and obligations for non-water marketing rate components, such as storage.
- Adjustments made to budgeted costs because of reclassification of costs from storage to water marketing and analysis of reimbursable costs that result in the final rates for irrigation and M&I.
- Information on M&I and irrigation expenditures and obligations broken down into labor and non-labor as of the end of February 2000.

Ratesetting information. The Bureau prepares a ratebook each year that details the O&M rate applicable to each customer.
Customers and CVP Water Association staff review and comment upon the ratebook, which specifically identifies the customer’s water marketing costs. The ratebook also includes tables showing each customer’s capital and energy costs for the year; historical and projected water deliveries; and prior year actual cost data. In addition, the Bureau:
- Responds to frequent written and telephonic questions from customers about rates.
- Regularly attends CVP Water Association meetings, at which rates are a primary subject of discussion. The CVP Water Association includes representatives of the CVP’s largest customers, and is the customers’ focal point for interaction with the region on rates.
- Prepares an analysis, which is summarized in the ratebook, that identifies significant changes from the preceding year’s budget and determines if any significant changes have occurred in the budget as of the time that the annual water rates are computed.

Conclusions Water marketing costs have increased significantly, but we found no evidence that the costs were derived from activities other than normal O&M activities that are recoverable under applicable law. However, our analysis of the information provided to water customers confirmed that the customers were not able to determine whether (1) budgeted activities were ones that would actually be charged to them, and (2) budgeted amounts for the coming year’s activities represented increases in previous estimates.
Recommendation The Regional Director of the Bureau of Reclamation’s Mid-Pacific Region should ensure that appropriate regional personnel:
- Provide additional budget information to customers at the beginning of the budget process that would enable them to determine whether the budgeted activities are ones that will actually be charged to them; and
- Provide additional information during the subsequent ratesetting process to enable customers to determine whether the amounts estimated for budget activities represent changes in prior year estimates.

Agency Comments And Our Evaluation We provided the Department of the Interior an opportunity to comment on a draft of this report. The Department, in a letter from the Acting Chief of Staff to the Assistant Secretary for Water and Science, generally agreed with our report, but suggested that we clarify the wording of our recommendation with respect to the timing of information to be provided to customers. The Department suggested that the recommendation for informing customers whether the estimated costs of the upcoming year’s activities represent changes in previous estimates be revised so that this information is provided not at the beginning of the budget development process, but rather during the subsequent ratesetting process. We considered this a reasonable view. Accordingly, we are calling on the Bureau to (1) provide customers information at the beginning of the budget process necessary to determine whether the budgeted activities are ones that will actually be charged to them, and (2) provide information in the subsequent ratesetting process to enable customers to determine whether the amounts estimated for budget activities represent changes in prior year estimates. The Department also stated that the Bureau considers the recommendation to be implemented. We agree that the Bureau has begun implementing the recommendation by capturing the necessary information in its database.
However, implementation will not be complete until the Bureau actually provides the customers with the recommended information. The Department’s comment letter is reproduced in appendix III. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this letter until 10 days from its date. At that time, we will send copies to appropriate House and Senate Committees; interested Members of Congress; The Honorable Mitchell E. Daniels, Jr., Director, Office of Management and Budget; and other interested parties. This letter will also be available on GAO’s home page at http://www.gao.gov. We will also make copies available to others upon request. Please call me at (202) 512-9508 if you or your staff have any questions. Major contributors to this report are listed in appendix IV. Appendix I OBJECTIVES, SCOPE, AND METHODOLOGY We were asked to address specific issues related to the Bureau’s water marketing activities and costs at CVP, including (1) the activities and costs that comprise water marketing, (2) the trend in associated costs, (3) the legal basis for charging the costs of water marketing activities to water customers, and (4) the type of information about water marketing activities provided to water customers and ways to improve the information. Specifically, for each of these issues we were asked to address:

(1) Water Marketing Activities and Costs
- The general types of activities that the Bureau has categorized as water marketing, both currently and historically.
- The specific activities that comprised water marketing during fiscal years 1996 through 2000 and the charges associated with each activity.
- The projected water marketing charges for fiscal year 2001.

(2) Trends in Water Marketing Costs
- The annual changes in water marketing charges for fiscal years 1990 through 2000, and the reasons for any significant trends.
(3) Legal Basis for Charging Water Customers The Bureau’s legal basis for including costs classified as water marketing in water rates. (4) Information Provided to Customers The general information about costs classified as water marketing that the Bureau provides to its customers. The specific information provided regarding fiscal year 2000 water marketing costs. Options, if any, that would enable the Bureau to provide customers with more detailed information. We addressed these issues in the following ways. (1) Water Marketing Activities and Costs Interviewed Bureau officials regarding changes in water marketing over time. Reviewed applicable definitions in law, manuals, and written policies. Reviewed Bureau budget documents that listed and described the specific work activities constituting the water marketing category of costs. (2) Trends in Water Marketing Costs Obtained Bureau documents showing annual water marketing charges for fiscal years 1996 through 2000 and information on actual costs, to the extent available. Analyzed the data and prepared graphs to identify trends over time. Compared charges to actual costs (if available) and interviewed Bureau officials to determine reasons for substantial differences between charges and actual costs. Obtained projected water marketing charges for 2001, and compared them to prior years. (3) Legal Basis for Charging Water Customers Reviewed applicable law and documents and interviewed Bureau officials to determine whether water marketing is defined in law. Reviewed prior GAO work regarding the legal requirement that O&M costs be recovered in water rates. Reviewed selected water marketing activities to determine whether any law exists that excludes the specific activity from being recovered in rates. Interviewed Bureau officials regarding water marketing cost recovery. 
(4) Information Provided to Customers Reviewed documentation of meetings between the Bureau and customers in fiscal year 2000 regarding water marketing charges. Analyzed documents provided to customers to determine whether they described the types of work being charged to customers as a cost of water marketing. Interviewed Bureau officials and representatives of the CVP Water Association, which represents CVP customers, to determine whether more detailed water marketing cost information was warranted. We also reviewed information provided by the CVP Water Association. We conducted our review at the Bureau’s Mid-Pacific Region, which operates and manages CVP, from June 2000 through April 2001 in accordance with generally accepted government auditing standards. To the extent practical, we corroborated data provided by the Bureau through interviews and by comparing amounts to amounts shown on audited financial statements. However, in most cases we did not verify the accuracy of the data provided. We have considered written comments from the Department of the Interior and revised our report, as appropriate. Appendix II O&M Water Rate Setting & Application of Payments [Figure: timeline of the O&M water rate-setting and application-of-payments process, showing the President’s budget sent to Congress in February, the fiscal year beginning in October, and a 60-day interval] Appendix III Comments from the Department of the Interior Appendix IV GAO Contact and Staff Acknowledgments GAO Contact Acknowledgments In addition to the individual named above, Dave Bogdon, Brian Eddington, Edda Emmanuelli-Perez, Larry Feltz, Jeff Jacobson, Mary Merrill, and Maria Zacharias made key contributions to this report.
This report discusses the water marketing activities of the Bureau of Reclamation's Central Valley Project and their associated costs. Water marketing costs have risen significantly since 1989, but GAO found no evidence that the costs were associated with activities other than normal operation and maintenance activities that are recoverable from water customers under applicable law. GAO reviewed the information provided to customers and found that the customers were unable to determine whether (1) budgeted activities were the ones that would actually be charged to them and (2) budgeted amounts for the coming year's activities represented increases in previous estimates.
Background The VA health care system, established in 1930, is one of the nation’s largest direct delivery systems. VA’s health care facilities provide services to veterans both with and without service-connected disabilities. Individual facilities vary widely in the inpatient, outpatient, and long-term care services they provide. For example, some facilities provide only basic clinical care, whereas others have capabilities to provide special care such as for organ transplants, spinal cord injuries, or chronic mental illness. VA historically allocated funds to its facilities on the basis of the facilities’ past expenditures, with incremental increases for such factors as inflation and new programs. Beginning in 1985, VA modified its allocation system because it recognized the need to more directly relate funding to the work performed and the cost to perform it, and to improve the efficiency and productivity with which medical care is delivered to veterans. The Resource Allocation Methodology (RAM) was VA’s first attempt to better link resources to workload. VA ended RAM in 1989 because of concerns that facilities had inappropriate incentives to perform work beyond their resources, possibly affecting quality of care and resulting in a budget crisis at some facilities. Between 1990 and 1993, VA again based allocations on making incremental changes to facilities’ historical budgets. But to further its efforts to link resources and workload and to provide data that it could use to improve quality and efficiency in the system, VA implemented its current RPM system for the fiscal year 1994 allocation process. System Goals and Design The Secretary of VA, in endorsing the new RPM system, stated he hoped it would improve VA’s management of limited medical care resources, enable VA to explore opportunities to improve quality and efficiency in its health care system, and better define future resource requirements. 
To those ends, VA’s stated goals for RPM are to (1) improve its resource allocation methodology, (2) move from retrospective to prospective workload management, and (3) reform medical care budgeting. VA has established high expectations for how RPM would improve the equity of its allocations. VA hoped to better link resources and facility workload, move to prospective workload management by forecasting workload changes, and provide for differences in facility efficiencies in the allocations. VA also hoped that by forecasting workload changes, it could better establish and justify its budget requests. VA envisioned the system overcoming inconsistencies in facilities’ provision of care to veterans by allowing for a more equitable distribution of resources to meet veteran needs systemwide. Finally, by identifying facility differences, VA intended that the system would provide managers with useful information, including the matching of resources to quality of care issues. Part of this effort to improve resource allocation involved linking the budget allocation process to VA’s strategic plan. The strategic plan was to be the driving force behind RPM, providing it with a set of goals, performance standards, and workload priorities. Furthermore, the system’s link to the strategic plan was intended to allow consideration of service distribution, practice patterns, geographic factors affecting costs, and access differences. RPM was also designed to be a patient-based system. It differs from past resource allocation processes in defining workload as patients served rather than as procedures performed—this is the basis for VA’s characterization of RPM as “capitation-based.” For resource allocation purposes, the RPM database, managed by VA’s Boston Development Center (BDC) in Braintree, Massachusetts, integrates workload data, case mix, and costs to project facility-specific resource needs. 
With significant input from VA managers in the field and Headquarters, BDC has developed a complex data analysis process to estimate facility unit costs. Generally, this process involves adjusting for case mix differences by classifying patients into clinical classes and groups, forecasting changes by class in the numbers of patients served, and developing average costs per patient type that are then applied to the number of expected patients in each group to achieve a preliminary budget estimate. The facility estimates are adjusted to reflect inflation, VA regional input, and facility efficiencies. A further discussion of the RPM system is in appendix II. Because resource allocation is a sensitive and complex undertaking in VA’s health care system, VA has made a considerable investment in it. Significant VA Headquarters and field managers’ time and effort is spent adjusting the RPM methodology from year to year—one reason the process is continually changing. In addition to the 26 BDC staff responsible for data processing and education efforts, VA Headquarters chief financial officer, quality management, operations, and clinical staff also provide input to the process through frequent meetings. Facility directors sit on the RPM Field Oversight Committee, a group of about 15 managers (with 10 to 20 support staff and visitors usually present) who meet regularly to discuss implementation issues. Six technical assistance groups comprising physicians and other clinicians in each clinical area generally represent the RPM clinical groups and advise on clinical issues such as the classification of patients. Other RPM committees include a Planning Group and a Financial Advisory Group, which assist in determining forecasting methodologies and advise on the correct allocation of costs. 
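The capitation-style estimation process described above can be sketched roughly as follows. This is purely illustrative: the clinical classes, patient counts, costs, and adjustment factors are invented, and the actual RPM classification and forecasting methods are far more detailed.

```python
# Illustrative sketch of RPM's preliminary budget estimate: average cost
# per patient type applied to the forecasted number of patients in each
# class, then adjusted. All names and figures are hypothetical.

# Average cost per patient, by clinical class (hypothetical)
avg_cost = {"outpatient": 1_200, "chronic_mental_illness": 18_500, "hiv": 24_000}

# Forecasted number of patients the facility will serve, by class
forecast = {"outpatient": 9_400, "chronic_mental_illness": 310, "hiv": 85}

# Preliminary estimate: expected patients times average cost, summed over classes
preliminary = sum(avg_cost[c] * forecast[c] for c in forecast)

# The estimate is then adjusted for inflation and for regional input and
# efficiency factors (collapsed here into one multiplier, an assumption)
inflation, adjustment = 1.03, 0.98
facility_estimate = preliminary * inflation * adjustment
print(round(facility_estimate))
```

The key point the sketch captures is that workload is defined as patients served, not procedures performed, which is the basis for VA's "capitation-based" characterization.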
While many parties provide input to the RPM process, the Budget Policy and Review Committee, comprising VA associate chief medical directors and other senior VA managers, makes the final recommendation on the resource allocation methodology, which the Under Secretary for Health approves. System’s Potential to Improve Equity May Not Be Realized The resource allocation system shows mixed results with regard to the two aspects of equity that we examined. The system design produces data that point to potential inequities so that VA can better link resources to facility workloads. However, VA has not yet used the system for this purpose. VA has not designed the system to address the goal of providing greater consistency in veterans’ systemwide access to services. System Design Provides Data on Potential Inequities The resource allocation system provides VA managers with data that compare facility costs on a standardized workload unit basis and that, in this way, could point to potential inequities in allocations. Through an outlier process, the system identifies facility cost differences, a feature that allows VA to reallocate monies from the budgets of the highest cost facilities to those with the lowest costs. VA places facilities into one of nine facility groups that it considers comparable based on a complex consideration of factors such as affiliation with teaching facilities and size. Then, to provide a fair comparison, the system “levels the playing field” by adjusting for differences among facilities such as case mix, locality costs, salaries, training, and research. After adjustments are made, the system considers variations in workload costs to be more indicative of efficiency differences than facility characteristics. Comparative data show that even after adjustments are made, significant facility cost variations remain. Variations typically ranged 30 percent or more within each facility group. 
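The peer-group outlier comparison described above can be sketched as follows. The facility labels, costs, and 15-percent threshold are all invented for illustration; RPM's actual adjustments for case mix, locality, salaries, training, and research happen before this comparison.

```python
# Hypothetical sketch of flagging high- and low-cost outliers within a
# group of comparable facilities. All figures are invented.

adjusted_cost = {  # adjusted cost per standardized workload unit
    "A": 4_800, "B": 5_100, "C": 5_300, "D": 6_500, "E": 4_300,
}
mean = sum(adjusted_cost.values()) / len(adjusted_cost)

# Flag facilities whose adjusted cost deviates from the group mean by
# more than a chosen threshold (15 percent here, an assumption)
threshold = 0.15
outliers = {f: cost / mean - 1 for f, cost in adjusted_cost.items()
            if abs(cost / mean - 1) > threshold}
print(outliers)
```

In this sketch, facility D is a high outlier and facility E a low outlier, illustrating how the process could support moving money from the highest-cost to the lowest-cost facilities in a group.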
Appendix III and figure III.3 provide an example of the variations the RPM data show in adjusted costs per workload for one group of facilities that VA considered comparable. Another important aspect of the RPM system is its ability to forecast workload changes. For each patient class, the system forecasts the number of patients that facilities are likely to see, based on historical trends. The forecasting process recognizes that facility workloads are changing at relatively different rates and that facilities’ clinical workloads or “case mix” are also changing. For example, the system forecasts that patient classes for pulmonary disease patients or ear, nose, and throat patients are generally expected to decrease in fiscal years 1994 and 1995, whereas classes for outpatients or human immunodeficiency virus (HIV) patients are expected to increase. System forecasts showed rates of change for total patients expected to be seen at facilities ranging from an 8-percent decrease to a 15-percent increase between 1993 and 1995. VA Has Done Little to Change Past Facility Allocations Despite cost variations and differing workload changes among facilities reflected in RPM data, VA has done little to use the data to change facility allocations. We estimate that 1 percent was the maximum real decrease in allocations that any facility had in 1995 based on RPM budget adjustments. While one facility gained as much as 3.4 percent through the process, the average uninflated gain was also about 1 percent. Facility budget changes for RPM Facility Group 5 are shown in figure 1. Appendix IV contains data for facilities nationwide. VA made two significant decisions that limited the resource allocation adjustments to facilities’ budgets: By limiting the movement of resources between the high- and low-cost facilities, VA in effect allowed the wide variations in patient costs among facilities to continue. 
VA limited the amount of dollars moved between high- and low-cost facilities to $10 million in fiscal year 1994 and $20 million in fiscal year 1995. VA did not include enough resources in its RPM allocations to fully fund all the facilities’ expected needs and distributed the shortfall by limiting the amount of resources given to those facilities with growing workloads. Furthermore, for those facilities with decreasing workloads, VA chose to limit their budget decreases. These decisions led to funding for the projected cost of increased workload at approximately 17 cents on the dollar. At the same time, facilities with decreasing workloads were given more money than needed to support the forecasted workload. Appendix III discusses the impact of this decision in further detail. Both of these decisions on VA’s part had a greater impact on those facilities that historically had received less funding for their workloads—and therefore were shown to have lower workload costs—than those that had relatively faster growing workloads. For example, the Carl T. Hayden Medical Center’s adjusted workload costs were 16.8 percent lower than those of other facilities that VA considered comparable in mission and size, and its forecasted workload growth was 4.5 percent—third highest among comparable facilities between 1993 and 1995. However, because of VA’s decisions that limited the reallocation of funds, Carl T. Hayden experienced a 2.2-percent increase in uninflated funding between 1993 and 1995. By comparison, the Long Beach Medical Center—the “high outlier” in the same comparative facility group as Carl T. Hayden—had adjusted workload costs that were 13.9 percent higher than other facilities and a forecasted workload decrease of approximately 1 percent. Long Beach’s funding decrease was less than 1 percent in 1995 (before the inflation adjustment). A further discussion and data related to the system’s provision of funding for workload are in appendixes III and IV. 
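The shortfall-sharing effect described above, where growth is funded at roughly 17 cents on the dollar while budget decreases are softened, can be illustrated with hypothetical numbers. The dollar figures and the 50-percent cut rate are invented; only the 17-percent growth funding rate comes from the report.

```python
# Hypothetical illustration of partial funding of workload changes.
# A facility with growing workload receives only a fraction of its
# projected growth cost; a shrinking facility keeps part of the
# forecasted decrease. All dollar figures are invented.

facilities = {
    # facility: (prior budget, forecasted change in workload cost)
    "growing": (50_000_000, +4_000_000),
    "shrinking": (50_000_000, -3_000_000),
}

growth_rate = 0.17   # growth funded at ~17 cents on the dollar (per the report)
cut_rate = 0.50      # decreases applied at half value (an assumption)

allocations = {}
for name, (base, change) in facilities.items():
    rate = growth_rate if change > 0 else cut_rate
    allocations[name] = base + change * rate

print(allocations)
```

The sketch shows why the approach favors facilities with shrinking workloads: the growing facility absorbs most of its growth cost unfunded, while the shrinking facility retains money beyond its forecasted need.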
The System Does Not Address Veterans’ Unequal Access to Care Part of VA’s original plan for RPM was to use it to help alleviate inconsistencies in veterans’ access to outpatient care—a plan that has not materialized. Consequently, inconsistencies that we reported in the past are likely to remain, as demonstrated by differences in facilities’ ability to provide outpatient care. We reported in 1993 that veterans’ access to outpatient care at VA facilities varied widely—veterans within the same priority categories received outpatient care at some facilities but not at others. This occurred because VA facilities were given discretion to determine whether to ration, or limit, discretionary or nonmandated care when resources are insufficient to care for all veterans. While considerable numbers of veterans have migrated to southeastern and southwestern states, there was little shift in VA resources. As a result, facilities mainly in the eastern states were more likely to have adequate resources to treat all veterans seeking care than other facilities. VA facilities in other states have adapted by restricting veterans’ access to care. Our 1993 report found that 118 facilities indicated they rationed outpatient care for nonservice-connected conditions, while 40 facilities reported no rationing. The facilities that did ration used different methods to determine who got care. Some rationed on the basis of economic status, others on the basis of medical service or medical condition. Consequently, significant inconsistencies existed in veterans’ access to care both among and within centers. In responding to our report and in correspondence to the Congress, VA indicated that the RPM system would consider and help overcome inconsistencies among facilities in veterans’ access to outpatient care, allowing for a more equitable distribution of resources to meet outpatient needs systemwide. However, this vision remains unfulfilled. 
The system does not distinguish between facilities’ discretionary and mandatory workload in determining past and forecasting future workload. Consequently, the access problems we reported in 1993 are likely to have continued. VA management systems, however, still lack reliable data on facilities’ rationing or denial of care, which prevented us from confirming the extent to which the rationing we reported earlier still exists. But available data indicate that the ability of facilities to provide care to discretionary categories of veterans still varies. For example, fiscal year 1994 data indicate that although up to 13 percent of some facilities’ patients were veterans in a discretionary category because they had nonservice-connected conditions and higher incomes, other facilities treated none of these discretionary patients. Appendix V discusses these differences further. VA Barriers to Equitable Allocations Can Be Overcome VA officials offered a number of reasons for not reallocating larger percentages of dollars in fiscal years 1994 and 1995, which would have better linked resources to workload. These reasons included the need for a transition period, the difficulty facilities would have adjusting efficiently to large annual budget changes, and the need to evaluate the reasons for the cost variations and whether to include more of VA’s resources in the RPM system. With regard to the goal of reducing access differences, officials expressed uncertainty over how the system could be used for this purpose. Although VA is planning to reallocate more funds for the fiscal year 1996 budget cycle, further changes are needed to establish equitable allocations. VA’s original plans for the system remain valid and in line with current governmentwide efforts to develop strategic plans and performance measurement systems. 
These efforts, legislated under the Government Performance and Results Act, provide for performance measurement as the basis for improving government operations and, eventually, linking desired outcomes to resource allocation. Although we and others have recognized the inherent difficulties of linking performance measures and budgeting, VA has opportunities to improve the equitability of its allocations by revisiting its original plans for RPM and forging long-range plans for working toward its original visions. VA Cited Several Barriers to Reallocating More Money The basis for VA’s facility allocations remains largely unchanged because VA officials decided to limit the changes to facilities’ budgets, rather than because the RPM design or process does not allow them to do so. Officials cited several reasons for not using the RPM system data to reallocate larger amounts in fiscal years 1994 and 1995. Among those reasons were the following: A transition period was needed. VA officials indicated that time was needed to educate facility managers and to obtain facility buy-in to the process. Also, VA made several changes during the first years of the process to help address facility concerns about the accuracy of data that facilities submit. Facilities cannot efficiently adjust to large budget changes. VA officials believed that absent plans to phase in resource changes over a 3- to 5-year period, facilities could not efficiently adjust to large changes in their budgets in any single year. A facility does not know what its allocation for the fiscal year will be until shortly before the year starts—depending on how soon the VA medical care appropriation is determined. Officials believed it unreasonable to expect facility directors to adjust to significant changes given the short lead time between when they learn what their budget allocations will be and the start of the fiscal year. 
Furthermore, officials believed that facility directors had few management options for reducing operating budgets because 70 percent or more of facilities’ budgets is spent on salaries. Reasons for variations are unclear. Officials indicated that they lacked a good understanding of what causes the variations, which some thought could be attributed to factors, such as quality of care, that are not considered in the adjustment process. For example, high-cost facilities may provide higher quality or more timely care and may not necessarily have higher costs because of operating inefficiencies. At the same time, low-cost facilities may be efficient and may become less so if given more money for the same workload. Because VA does not have a standard for what facility unit costs should be, the current process “titrates budgets to the mean,” that is, only very slowly brings facility budgets closer to the mean. VA officials further maintained that the RPM allocations alone could not be used to judge the equity of facility budgets because facilities get funds that are not distributed through RPM. About $4.1 billion, or 25 percent, of the fiscal year 1995 medical budget was allocated to VA facilities by processes separate from the RPM system. About $2.3 billion of the $4.1 billion was allocated to facilities at the beginning of the year and included funding for items such as the community nursing home contract program, activation of newly constructed facilities, outpatient fee-basis care, prosthetics, and resident training. The remaining $1.8 billion in non-RPM-allocated funds paid for leases, travel, and patient care programs such as dental programs and women veterans health programs. In part, these funds also were to pay for contingencies that arose through the year. An assessment of the equity of these allocations, and their impact on the relative equity of the RPM system allocations, could not be made with available data. 
While VA’s financial system accounts for individual transactions to facilities throughout the year, it does not summarize for each program the amount received by each facility. VA officials agreed that some non-RPM resources support patient care operations, such as those for prosthetics or facility activations, and indicated that they had conducted special evaluations of non-RPM accounts to determine whether any of the funds should be allocated through the RPM system. As a result of these evaluations, the percentage of the medical care funds allocated through RPM increased from 66 percent in fiscal year 1994 to 75 percent in fiscal year 1995. VA documents indicate that at the time of our review, VA was considering establishing a formal process to ensure that non-RPM funds are inventoried, monitored, and considered for possible inclusion in RPM. VA Unclear About Why System Was Not Used to Address Access Differences Why VA has not used its resource allocation system to help overcome inconsistencies in veterans’ access to care was not clear because confusion existed at VA over what needs to occur to meet this goal. For example, some officials indicated that legislative reforms to current eligibility requirements were needed to ensure greater consistency in eligibility determinations when veterans seek care. However, other officials’ statements to us and the Congress indicated that the resource allocation system would be used regardless of legislative reforms and that the delay was attributable largely to the absence of useful eligibility data and the difficulty of incorporating this goal in the RPM model. While we agree that reform that simplifies VA’s complex eligibility requirements might allow VA to more easily consider veterans’ access differences in allocating resources, we do not believe such legislation is a prerequisite to meeting this goal because the Congress has established the priorities for the provision of veterans’ care. 
Because this issue has not yet been resolved within VA, the management support and responsibility for ensuring the mechanisms are put in place to achieve this goal are lacking. Changes Needed to Provide for More Equitable Facility Allocations VA officials indicated they were taking several steps to more actively use the RPM system data and to improve the resource allocation process. First, given that the initial 2 years of the system’s implementation were intended to help facilities adjust to the new process, the Deputy Under Secretary for Health told us in September 1995 that VA was planning to reallocate a significantly larger amount of money for the fiscal year 1996 facility budgets based on RPM. Furthermore, officials indicated that they were implementing a Decision Support System to better coordinate VA’s clinical and financial data systems and allow VA to compute more accurately the costs of specific services provided to each patient. Nonetheless, we believe that several additional changes are needed to foster facility budget changes and to provide for more equitable allocations. In particular, VA should take steps to address other notable barriers that limit VA’s ability to reallocate funds, as discussed below. Linking Strategic Plans to Resource Allocation Could Help Facilities Adjust to Budget Changes If the provision of comparable resources for comparable workload is a goal, long-term strategies to help facilities adjust to changing budgets must be put in place. VA’s resource allocation could be made more equitable if it is clearly linked to VA’s strategic plan goals, performance standards, and workload priorities. In particular, VA could coordinate its future plans for facility missions, services, and capacity with its facility budgets over time, establishing a plan for phasing in resource changes and giving facilities and VISN managers financial objectives with which they can plan more than 1 year in advance. 
Linking resource allocation to VA planning efforts is not a new idea in VA. Starting in 1992, VA developed what was known as the National Health Care Plan (NHCP) to coordinate RPM, VA strategic planning, and other VA planning efforts. NHCP was developed by a multidisciplinary committee charged with looking at facility missions, identifying gaps and overlaps in services, and developing a planning process. However, VA officials told us the draft plan was preempted by the Clinton administration’s push for national health reform in 1994. Efforts to determine how VA would be integrated within the administration’s health reform plan superseded other planning efforts within VA. After NHCP was dropped, strategic planning reemerged in early 1995 in a plan that the Under Secretary for Health set forth to the Congress to restructure VA to make it a more efficient and patient-centered health system. As previously mentioned, the plan would further decentralize VA operations by establishing 22 VISNs throughout the country to coordinate and integrate VA’s health care delivery assets. A key part of the VISN plan is that VISN directors would be held responsible for strategic planning, with greater systemwide direction in strategic planning as well. It is not clear from current VA planning documents how the VISN and VA systemwide strategic plan might interact with resource allocation and how resources will be allocated to VISNs. It is not evident what VA’s plan is for moving facilities and VISNs toward more comparable funding for comparable workload and achieving the coordination between planning and resource management envisioned in NHCP. As it implements its new VISN structure, VA will need to link its planning and resource allocation processes and establish long-range plans for using resource allocation to help achieve its goals. 
Better Understanding of Cost Variations Would Help Support Budget Changes To better link resources to workload, manage limited resources, and ensure quality of care, VA could establish a review and evaluation process as part of the formal RPM system. Although VA has spent considerable time and effort determining how the system should use and develop data to produce facility budgets, few resources have been devoted to determining why the system shows such significant cost variations among facilities. Understanding these variations could help VA improve its comparisons of facilities’ efficiencies by providing information on how further adjustments might increase the comparisons’ fairness. These adjustments might include other locality-specific, mission-related, or data-reporting factors that may contribute to cost differences. Finally, VA could identify potential ways that quality of care or other aspects of facility performance are affected by resources. With a better understanding of the variations, decisionmakers could make more informed decisions on the RPM system adjustments necessary to compare facilities fairly and set expectations for how facilities should adjust to changing resource levels. Originally, the RPM system was designed to include a review and evaluation element that could help provide feedback to VA managers on how facilities performed compared with their expected workloads and costs. Structured site reviews of high- and low-cost facilities were intended to help determine possible reasons for the cost variations by identifying efficiencies and allowing a closer assessment of the potential impact of resources on quality. Furthermore, VA hoped to better link cost data with quality indicators so an assessment of resources’ impact on quality could be made. In its 1994 Quality Management Plan, VA set forth how it would assess progress in delivering quality health care to veterans. 
VA reported that it sought to produce resource profiles for each level of the organization that could be analyzed for connections between quality of care and resource availability. The RPM system was envisioned as a critical part of this effort. For example, it was expected to provide information about facilities with resource profiles that suggested resources were insufficient and to lead to reviews that could ensure more consistent care across the VA system. VA anticipated that by the end of fiscal year 1994, RPM would match resources to quality of care issues and improve information for management at all levels. None of these original plans for RPM has yet materialized, apparently because of VA’s priorities, time constraints, data on quality becoming available only recently, and lack of consensus on how to implement VA’s original plans. An example of how decisionmakers can be given information on health care cost variations was illustrated in a report by the Prospective Payment Assessment Commission (PROPAC), which advises the Congress on Medicare issues. PROPAC has analyzed state variations in per capita health care costs in order to understand the implications of the wide variations in the delivery and financing of health care nationwide. It has identified factors that contribute to cost differences across states, such as the mix and volume of services; mix of physicians, medical specialists, and other health professionals practicing in a state; and policy-related factors such as state licensing requirements or regulations that influence the amount of labor used to provide health services. PROPAC also determined that 6 of the 10 states with the best health status were among the 10 with the lowest standardized resource costs per enrollee. 
The limited effort VA has put into understanding possible reasons for variations has already achieved some change in facility management, according to the VA official overseeing the technical advisory groups of physicians and other clinicians who advise the RPM committees on clinical issues. The Chronic Mental Illness Technical Advisory Group had assessed discharge costs, costs per day (possibly reflecting staffing levels), lengths of stay, and other data related to high- and low-cost facilities for chronic mental illness patients and provided facility management with information on factors potentially contributing to their facility's high or low cost. Explore Options for Tracking Allocations by Program Area To ensure that the RPM allocations are coordinated with those made through other allocation processes, VA needs to establish a formal process for evaluating whether non-RPM-allocated funds should be incorporated into the RPM system. In doing so, VA will need to track, by facility and by program, both the RPM and non-RPM allocations over the course of the year. VA officials indicated that current financial systems would allow manual tracking of these allocations. We believe VA needs to explore options for using existing financial management systems to capture these management data. The availability of these data would allow for better assessments of the total funding provided to facilities for patient care and the priorities that the various allocation processes use to distribute facility funding. Explore Options for Considering Facility Differences in the Provision of Discretionary Care To ensure that veterans within the same priority categories are afforded more equal access to care, VA needs to explore options for using the resource allocation system to achieve this goal.
VA would need to assess the extent to which current databases could be used to distinguish and account for the facility differences in their rationing practices and abilities to provide discretionary care. VA may also need to determine how to collect more specific data on differences in facilities' provision of care, for example, differences in the extent to which facilities are providing services to veterans in their area (market share) or the extent to which veterans are denied health care services because of a lack of resources. Conclusions VA for years has struggled with implementing an equitable resource allocation method—one that would link resources to facility workloads and foster efficiency. The need for such a system has become greater in recent years as veterans' demographics shift and as health care delivery undergoes dramatic changes to adjust to increasingly limited resources. The resource allocation system can help VA achieve this goal by forecasting workload changes and providing comparative data on facilities' costs. Nonetheless, though VA has understandably focused its efforts in the first years of RPM on improving the system's data and design, VA has not taken steps to address several barriers that prevent it from acting on the data the system produces. If the system is to live up to its potential, several changes need to be made, including linking resource allocation to VA's strategic plan, conducting a formal review and evaluation of facility (or VISN) cost variations, evaluating the basis for not allocating funds through RPM, and using RPM to overcome differences in veterans' access to care.
Recommendations We recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to

- link the resource allocation process to the strategic planning process in the VISN structure so that (1) allocations are more clearly associated with VA's long-range goals, performance standards, and workload priorities and (2) facility and VISN managers are given short- and long-range financial objectives;
- institute a formal review and evaluation process within the resource allocation system to examine the reasons for cost variations among facilities and VISNs;
- establish a process for evaluating non-RPM patient care funds to determine whether they can be included in the RPM allocation system, including exploring options for using existing financial management systems to capture data on the provision of non-RPM-allocated funds by facility and program area; and
- explore options for using existing or improved databases to (1) understand the extent to which veterans within the same priority categories have consistent access to care within the VA health care system and (2) include such data in VA's resource allocation system to help ensure that veterans have consistent access to care throughout the system.

Agency Comments and Our Evaluation The Deputy Under Secretary for Health, the Chief Financial Officer, and other VA officials provided comments on our draft report. They stated that the report represents an accurate and balanced analysis of VA's past efforts. The Deputy Under Secretary pointed out that VA has recently taken steps to implement changes to the resource allocation process that are consistent with the draft report's overall recommendations. He also indicated that although equity of access for veterans is a laudable goal, incorporating this goal in the allocation of resources is necessarily complex.
More specifically, VA concurred with our recommendation to link the resource allocation system with its strategic plan for its VISN structure and indicated that VISN directors have been charged with formulating long-range VISN plans. VA also concurred with our recommendation to institute a formal review and evaluation process within the RPM system to examine reasons for cost variations among facilities and VISNs, and cited some efforts already in place to begin studying these cost variations. These efforts, such as the analyses of the Chronic Mental Illness Technical Advisory Group, which we describe in the report, represent, in our view, a step in the right direction. Our recommendation for a formal review and evaluation process, however, envisioned a more structured, detailed process using the RPM database and other performance measure databases. Such a process would address not only ways to improve efficient delivery of quality care but also ways to improve the estimates and comparisons made by the resource allocation system. VA also concurred with our recommendation to establish a process for evaluating non-RPM patient care funds to determine whether they can be included in the RPM allocation system. VA indicated that this process had already begun in that criteria for determining when resources should and should not be allocated through the RPM process had been established. VA hoped to include 90 to 95 percent of VA's health care budget in the RPM allocation system by fiscal year 1997. Because of the large proportion of resources it plans to include in the RPM process, VA stated that the second part of this recommendation—to explore options for capturing data on the non-RPM funds by facility and program area—would be unnecessary. In our view, VA's plans appear to meet the intent of our recommendation.
Nevertheless, there still may be a need to track non-RPM funds by facility or VISN if VA falls short of its stated objectives for including the maximum practical amount of health care funding in RPM. As a result, we have not changed our recommendation. VA concurred, with qualifications, with our final recommendation that VA explore options for (1) using existing or improved databases to understand the extent to which veterans receive consistent access to care and (2) including such data in the resource allocation process. VA agreed with the need to explore options for improving information about veterans' access to care. However, VA also stressed that before it knows whether it could use that information to allocate resources, it would first need to define what "consistent access" really means. The agency expressed its commitment to developing that definition, even though it acknowledged that the plan for how it would do so was not fully developed. In VA's opinion, consistent access to care is complex and not easy to implement fairly so that special populations, such as the homeless and women veterans, are not adversely affected. VA stated that improving access is more fundamental than a database issue. We acknowledge the complexity of access issues and agree that this is more than a database issue. However, we continue to believe that VA should—at a minimum—know the extent to which veterans in the different statutorily determined priority categories are being served in different medical centers and take those categories into consideration in its allocation of resources. As arranged with your staff, unless you announce its contents earlier, we plan no further distribution of this report until 7 days after its issue date. At that time, we will send copies to the Secretary of Veterans Affairs, interested congressional committees, and other interested parties. We will also make copies available to others upon request.
If you have any questions about this report, please call me or David P. Baine, Director, at (202) 512-7101. Other major contributors to this report included Frank C. Pasquier, Assistant Director; Katherine M. Iritani, Evaluator-in-Charge; Linda Bade, Senior Evaluator; Doug Sanner, Evaluator; and Evan Stoll, Technical Analyst. Scope and Methodology To determine the extent to which VA's RPM system provides for an equitable distribution of resources among VA facilities, we reviewed two aspects of the system as originally envisioned by VA. First, we determined the extent to which it provided for comparable resources for comparable workloads. Second, we assessed the extent to which it provided resources so that facilities could serve comparable categories of veterans. To make these determinations, we documented the system design and analyzed the system's impact on facility budgets. We reviewed documents and discussed the resource allocation system with the Director and analysts of VA's Boston Development Center. Documents reviewed included the RPM Handbook, RPM Primer, BDC Draft Development Plan, and relevant BDC Newsline newsletters. We interviewed the leaders and some members of various RPM committees, including the RPM Field Oversight Committee, RPM Incentives Subgroup, the RPM Outlier Group, the Reinventing RPM Subcommittee, and the RPM Financial Advisory Committee. We also reviewed committee reports and meeting minutes. We also analyzed various BDC RPM data files to determine the impact of VA decisions on facility budgets. The data reviewed included workload forecasts and allocation amounts related to various decisions occurring as part of the process. We relied on VA analyses of the impact of RAM and RPM on regional allocations over the past decade.
We also analyzed veteran eligibility data in VA's Outpatient File, as summarized by the National Center for Veteran Analysis and Statistics in its Summary of Medical Programs, to assess the variations among facilities in the provision of care to discretionary categories of veterans. Finally, we tested the sensitivity of the allocations to the accuracy of the cost and workload data feeding the RPM system to determine how coding or other errors in the cost and workload data may affect allocations under the system's current design. To address the causes of any inequities in the distribution of resources, we interviewed various VA officials, including the Deputy Under Secretary for Health; Associate Deputy Chief Medical Director; Associate Chief Medical Director for Quality Management; Deputy Director, Quality Management Systems Office; Director, Budget Office; Chief, Medical Formulation Branch; Assistant Director for Budget Execution; Chief, Allocation and Control Branch; Chief, Health Resources Management Branch; Acting Director, Strategic Planning and Policy Office; Director and analysts of the Management Sciences Group; and Director and analysts of the National Center for Cost Containment. To address the changes that could help ensure that the system more equitably distributes resources, we reviewed various related studies, including a contractor's study conducted in 1992 and a VA Inspector General report on physician staffing, Audit of Veterans Health Administration Resource Allocation Issues: Physician Staffing Levels. Finally, we reviewed VA strategic plans, including VA's Blueprint for Quality, the unpublished National Health Care Plan, and the VA Secretary's plan for restructuring VA, entitled Vision for Change. To understand the RPM system's impact on the Carl T. Hayden Medical Center in Phoenix, we visited the facility and interviewed key management officials.
We also reviewed the report of a Western Region task force looking at the allocations awarded to Carl T. Hayden and met with officials of the Western Regional Office. Our review was limited to the resource allocation process as it operated for fiscal years 1994 and 1995. The VA appropriation for fiscal year 1996 had not been determined at the time of our review. Our review was limited primarily to the process as it was used to allocate and manage facility budgets and did not include a review of other goals, such as how the process is used to formulate VA's budget. Our review was conducted between December 1994 and October 1995 in accordance with generally accepted government auditing standards. VA's Resource Planning and Management System The Resource Planning and Management system is the management decision process VA uses to allocate most of its resources and to compare VA medical facilities' performance. The system provides the information VA uses as a basis for resource allocation by classifying each patient into a clinical care group, calculating the cost per patient, and forecasting future patients. The system also provides comparative data on facility cost per workload unit so that funds can be reallocated from high- to low-cost facilities. VA Resource Allocation History Each year, VA receives an appropriation to operate its health care system—about $16.2 billion in fiscal year 1995. To finance its medical care system, VA uses what is considered a "global budgeting" system. VA calls it that because of its fixed budget—resources are first approved by the Congress, then allocated to individual facilities for the ensuing fiscal year. VA facility directors are charged with managing their assigned workload targets within their allocated budgets. Before 1985, VA Headquarters developed facility budgets by incrementally adjusting the facilities' past budgets rather than building the budget based on the facilities' expected workload.
Beginning in fiscal year 1985, VA attempted to modify its incremental budgeting process of making adjustments to historical budgets. This new system, called the Resource Allocation Methodology, was intended to provide a more equitable distribution of available funds by adjusting budgets according to the work produced and its associated cost. RAM tried to match resources to facility workloads by linking allocations to the reported clinical services or procedures performed in each of three areas—acute care, ambulatory care, and long-term care. RAM was suspended in 1990, however, because of concerns about the unintended impact on clinical practice patterns and administrative management of VA medical care. Under RAM, facilities and regions competed with each other for a fixed resource pool. Facilities began acting as if VA had an open-ended reimbursement system—having incentive to perform work beyond their resources—when in reality it was a closed, fixed budget system. This open-ended expansion of workload led to a budget crisis at a number of VA facilities and caused concern about the potential impact on quality of care. After RAM was suspended, VA began moving to a new system—RPM—that would be more prospective and capitation-based—in line with where the private sector was heading. RPM was to be prospective in that the process forecasted workload changes and future facility resource requirements, enabling VA to use the data to formulate its budget. RPM was “capitation like” in that it was designed to consider workload on a per person basis, rather than as procedures performed. This new definition of workload was expected to lessen the incentive to inappropriately provide care. RPM was used for the first time to allocate facility resources for fiscal year 1994. The RPM system differed from RAM in several ways. First, VA envisioned a broad management decision process with RPM that would integrate planning, budgeting, and operational management. 
VA expected RPM to be used to formulate the Veterans Health Administration’s budget from year to year, to be linked to and driven by VA’s strategic plan, and to be used to review and evaluate facilities’ unit costs. RPM Committee Structure VA’s considerable investment in RPM is reflected in the significant involvement of VA managers, technicians, and physicians from throughout the country serving on RPM committees. In total, several major committees and subcommittees, six technical advisory groups of clinicians generally representing the RPM clinical patient groups, and key VA Headquarters managers have been involved in RPM’s design and implementation. Operationally, the Boston Development Center, a group of about 26 staff with a fiscal year 1995 budget of $3.3 million, is responsible for RPM data processing and education. The RPM development and management structure includes the RPM Subcommittee and Field Oversight Committee and the technical advisory groups, which are responsible for, among other things, incorporating clinical definitions into the RPM system. In addition, the process includes input from each of the four VA regions (replaced by VISNs in fiscal year 1996) and all facilities. While the various RPM committees, subcommittees, and advisory groups make recommendations on how the system should be implemented, the Budget Policy and Review Committee, comprising VA associate chief medical directors and other senior VA managers, makes the final recommendation on RPM methodology, which the Under Secretary for Health approves. Summary of the Process BDC uses a complex data compilation and analysis process to develop data that VA Headquarters and other managers use to determine facility allocations. Key decisions made by RPM committees and approved by Headquarters managers have dictated the final outcome of the facility allocations, as described here and in appendix III. 
Generally, the RPM budget allocations to the facilities have been driven by the number of (case-mix-adjusted) unique patients expected to be seen and the facility-specific unit cost of providing care. "Unit costs" refer to each facility's average cost for treating a patient in each of five RPM patient groups. The key steps in the process are as follows:

- Patient classification: Using clinical information, VA classifies each veteran seen in the base year into one of 49 clinical classes ranked by resource intensity. The patient classes are intended to reflect the kinds of medical care being provided. A patient who qualifies for two or more classes is placed in the most resource-intensive class.
- Patient (workload) counts: VA counts the unique patients in each class at each facility.
- Workload forecasts: VA predicts changes to the numbers of patients expected to be seen within each class by applying forecasting methods to historical trend data.
- Patient costing: Using facility "bed-days of care" provided and other clinical information from VA's Patient Treatment File, Outpatient File, Patient Assessment File, and other data sources, combined with facility cost data from the Cost Distribution and other cost reports, VA estimates a total cost for each patient.
- Patient groups: VA groups the patients within each class into one of five major patient groups and calculates an average facility cost per patient within each group.

Using the data developed in these steps, VA establishes the facility target budget allocation through a series of calculations. First, average facility costs per patient group are multiplied by the expected numbers of patients to be seen at the facility within each group. These initial facility numbers are then adjusted to reflect marginal costs associated with increased and decreased workload, VA budget constraints, facility efficiencies, inflation, and VA regional input. These adjustments are described in detail in the sections that follow.
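The core arithmetic of the target calculation, together with the marginal-rate and implementation-rate adjustments detailed in the sections that follow, can be sketched roughly as below. This is an illustrative sketch only: the group labels, dollar figures, and patient counts are invented, and the exact order in which VA applied the adjustments is described in appendix III.

```python
# Illustrative sketch of the initial RPM facility target: average cost per
# patient in each patient group times the forecast patient count for that
# group. All group labels and dollar figures are invented.

def initial_target(avg_cost_per_group, forecast_patients):
    """Sum of (average group cost x forecast patients) across patient groups."""
    return sum(avg_cost_per_group[g] * forecast_patients[g]
               for g in avg_cost_per_group)

# Marginal rates (75 percent for workload increases, 50 percent for
# decreases) and the 17.36-percent implementation rate cited in this report,
# applied here to workload-change funding as a simplifying assumption.
def workload_change_funding(per_patient_cost, patient_change,
                            up=0.75, down=0.50, implementation=0.1736):
    rate = up if patient_change >= 0 else down
    return per_patient_cost * patient_change * rate * implementation

avg_cost = {"group_1": 6200.0, "group_2": 900.0, "group_3": 11000.0}
forecast = {"group_1": 4000, "group_2": 25000, "group_3": 600}

target = initial_target(avg_cost, forecast)       # 53,900,000 with these inputs
increment = workload_change_funding(5000.0, 100)  # about 65,100 for 100 more patients
```

Under these invented figures, a facility forecast to see 100 additional patients at a $5,000 average cost would receive only about $65,100 rather than $500,000, illustrating how the marginal and implementation rates together damp the budget effect of workload change.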
Marginal Rate Adjustment The RPM process has applied "marginal rates" in calculating the incremental resource needs facilities have as their workloads change. In other words, marginal rates account for the expected resources needed for seeing one additional or one fewer patient. VA decided to use marginal rates because of the assumptions that, given the relatively fixed nature of some operating costs such as salaries, workload increases would not have to be funded at the same rate as the base budget workload and that facilities with decreasing numbers of patients could not be expected to reduce their per patient costs at the same rate as their base budget. VA has not determined the true incremental cost per patient, however. Officials indicated they judgmentally chose a 75-percent marginal rate for workload increases and a 50-percent marginal rate for workload decreases to reflect incremental costs associated with workload changes. Adjustment to Accommodate Budget Shortfalls Because VA has not had enough funds to fully cover all of the expected facility costs, VA officials chose to address the shortfall in both fiscal years 1994 and 1995 by applying an "implementation rate" to provide a percentage of the funding that facilities had been expected to get for workload changes. The implementation rate in both fiscal years 1994 and 1995 was 17.36 percent. The impact of the implementation rate, and how it was applied, is more fully discussed in appendix III. Cost-Efficiency Adjustment To measure differences in facility efficiency and account for them in the allocations, the RPM system uses a complex process for comparing like facilities' costs.
Through this process, VA removes funds from the budgets of the “least efficient” facilities (called high outliers) to provide more funds to the “most efficient.” The outlier process involves grouping comparable facilities, adjusting costs to make comparisons more equitable, and developing cost-efficiency and productivity data for facility comparisons and for the outlier process. VA Facility Groupings To compare facility costs, the process first groups facilities considered comparable. The nine medical center groups used in the fiscal year 1995 process were created by merging the hospital groups used for planning purposes and a complexity index. The complexity index is based on a number of variables, including facility size, clinical variety, resident teaching mission, resident programs, allied health training, managerial complexity, and research. VA Cost Adjustments for Facility Comparisons To more fairly compare facility costs per workload, the process adjusts for case mix differences (that is, differences in the types of patients treated at each facility) by developing a standardized workload measure called facility work or facwork. Facwork is an age- and case-mix-adjusted workload measure that recognizes that different classes of patients have different resource intensities. For example, a transplant patient is more resource intensive than a primary care patient. Facwork is calculated solely on costs, recognizes that VA patients may visit more than one facility, and allows workload credit to be shared among facilities. In fiscal year 1995, a cost adjustment process was developed to “level the playing field” by adjusting for facility-specific cost and workload factors in order to make fairer cost comparisons. The costs removed from the facility comparisons included those for resident training, research, geographic pay, and specialized programs. In addition, workload was adjusted for fee and contract programs and for high-cost programs. 
This process ensured that the costs for a facility that provided extensive resident training, for example, were not used in comparing that facility with others in its group. VA Efficiency Comparisons and Allocation Adjustments Once the cost adjustments were made to provide for fairer comparisons, VA ranked the facilities within each facility group. This ranking and the supporting data were provided to each facility for data validation before the final allocations were made. RPM also produced productivity data, that is, comparisons of staffing levels per workload unit for facilities within each facility group. VA used the resulting cost comparisons in its outlier process to adjust the initial allocations. Through this process, funds from the initial projected budgets of high-cost facilities were removed and added to the budgets of low-cost facilities. The high- and low-outlier facilities were identified based on their differences from the group average. RPM resources were withdrawn from high outliers using a sliding scale of up to 1 percent and added to low outliers at a flat rate of 1.25 percent until the amount that VA officials decided to reallocate was reached. Approximately $10 million was moved between the high- and low-cost outliers in fiscal year 1994, and approximately $20 million was moved in fiscal year 1995. Inflation The inflation adjustment is facility-specific and is based on locality pay adjustments and specific assumptions included in the President's medical care budget. Inflation rates varied from 4.1 to 16.7 percent in fiscal year 1995 and averaged 6.3 percent. Regional Directors' Adjustments The four regional directors had the authority to change the initial allocations that BDC produced through its data analysis process; however, we identified few instances in which regional directors actually changed the initial allocation numbers.
Regional input to the facility allocation process has been mainly through a $5 million allocation over which each regional director had discretion and for which facilities “negotiated.” The negotiations were considered part of RPM’s management process, which was intended to allow for facility-specific factors not captured in the RPM data. Each regional director developed his or her own criteria for allocating resources, subject to VA Headquarters approval. The criteria and methodologies used by regional directors for their allocation funds varied. For example, one region in the fiscal year 1995 process allocated its $5 million on the basis of facility market share, unit cost differences, and the impact of workload and outlier adjustments. Another region removed allocations for forecasted workload increases from high outliers to create a regional contingency fund. In the fiscal year 1995 negotiation process, 56 percent of the facilities had their dollar base adjusted, with 84 facilities gaining and 10 facilities losing funds. The gains ranged from $2,798 to $1.5 million, and losses ranged from $83,000 to $712,277. See appendix IV for fiscal year 1994 and 1995 RPM adjustments for each facility. RPM Data Sources and Sensitivity to Data Errors The RPM system relies on data from many data sources within VA, including the Cost Distribution Report, Patient Treatment File, Outpatient File, Patient Assessment File, Fee File, Immunology Case Registry, and the Home Dialysis Reporting System. Each facility director is responsible for ensuring the accuracy of patient care workload and cost data, and most facilities have data validation committees responsible for the review of internal controls, data collection procedures, and adherence to reporting instructions, among other things. 
Once BDC obtains facility data, it merges the basic patient care data sets into its relational databases and produces RPM reports known as the facility "tables." These tables are distributed to facilities for data validation. We have previously reported concerns about some aspects of VA's cost and workload system. Specifically, we reported in 1987 that one problem VA had in implementing RAM, RPM's predecessor, was that unreliable clinical and financial databases limited VA's ability to establish accurate target allowances for individual facilities. RPM relies less on specific clinical diagnosis coding than RAM because workload is defined as the whole patient and the patient's associated costs rather than being based on specific clinical diagnoses. Furthermore, RPM includes most facility operating costs in developing patient cost averages and uses each facility's historical workload costs in developing allocations, reducing the chance that facility cost errors would significantly affect allocations. For example, costs inappropriately allocated to one cost center would result in lower than actual costs being reflected in others. Because RPM captures most patient care costs in calculating patient cost averages, these misallocations would show higher than actual costs for some patient types, but lower than actual costs for others. For these reasons, it appears that potential inaccuracies in the clinical and cost data are less likely to affect facility allocations under RPM than under RAM. Our sensitivity analysis of the RPM facility allocations to workload and cost errors supports this conclusion. Our analysis found that even with potential errors of up to 50 percent in the reported workload levels or patient group costs, the budget allocation for the majority of the facilities would change by less than 1.2 percent. The maximum changes for any facility under our analysis were a 2.03-percent increase in allocation and a 2.27-percent decrease.
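The intuition behind this offsetting effect can be shown with a toy calculation. All numbers here are invented and greatly simplified relative to the actual RPM model; the point is only that because group averages are built from each facility's own total reported costs, a dollar miscoded out of one patient group reappears in another, and the effects largely cancel when forecasts stay close to base-year counts.

```python
# Toy demonstration (invented numbers) of why cost-coding errors between
# patient groups largely wash out: the allocation is (average group cost)
# x (forecast patients), and total facility costs are conserved.

def allocation(total_costs, base_counts, forecast_counts):
    """Average cost per patient in each group times the forecast count."""
    return sum((total_costs[g] / base_counts[g]) * forecast_counts[g]
               for g in total_costs)

base = {"a": 1000, "b": 500}
forecast = {"a": 1050, "b": 490}                  # modest workload change
correct = {"a": 8_000_000.0, "b": 4_000_000.0}
miscoded = {"a": 7_000_000.0, "b": 5_000_000.0}   # $1 million coded to the wrong group

baseline = allocation(correct, base, forecast)
erred = allocation(miscoded, base, forecast)
relative_change = abs(erred - baseline) / baseline  # well under 1 percent here
```

Even though a full eighth of one group's costs is miscoded in this sketch, the facility's allocation shifts by less than 1 percent, because the misstated group averages are multiplied by forecasts that differ only modestly from the base-year counts that produced them.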
We believe that our tests represent extreme error rates and that these changes are far greater than those VA is likely to experience. Process Changes The RPM process has changed significantly from year to year and continues to do so. For example, the fiscal year 1994 facility budgets were developed using per patient average costs for each of the 49 patient class levels, whereas fiscal year 1995 funding was based on the average patient costs within each of the five patient groups. VA hoped that the move to group costs would reward those facilities that increased the number of low-cost patients. The move to group costs was also intended to eliminate the significant incentive to admit a patient just to obtain funding at the higher valued RPM class. The fiscal year 1996 RPM allocation is expected to shift the funding mechanism from facility-specific patient costs more toward a systemwide capitation rate. For the first time, VA officials told us, they intend to base RPM allocations on a "blended rate" to achieve a balance among national, regional, and local cost considerations. The blended rate may include facility, medical center group, VISN/regional, and national components. The effect of blended rates at the facility level depends heavily on the relative weights attributed to each component; for example, the blended rate could be based on 90 percent of each facility's costs, with the remaining 10 percent based on average national and facility group costs. VA officials indicated that blended rates will eliminate the outlier adjustment process that has been in place for the last two RPM allocations. Under a blended rate, all facilities, rather than only those considered "outlier" facilities, would see their initial budgets change based on the process. The farther a facility lies above or below the mean, the more it would lose or gain under the process. Resource allocation within VA could further change with the VISN implementation.
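The blended rate described above can be sketched as a weighted average. The 90-percent facility weight comes from the example in this report; the even split of the remaining 10 percent between group and national averages, and all dollar figures, are assumptions made here for illustration.

```python
# Hypothetical blended per patient rate: mostly the facility's own cost,
# with the remainder drawn from broader averages. The 90-percent facility
# weight follows the report's example; the 5/5 split and costs are invented.

def blended_rate(facility_cost, group_cost, national_cost,
                 w_facility=0.90, w_group=0.05, w_national=0.05):
    # Weights must sum to 1 so the blend stays a true average.
    assert abs(w_facility + w_group + w_national - 1.0) < 1e-9
    return (w_facility * facility_cost
            + w_group * group_cost
            + w_national * national_cost)

# A facility costing $6,000 per patient against $5,000 averages is pulled
# part of the way toward the mean:
rate = blended_rate(6000.0, 5000.0, 5000.0)  # 5,400 + 250 + 250 = 5,900
```

This illustrates why, under a blended rate, every facility's funding moves toward or away from the relevant averages in proportion to its distance from them, rather than only designated outlier facilities being adjusted: the farther the facility's own cost sits from the averages, the larger the pull of the non-facility components.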
A significant goal for the agency under the VISN reorganization is to move to a full capitation system in which a unit of payment is based on the enrollee—for example, a certain fee would be paid per member per month or year of enrollment for a defined package of covered health services. At issue is how soon, given the many barriers to implementing full capitation, VA will be in a position to allocate resources under full capitation. Provisions for Comparable Resources for Comparable Workloads For the last decade, VA has sought through its resource allocation systems to better link resources to workload and depart from its traditional process of basing allocations on historical budgets. Part of the need for this better link stems from the shifting demographics of veterans across the nation. RPM data show that facilities' per patient costs vary widely, even after adjustments are made to ensure cost comparisons fairly exclude costs that facilities cannot control. The data also show changing facility workloads. VA changes to facility budgets have generally averaged about 1 percent per year through the process. Two key VA decisions account for the limited change: VA limited the funding of workload changes, and it limited the adjustments from high- to low-cost facilities. This conservative implementation of RPM continues VA's history of limiting changes to facility budgets from year to year. Changing Veteran Demographics Over the last decade, although the overall veteran population has decreased, veterans have been migrating from northeastern and midwestern states to southeastern and southwestern states. Nationally and in each of the 50 states and the District of Columbia, veteran deaths are expected to outnumber separations from the armed forces. Therefore, the only states expected to have stable numbers of veterans in their populations through the year 2000 are those to which enough veterans migrate to offset deaths of veterans in the states' existing populations.
For example, 60,000 veterans are expected to move to Arizona between 1989 and 2000, offsetting the deaths of veterans already living in that state. Figure III.1 shows projected veteran population change by state, based on Census data, from 1989 to 2000. The figure’s categories are: a decrease of 15.1 to 20.9 percent (11 states); a decrease of 10.1 to 15 percent (22 states); a decrease of 5.1 to 10 percent (10 states); a decrease of 1.1 to 5 percent (5 states); a change within plus or minus 1 percent (2 states); and an increase of 2.9 percent (1 state).

Facility Variations in Resource Distribution and Workload Changes

Per patient facility costs vary significantly among facilities, ranging from less than $800 per patient to over $11,000 per patient. While the basis for allocations is each facility’s historical average cost per patient within each of the five RPM patient groups, the system also provides comparative data on facilities’ cost efficiency, productivity, and workload changes. A discussion of some of the facility comparisons shown by the RPM system follows.

Cost-Efficiency Data

Much of the difference in facility per patient costs can be explained by differences in mission, for example, the level of specialized care facilities may be providing. An outpatient clinic, for example, is likely to spend far less per patient than a hospital that provides specialized services such as organ transplants. As discussed in appendix II, to provide for comparative data, the system places facilities into groups with other facilities VA considers comparable. This “grouping” of comparable facilities, along with the classification of patients by clinical type, lessens the range of differences in costs, as shown in figure III.2 for Facility Group 5.

Figure III.2: Facility Unadjusted Per Patient Cost Differences, Facility Group 5 [figure data omitted]

Even after adjusting for facility locality pay and other uncontrollable cost differences, variations among facilities within each of the RPM groups remain, as shown in figure III.3.
Productivity Data

The system also produces data on productivity differences among facilities, as shown, for example, by differences in physicians per standard workload unit as well as total staffing per workload unit. Figure III.4 shows an example of these data for Facility Group 5.

Data on Workload Changes

As discussed in appendix I, the system estimates workload changes through its forecasting process. The differences in expected workload for Facility Group 5 are shown in figure III.5. Budget increases or decreases under the system occur in three areas—forecasted workload changes, outlier adjustments, and negotiation adjustments. Facility-specific inflation adjustments are also built into the facility budgets. The extent of these changes nationally is shown in figure III.6. Appendix IV contains facility-specific RPM budget adjustments.

Facility Budget Changes Under RPM

The actual impact of the RPM system on historical facility budgets has been small. RPM-related budget adjustments to the facilities’ fiscal year 1995 budgets generally represented less than 1 percent of the total dollars budgeted. The maximum real loss any facility had because of RPM adjustments was 1 percent. While one facility gained as much as 3.4 percent in uninflated funds through the process, the average gain was also about 1 percent.

Key VA Decisions Limiting Facility Budget Change

VA’s decisions to limit the budget changes of facilities are reflected in two key ways: the manner in which VA decided to fund workload changes and deal with shortfalls between expected resource needs and the actual funds available, and the amount of money VA decided to reallocate among facilities after comparing their workload costs.
Funding of Workload Changes Was Limited

Because the RPM system forecasts showed that facilities would need more money than was actually available, VA officials decided to address the shortfall by funding only a proportion of facilities’ expected needs. The implementation rate, however, was applied in a manner that reduced only those funds going for expected workload increases, that is, the costs for workload above and beyond each facility’s historical workload base. So, although facilities were funded at 100 percent of their past budgets, the facilities’ costs for forecasted additional patients were funded at 17.36 percent. Because VA had already reduced expected needs to account for the marginal costs associated with workload changes, in effect, a facility with a forecasted increase of one patient received a funding increase of 13 percent of its historical per patient costs. VA officials also applied the implementation rate to budgeted costs for workload decreases. This had the effect of limiting the amount of resources a facility lost through the process and of giving more money to facilities with decreasing workloads than they were projected to need. Facilities with forecasted decreases received a funding reduction of only 8.8 percent of their historical patient costs for each patient they were expected to lose. One facility that would have lost over $3 million in fiscal year 1995 because of forecasted workload decreases at the marginal rate lost only about $533,000 after the implementation rate was applied. For fiscal year 1995, all facilities received workload adjustments to reflect forecasted patient changes, with 147 facilities receiving additional funds and 20 receiving reduced funding. The gains ranged from about $700 to $1.4 million, and the losses ranged from about $80 to $533,000. For fiscal year 1994, 124 facilities received additional funds for workload increases, and 43 received funding reductions for workload decreases.
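The implementation-rate arithmetic above can be sketched as follows. The 17.36 percent rate comes from the report; the marginal-cost fractions are back-computed here from the report’s effective 13 percent and 8.8 percent figures and are therefore approximations, not VA’s published parameters, and the facility numbers are invented for illustration.

```python
# Sketch of how the fiscal year 1995 implementation rate limited
# workload-driven budget changes. Marginal-cost fractions below are
# back-computed approximations, not VA's published figures.
IMPLEMENTATION_RATE = 0.1736  # applied to forecasted workload changes

def workload_adjustment(patients_changed, historical_cost_per_patient,
                        marginal_fraction):
    """Budget change for a forecasted workload change.

    marginal_fraction: share of the historical average per patient
    cost that VA treated as the marginal cost of a workload change.
    """
    marginal_cost = historical_cost_per_patient * marginal_fraction
    return patients_changed * marginal_cost * IMPLEMENTATION_RATE

# Increase: each added patient funded at roughly 13% of historical
# per patient cost (0.1736 * ~0.75 ≈ 0.13).
gain = workload_adjustment(100, 5000, marginal_fraction=0.75)

# Decrease: each lost patient reduces funding by only ~8.8% of
# historical cost (0.1736 * ~0.51 ≈ 0.088), cushioning the loss.
loss = workload_adjustment(-100, 5000, marginal_fraction=0.51)
print(round(gain), round(loss))
```

Because the rate applies only to the forecasted change, a facility’s historical base is untouched: a 100-patient gain or loss moves the budget by a small fraction of what full funding of the change would imply.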
Gains for fiscal year 1994 ranged from about $1,200 to $1.6 million, and losses ranged from $2,300 to $676,300. See appendix IV for facility-specific RPM budget adjustments. VA’s decision to fully fund historical workload and limit workload changes favors the status quo. For example, VA could have treated historical and forecasted workload equally within its fixed budget. By applying an implementation rate to workload changes, rather than to the cost of all workload, VA limited the impact of the budget changes that facilities would have faced if funding were available for all workload. The impact of the implementation rate compared with the impact of taking a pro rata share of each facility’s total budget is shown in figure III.7.

VA Efficiency Adjustments Limited

One of VA’s original visions for the RPM system was to use it to lower unit costs or promote efficiency. Through the adjustment process, VA moves resources from the “least efficient” or high-cost facilities to the “most efficient” or low-cost facilities. Despite wide variations in the workload costs among facilities, VA has limited the reallocation of dollars to promote efficiencies among facilities to a small portion of their overall budgets. Part of the reason for this conservatism is that VA does not have a standard measure for what facilities’ unit costs should be. Furthermore, VA has not determined how other elements of workload, such as the timeliness or quality of care, should be considered. VA officials chose to limit the outlier impact on any facility to 1 percent of its historical budget and to limit the total outlier adjustments to $10 million among all facilities in fiscal year 1994 and $20 million in fiscal year 1995. In the most recent outlier process (fiscal year 1995), 35 percent of the facilities had their dollar bases adjusted, with 32 high outliers and 27 low outliers. The gains ranged from about $226,000 to $2 million, and losses ranged from about $100,000 to $1.6 million.
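The per-facility outlier cap described above can be sketched as a simple clamp. The 1 percent cap is from the report; the facility budget figures in the example are invented for illustration, and the systemwide dollar cap ($10 million in fiscal year 1994, $20 million in 1995) would be applied separately across all facilities.

```python
# Sketch of VA's per-facility outlier cap: no facility's dollar base
# changes by more than 1 percent of its historical budget. Facility
# figures below are hypothetical.
def capped_outlier_adjustment(raw_adjustment, historical_budget,
                              per_facility_cap=0.01):
    """Clamp a raw outlier gain or loss to +/- 1% of the base budget."""
    cap = historical_budget * per_facility_cap
    return max(-cap, min(cap, raw_adjustment))

# A high-cost facility slated to lose $5 million on a $200 million
# base loses only the capped $2 million (1 percent of its budget).
print(capped_outlier_adjustment(-5_000_000, 200_000_000))  # -2000000.0
```

The clamp explains why adjustments stayed small relative to budgets: however far a facility’s unit costs sat from its group’s mean, the resulting reallocation could not exceed 1 percent of its historical base.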
Appendix IV contains facility-specific RPM budget adjustments. VA officials indicated that the reallocation of funds through the outlier process is difficult in part because of the lack of a standard within VA for what unit costs should be. Without such a standard, it is unknown whether high-cost facilities’ costs exceed what costs should be or whether low-cost facilities are actually ideally efficient and should not be made inefficient by providing them with more funds. Further complicating the matter is the concern that workload is also subject to differences in quality of care. Facilities may have higher costs because of quality differences rather than simple inefficiencies, for example. VA, as part of its VISN plan, is working to agree upon performance measures that could be used in assessing VISN managers’ performance. Many measures that VA is currently capturing are being considered, such as those measuring patient satisfaction, inpatient and ambulatory quality of care, and financial management and efficiency. However, whether the measures, once agreed upon, will be used in resource allocation decisions is not specified in VA’s VISN plan.

Resource Shifts Over the Last Decade Within VA

The trend over the last decade within VA—not just the 2 years that RPM has been used to allocate resources—has been to limit the extent facilities experienced budget shifts from year to year. Our 1989 report on VA resource allocation and VA analyses of budget changes over the last decade indicate that resource shifts among facilities and regions have been a small percentage of overall budgets since 1985, when VA first implemented RAM. In August 1989, we reported that the RAM-related efficiency adjustments to facilities’ budgets generally represented less than 2 percent of the total dollars budgeted.
The adjustments were small in relation to the facilities’ budgets because VA established a maximum amount that a facility’s budget would be increased or reduced to cushion RAM’s financial impact. We also reported that as facilities incurred expenses during the year, facility directors could request additional funds from regional directors. Thus, the regions served as safety nets to help facilities cope with financial pressures. Had the caps on budget adjustments not been in place, the facilities would have experienced significantly larger gains or reductions as a result of the RAM process. The funds transferred among facilities would have totaled $153.2 million, or 223 percent more than the $47.4 million transferred. VA documentation confirms that the allocations made among VA regions based on the RAM and RPM system data were relatively small, as shown in table III.1.

Table III.1: RPM Budget Changes by Region and Facility, Fiscal Years 1994 and 1995 [table data omitted]

In considering the range of adjustments to facility budgets through RPM reallocations, we did not include the Fort Howard budget increase of 13.4 percent. The Region 1 regional director adjustment for fiscal year 1995 is stated for administrative purposes to be $3.85 million. However, VA officials told us that the facility did not receive the funds for patient care. Instead, the funds were considered a reserve for regional office use. Data represent RPM adjustments prior to inflation.
Numbers in parentheses are negative numbers.

Figure IV.2: Southern Region, Fiscal Year 1995 [figure data omitted]
Figure IV.3: Central Region, Fiscal Year 1995 [figure data omitted]
Figure IV.4: Western Region, Fiscal Year 1995 (percentage RPM increase/decrease, FY 1993-95) [figure data omitted]
Figure IV.5: Eastern Region, Fiscal Year 1994 (percentage RPM increase/decrease, FY 1992-94) [figure data omitted]
Figure IV.6: Southern Region, Fiscal Year 1994 (percentage RPM increase/decrease, FY 1992-94) [figure data omitted]

For each figure, data represent RPM adjustments prior to inflation, and numbers in parentheses are negative numbers.
Figure IV.7: Central Region, Fiscal Year 1994 [figure data omitted]
Figure IV.8: Western Region, Fiscal Year 1994 [figure data omitted]

For each figure, data represent RPM adjustments prior to inflation, and numbers in parentheses are negative numbers. Source: VA RPM data.

RPM’s Provision for Facility Differences in Veterans’ Access to Care

One of the ways that VA facilities adjust to resource limitations is by rationing care to veterans. As a result, there are differences in the provision of care to veterans among facilities. Some facilities have adequate resources to provide services to all categories of veterans, whereas others find they must curtail their services. They do so by limiting the categories of veterans served, the types of services offered, and the conditions for which veterans can receive care. When we reported on these differences in 1993, VA responded that the RPM system—under development at the time—would help overcome these differences. Specifically, VA officials indicated that to address wide variations in veterans’ access to health care systemwide, VA was designing a new resource planning and management process with several objectives, including the elimination of gaps in service to veterans systemwide. The Secretary of VA reiterated in February 1994 correspondence to the Congress that the RPM system would begin to alleviate some of the inconsistencies in veterans’ access to care noted in our report.
However, this objective has not been incorporated in the RPM model.

Availability of Care Is Uneven

The Congress established general priorities for VA to use when providing outpatient care when resources are not available to care for all veterans. VA, in turn, has delegated rationing decisions to its facilities. Each facility independently chooses when and how to ration care. Our 1993 report found that 118 centers reported rationing care and 40 reported no rationing, as shown in figure V.1 [figure, a map of facilities, omitted]. Because of differences in facility rationing practices, veterans’ access to care systemwide is uneven. We found that higher income veterans received care at many facilities, while lower income veterans were turned away at other facilities. Differences in who was served occurred even within the same facility because of rationing. Some facilities that rationed care by medical service or condition sometimes turned away lower income veterans who needed certain types of services and provided care for higher income veterans who needed other services. Complex eligibility categories complicate the determinations of priorities for care as well as the extent to which facilities are providing care to various categories of veterans. VA’s priority system considers factors such as the presence and extent of any service-connected disability, the incomes of veterans with nonservice-connected disabilities, and the type and purpose of care needed to determine which eligible veterans receive care within available resources. (An eligible veteran is any person who served on active duty in the uniformed services for the minimum time specified by law and who was discharged, released, or retired under other than dishonorable conditions.)
While VA’s systems do not allow us to confirm the extent to which the rationing we reported in 1993 still exists, available data indicate that the ability of facilities to provide care to discretionary categories of veterans still varies. VA systems record the numbers of unique patients served by facilities who have traditionally been considered “discretionary,” that is, nonservice-connected, higher-income veterans. These data show that although up to 13 percent of some facilities’ patients were from the discretionary category in fiscal year 1994, other facilities treated none.

Related GAO Products

VA Decision Support System: Top Management Leadership Critical to Success (GAO/AIMD-95-182, Sept. 29, 1995).
VA’s Medical Resource Allocation System (GAO/HEHS-95-252R, Sept. 12, 1995).
VA Health Care: Issues Affecting Eligibility Reform (GAO/T-HEHS-95-213, July 19, 1995).
VA Health Care: Challenges and Options for the Future (GAO/T-HEHS-95-147, May 9, 1995).
VA Health Care: Barriers to VA Managed Care (GAO/HEHS-95-84R, April 20, 1995).
Veteran Affairs: Accessibility of Outpatient Care at VA Medical Centers (GAO/T-HRD-93-29, July 21, 1993).
VA Health Care: Variabilities in Outpatient Care Eligibility and Rationing Decisions (GAO/HRD-93-106, July 16, 1993).
VA Health Care: Veterans’ Efforts to Obtain Outpatient Care From Alternative Sources (GAO/HRD-93-123, June 30, 1993).
VA Health Care: Resource Allocation Methodology Has Had Little Impact on Medical Centers’ Budgets (GAO/HRD-89-93, Aug. 18, 1989).
VA Health Care: Resource Allocation Methodology Should Improve VA’s Financial Management (GAO/HRD-87-123BR, Aug. 31, 1987).
Pursuant to a congressional request, GAO reviewed the Department of Veterans Affairs' (VA) resource allocation system, focusing on the: (1) extent to which VA resources are distributed equally among VA facilities; and (2) causes of unequal resource allocations among VA health care facilities. GAO found that: (1) the VA resource allocation system enables VA to identify potential inequities in resource allocations and forecast facility workload changes, but VA has made only minimal changes in facilities' funding levels; (2) there is a significant difference between comparable health care facilities' operating costs and patient workloads; (3) VA has not used its resource planning and management system (RPM) to ensure that resources are allocated to facilities within the same priority category; (4) VA excluded over $4 billion of its medical care appropriation from the RPM process during the first 2 years of RPM because it wanted to give VA facilities more time to adjust to the reallocation process and large budget changes; (5) the RPM system does not address veterans' unequal access to outpatient care; and (6) VA plans to reallocate a larger portion of its fiscal year 1996 facility budgets based on the RPM process and implement a decision support system to better compute the costs of specific services provided to each patient.
Background

Native Americans living in IHS areas have lower life expectancies than the U.S. population as a whole and face considerably higher mortality rates for some conditions. For Native Americans ages 15 to 44 living in those areas, mortality rates are more than twice those of the general population. Native Americans living in IHS areas have substantially higher rates for diseases such as diabetes. Fatal accidents, suicide, and homicide are also more common among them. Mortality rates for some leading causes of death—such as heart disease, cancer, and chronic lower respiratory diseases—are nearly the same for these Native Americans as for the general population. However, these Native Americans also have substantially lower rates of mortality for other conditions, such as Alzheimer’s disease (see fig. 1 for a summary of key differences in health status indicators between the two groups).

IHS Administration

In 2004, IHS estimated that its patient population was approximately 1.4 million Native Americans. Area offices oversee the delivery of services and provide guidance and technical support to the area’s facilities. The 12 IHS areas include all or part of 35 states (see fig. 2 for a map of the counties included in the 12 areas). Within the 12 areas, direct care services are generally delivered through IHS-funded hospitals, health centers, and health stations. As of October 2001, which is the most recent year of available data, there were 413 such facilities. These included 49 hospitals that ranged in size from 4 to 156 beds. Nineteen of these hospitals had operating rooms. There were 231 health centers and 133 health stations. These two types of facilities vary in the scope of their services and in their hours of operation. Health centers offer a range of care, including primary care services and at least some ancillary services, such as pharmacy, laboratory, and X-ray, at least 40 hours a week.
Health stations offer primary care services and are open fewer than 40 hours a week. Services not available through direct care may be purchased through contracts with outside providers. In most cases, the facility that provides a patient’s direct care services also authorizes payment for contract care services. The use of contract care services varies considerably. For example, in two areas (California and Portland) all hospital-based services are purchased through contract care. In the other 10 areas, some hospital-based services are provided at IHS-funded facilities, while others are purchased through contract care. Tribes have the option of operating their own direct care facilities and contract care programs. As of October 2001, tribes were operating 27 percent of the 49 hospitals and 70 percent of the 364 health centers and health stations. The remaining facilities were federally operated. For fiscal year 2005, approximately 50 percent of the IHS budget was allocated to tribes to deliver services.

Services Funded by IHS

IHS funds a range of health care services for Native Americans. These services can be organized into three broad categories: primary care, ancillary, and specialty services. Table 1 shows these three categories, as well as the subcategories of services (for example, laboratory and pathology services) within each. The table also provides examples of specific services, whose availability may vary among IHS-funded facilities. Primary care services constitute the first level of health care and are generally the entry point for all other services. Ancillary services can be ordered by either a primary care provider or a specialist. For example, a blood test can be ordered by a primary care provider for an initial health assessment or by an oncologist to test for recurrence of cancer. Specialty services constitute a second level of care and generally address conditions of higher acuity than those addressed by primary care.
Eligibility Requirements for Direct and Contract Care

Eligibility requirements for direct care and contract care differ. In general, all persons of Native American descent who belong to the Native American community are eligible for direct care at IHS-funded facilities. To be eligible for contract care, a Native American generally must also reside within a federally established contract care area and either (1) reside on a reservation within the area or (2) belong to or maintain close economic and social ties with a tribe based on such a reservation. In most cases, a contract care area consists of the county or counties in which a reservation is located, as well as any counties it borders. Contract care pays for services only when patients are unable to obtain such services through other sources, including Medicare, Medicaid, or private insurance (fig. 3 provides an overview of the eligibility requirements for contract care). The services for which IHS provides contract care must also meet medical priority criteria. Each IHS area office is required to establish medical priorities consistent with guidance published by IHS headquarters (see table 2 for an overview of the guidance). Federally operated facilities must abide by the priorities set by their respective area offices, assign a priority level to each service requested, and fund services in order of priority, as funds permit. Although federally operated facilities are required to pay for all priority I services (emergent/acutely urgent care), facilities may otherwise pay for all or only some of the services in the lowest priority level they fund. Tribally operated facilities have discretion in setting medical priorities. While these facilities must have a priority setting system, they may develop a system that differs from the guidance established by IHS.
In addition to meeting eligibility and medical priority requirements, Native Americans must meet certain procedural requirements for services to be paid for through contract care. In particular, individuals who obtain emergency services generally must notify IHS within 72 hours of obtaining the services. IHS headquarters data on denials of payment for contract care are incomplete. However, in fiscal year 2003, patients’ or providers’ failure to comply with two procedural requirements (72-hour notification of emergency services and prior approval of nonemergency services) accounted for at least 16 percent of all reported denials of payment for contract care nationwide.

IHS Funding

The $2.6 billion that the Congress appropriated for fiscal year 2005 for IHS included funds for direct care, as well as $505 million for contract care services. From the $2.6 billion, IHS also funds public health nursing, scholarships to health professionals, and other functions. In addition to IHS’s federal appropriation, facilities are reimbursed for the services they provide on site by private health insurance and federal health programs, such as Medicare and Medicaid. IHS-funded facilities are allowed to retain reimbursements from private and federal health programs, without an offsetting reduction in their IHS funding, in order to fund health services. In fiscal year 2004, IHS-funded facilities obtained approximately $628 million in reimbursements, with 92 percent collected from Medicare and Medicaid and 8 percent from private insurance.

The Availability of Primary Care Depended on Native Americans’ Ability to Access Services at IHS-Funded Facilities

The availability of primary care—medical, dental, and vision—services largely depended on the extent to which Native Americans were able to gain access to the services offered at IHS-funded facilities.
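As a rough illustration, the contract care payment rules described above (eligibility, payer of last resort, medical priority, and the 72-hour emergency notification rule) can be sketched as a sequence of checks. All field names and the data layout below are hypothetical, and the actual determination involves additional regulatory detail not modeled here.

```python
# Hypothetical sketch of the contract care payment checks; field
# names are invented for illustration, not an IHS data model.
def contract_care_payable(patient, service, lowest_funded_priority):
    # Basic eligibility: descent plus residence and tribal-tie tests.
    if not patient["native_american_descent"]:
        return False
    if not patient["resides_in_contract_care_area"]:
        return False
    if not (patient["resides_on_reservation"]
            or patient["close_ties_to_area_tribe"]):
        return False
    # Payer of last resort: Medicare, Medicaid, or private insurance
    # must be unavailable for the service.
    if service["covered_by_other_source"]:
        return False
    # Medical priority: priority I is highest; the area funds levels
    # down to lowest_funded_priority as funds permit.
    if service["priority_level"] > lowest_funded_priority:
        return False
    # Procedural rule: emergencies require notice within 72 hours.
    if service["emergency"] and service["hours_to_notify"] > 72:
        return False
    return True

patient = {
    "native_american_descent": True,
    "resides_in_contract_care_area": True,
    "resides_on_reservation": True,
    "close_ties_to_area_tribe": False,
}
service = {
    "covered_by_other_source": False,
    "priority_level": 1,
    "emergency": True,
    "hours_to_notify": 48,
}
print(contract_care_payable(patient, service, lowest_funded_priority=2))  # True
```

Note that tribally operated programs may use a priority system that differs from the headquarters guidance, so the priority check above applies as written only to the federally operated case.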
The 13 facilities we visited generally offered primary care—medical, dental, and vision—services; however, Native Americans’ access to these services was not always assured. Although primary care services were offered, facility and tribal officials identified several factors that affected access to these services, such as wait times between scheduling an appointment and receiving services, travel distances to facilities, and a lack of transportation.

Facilities Generally Offered Primary Care Services

All 13 facilities we visited offered medical services, such as initial physical examinations for pregnant women and well-baby checkups, while 12 facilities offered dental services, such as oral examinations, cleanings, and sealants. Twelve of 13 facilities offered vision care. Four facilities offered certain primary care services by making arrangements for patients to obtain these services at other locations, including other IHS-funded facilities. The arrangements facilities made for care differed, depending on their relationships with other IHS-funded facilities, the nature of the service, and proximity to other facilities. For example, one clinic routinely referred patients needing eye examinations to an IHS-funded hospital located about 50 miles away with which it had an ongoing relationship. Another facility provided dental services on site to children, pregnant women, and adults with diabetes, while referring all others seeking dental care to other IHS-funded facilities. For vision services, this facility directed patients to a different facility that offered eye examinations for children and adults. Another facility purchased primary care services from private providers for Native Americans who lived 75 miles from that facility.
At Some Facilities, Access to Primary Care Was Not Assured due to Lengthy Waits for Certain Services and Limited Transportation

At over half of the facilities we visited, facility officials indicated that patients were able to obtain certain primary care services—such as physical examinations and well-baby checkups—often within 3 weeks of calling for an appointment. However, the waiting times between calling for an appointment and receiving services were considerably longer for other primary care services. For example, four facilities reported that patients routinely had to wait more than a month for some types of primary care, which was in excess of standards or goals identified in other federally operated health care service delivery systems. The wait times at the four facilities ranged from 2 to 6 months, with the services cited as requiring lengthy waits being women’s health care, general physicals, and dental care. In some cases, facility officials reported that the demand for services exceeded available appointment slots. For example, facility or tribal officials at 7 of the 13 facilities cited a need to increase dental services in order to keep up with their populations’ demand. Additionally, three facilities indicated that medical care slots made available for same-day appointments were usually filled within 45 minutes of the phone lines being opened. At one of these facilities, 20 to 30 slots were usually filled within 15 to 30 minutes. An official at this facility estimated that it was turning away 25 to 30 patients a day. Officials at 6 of the 13 facilities we visited cited a need to increase the amount of primary care services to meet demand in the service population. Some tribal officials remarked on the demoralizing effect on patients who had difficulty getting appointments.
For example, one tribal official noted that rather than remain at the facility all day to see a provider, patients would wait to seek care until their condition became an emergency that required a higher level of treatment. Officials at another facility reported that 21 percent of their maternity patients had three or fewer prenatal care visits, well below the recommended number. Transportation challenges also affected the extent to which access to care was assured for some Native Americans. Of the 10 facilities that provided information on their patient coverage areas—the greatest distance patients traveled to the facility to obtain services—8 reported that some of their patients traveled 60 miles or more one way for care (see table 3). Of these 8 facilities, 3 reported over 90 miles of travel one way to obtain care—a distance in excess of what IHS considers reasonable for primary care services. Two facilities reported having made other arrangements for patients to obtain primary care when travel distances to facilities were particularly long. One facility used contract care funds to pay providers to deliver primary care services to patients who were 75 miles from the facility until funding constraints eliminated this option. Similarly, another facility paid to deliver primary care services to patients more than 25 miles from the facility until funding constraints made it necessary to restrict this option to children and elders. Although long travel distances to reach health care facilities create access problems for rural populations in general, for some Native Americans, a lack of transportation compounded the difficulty of obtaining care. Officials at 9 of the 13 facilities reported that transportation to reach services was a challenge for certain tribal members, due in part to high rates of unemployment and the consequent inability of many members to afford a vehicle or pay for other transportation. 
While facility officials noted that some transportation programs were offered to tribal members, these programs did not reach all in need. For example, transportation services in two coverage areas were limited to groups such as the elderly, disabled, individuals experiencing medical emergencies, or members of a particular tribe. Certain Ancillary and Specialty Services Were Generally Offered, but Gaps in Other Services Were Common Certain ancillary and specialty services were not always available to Native Americans, primarily due to gaps in services offered at nearly all of the 13 facilities. We found that certain ancillary and specialty services were offered through direct or contract care by 11 or more of the 13 facilities we visited. However, although outpatient mental health care was offered by all 13 facilities, some reported that demand for services outstripped their capacity. We also identified gaps in certain ancillary and specialty services at the 13 facilities, including services to diagnose and treat conditions that were neither emergent nor acutely urgent. Most facilities that did not offer the services on site lacked the funds to pay for them through contract care. Certain Ancillary and Specialty Services Were Offered, but Access to Some of These Services Was Not Assured Certain ancillary services—laboratory, some diagnostic imaging and testing, pharmacy, and emergency medical transportation—were offered through direct or contract care by 11 or more of the 13 facilities we visited. We also identified four specialty services that were offered by almost all of the facilities (see table 4). In most cases, services were offered on site at the facilities rather than through contract care. For example, 11 or more of the 13 facilities we visited had a laboratory, pharmacy, X-ray machine, electrocardiograph, and mental health counselors on site. 
Although outpatient mental health care services were offered by all facilities, four facilities reported that demand for mental health care outstripped their capacity. For example, one facility cited a need for two to three times the amount of psychiatric care it was able to offer. An official at another facility commented that the facility was able to provide only crisis-oriented care. Another facility reported that it expected to cut mental health services by 20 percent in fiscal year 2005, as reserves that had previously supported these services had been depleted. Gaps in Ancillary and Specialty Services Were Common for Diagnosis and Treatment of Nonurgent Conditions We found that gaps in ancillary and specialty services were common, occurring at 12 of the 13 facilities. The most frequent gaps were for services aimed at the diagnosis and treatment of medical conditions that caused discomfort, pain, or some degree of disability but that were not emergent or acutely urgent (see table 5). In some cases, services were offered to certain groups but not others. For example, four facilities offered eyeglasses only to children or older adults. In other cases, services were significantly delayed; for example, one facility said that adults could wait as long as 120 days to get approval for eyeglasses. We found significant gaps in both dental and inpatient behavioral health care services offered at IHS-funded facilities or through contract care. Of the five specialty dental services we inquired about, three (cast inlays or crowns, dentures, and orthodontics) were entirely unavailable at most of the facilities. Some facilities offered these services only to certain groups. For example, one facility offered cast inlays and crowns only to children. Inpatient behavioral health care services were either not offered or limited. Six facilities did not offer inpatient mental health care treatment to all patients. 
Four of these six facilities did not offer inpatient substance abuse treatment to all patients. Moreover, three of the nine facilities that did offer inpatient substance abuse treatment offered only partial services—rehabilitation but not detoxification. Facilities Lacked Staff, Equipment, and Contract Care Funds to Offer Certain Ancillary and Specialty Services Most of the facilities we visited lacked the equipment necessary for certain ancillary services and had few medical specialists on site. Most lacked such diagnostic equipment as mammography machines, CT scanners, MRI scanners, and echocardiographs. Ten facilities, including one hospital, reported having three or fewer types of specialists on site. Most facilities did not regularly refer patients to other IHS-funded facilities for care they could not offer on site. Ancillary and specialty services that were unavailable on site or at other IHS-funded facilities could be obtained only through contract care, which was rationed by 12 of the 13 facilities on the basis of relative medical need. Five facilities reported that they were unable to pay for any services that were not deemed emergent or acutely urgent (services categorized as priority level I services in IHS headquarters’ guidance), and two others paid for only a few additional services, such as cancer screenings. The remaining six facilities paid for varying levels of care beyond the emergent or acutely urgent level, but only one of the six was able to pay for all of the care we inquired about (see app. II). Officials noted that in some cases gaps in services resulted in diagnosis or treatment delays that exacerbated the severity of a patient’s condition and created a need for more intensive treatment. For example, tribal health board members at one facility described the case of an elderly woman who had complained of back pain and was diagnosed with cancer only when one of her legs broke. 
Tribal representatives at another facility cited the example of a young man whose lung condition was only properly diagnosed when, after months of treatment for pneumonia, he went to an emergency room and was found to have a tumor that killed him 3 weeks later. Officials also noted that as a result of gaps in such specialty services as orthopedics and behavioral health care, some Native Americans were living with painful and debilitating conditions. Service gaps not only varied among facilities, but also varied over time for particular facilities, depending on the demand for contract care. Facility officials said that demand for contract care could affect where they drew the line between services that met medical priority criteria and those that did not. For example, one facility reported that the definition of emergent and acutely urgent services narrowed over the course of the year as contract care funds were depleted. At facilities that reviewed requests for contract care or budgeted for this care on a quarterly, monthly, or weekly basis (as most did), approval of a particular service depended in part on its priority relative to the others that came up for review at the same time. In some cases, patients faced challenges accessing the care that was offered through contract care or at other IHS-funded facilities. At seven facilities, patients had to travel more than 60 miles from the facility to obtain some kinds of specialty care—for example, gastroenterology, cardiology, and high-risk obstetrics—that were available only in larger cities. Access also depended on non-IHS providers’ willingness to provide contract care. Few of the IHS-funded facilities we visited mentioned difficulties arranging contract care. 
However, 10 of the 15 contract care providers we interviewed, which included health systems, hospitals, and physician groups, reported denials or delays of payment by IHS, and some had terminated or were considering terminating their relationship with IHS as a result. One obstetrician who was owed about $60,000 stopped seeing IHS patients until most of his outstanding bills were paid. Two providers were considering terminating their relationship with IHS-funded facilities. Two other providers reported that physicians in their system or in the area had closed their practices to IHS patients. In some cases, the withdrawal of a single provider may affect patients’ access to care. For example, staff of a physician specialty group that had threatened to stop serving IHS patients said that if it had done so, these patients would have had to travel an additional 75 miles for care, as this group was the only provider of its type in the vicinity that was willing to serve IHS patients. Factors Associated with Variations in Service Availability Included Facility Structure, Location, and Funding From our visits to facilities and interviews with IHS area officials, we found that differences in the availability of services among facilities were associated primarily with three distinct factors: how a facility was structured, where it was located, and the amount of reimbursements and tribal contributions it received. In terms of facility structure, we found differences in the amount and range of services available on site, depending on the type of facility (whether it was a hospital, health center, or health station), its age, and whether it was tribally or federally operated. Facilities located in remote areas faced challenges in recruiting and retaining staff, which reduced the services these facilities were able to offer. 
Those facilities that received greater amounts of funding from reimbursements or tribes were able to expand service availability by, for example, hiring additional staff. Facility Structure Was Associated with Variations in Service Availability From our visits to facilities, we found that the broader array of on-site services at hospitals compared with health centers increased the overall availability of services (see fig. 4 for the services offered at the hospitals and health centers). While the average number of primary care services offered on site was the same at the hospitals and health centers, the average number of ancillary and specialty services offered on site differed. The hospitals generally offered more types of ancillary services on site— such as mammography—than did the health centers. Three hospitals also offered some specialty services on site—such as some obstetric services— that were not offered on site at the health centers we visited. IHS officials noted that its hospitals are located where service populations are large enough to make it professionally and financially possible to offer more services. Services at hospitals were also offered for more hours per week than were services at other facilities, which resulted in differences in the availability of urgent care. The hospitals had emergency rooms open 24 hours a day and 7 days a week and were available for urgent care services. In contrast, the health centers were generally open from 8:00 a.m. to no later than 5:30 p.m., Monday through Friday. When the health centers were closed, urgent care was generally available at non-IHS facilities. Not all of the health centers paid for nonemergency services provided by these facilities. We found that in general the five newer facilities—those with buildings constructed after 1990—had more space to offer additional types of services to more patients than did the eight older facilities. 
Officials from the facilities we visited reported that the age of their building was linked to building design, space, and resources, which affected both the range of services facilities offered as well as Native Americans’ access to these services. For example, officials at two of the newer health centers reported that they had more examination rooms than they had had in their old buildings, which allowed one facility to add new specialty providers, see additional patients, and reduce wait times. According to an IHS headquarters official, prior to 1988, IHS-funded facilities were constructed with one examination room per primary care provider. From 1988 to early 2005, the standard number of examination rooms per provider for new construction was two, and as of April 2005, the standard number was two and one half. In addition to the benefits of an improved design and more space, area officials explained that when new buildings are constructed with IHS funds, those facilities generally receive increased funds for staff and equipment, which allows the facilities to provide additional types of services or serve more patients. In addition, the range of services facilities offered depended in part on whether the facilities we visited were tribally or federally operated. Because tribally operated facilities are not required to follow the medical priorities established by IHS for contract care, tribally operated facilities were able to make different judgments about the allocation of the funding. For example, all three of the tribally operated health centers offered eyeglasses. In contrast, only one of the five federal health centers offered eyeglasses—and only to children. Another tribal facility offered some nonemergency ancillary services, such as MRI scans for patients with nonemergent conditions like seizures, while the federal facilities generally offered those services only to patients with emergent or acutely urgent conditions. 
One tribal facility used its flexibility in setting medical priorities to deny certain care that federal facilities are required to offer. Specifically, this facility, which had an emergency room, did not pay for any emergency room services at outside facilities. In contrast, federal facilities are required to pay for emergency room services for patients who require emergency care at a hospital that is not funded by IHS. According to facility and area officials, flexibility in setting medical priorities for contract care helped tribal facilities, especially those with smaller populations, manage available funds. One tribal hospital we visited reported that if the facility were required to offer emergency services through contract care, one catastrophic case could eliminate its entire contract care budget. According to officials, the facility had accrued $3.5 million in unpaid contract care bills when under federal operation. When the tribe took over operations in 1994, it paid portions of this debt for 3 years. The tribe revised its medical priority system in part by restricting emergency care to what is available at the tribally operated facility and expanding coverage of contract care referrals for diagnostic services. In the California area, where all of the facilities are tribally operated and there are no IHS-funded hospitals, contract care budgets for small tribes were sometimes less than $40,000. Area officials reported that facilities with budgets of that size may not guarantee that emergent and acutely urgent care, such as obstetrical deliveries, would be offered. Location Affected the Services Facilities Could Offer Of the 13 facilities we visited, 6 facilities were located in frontier counties and 7 in less remote, nonfrontier counties. Officials from 5 of the 6 facilities in frontier counties cited challenges in recruiting and retaining health care professionals, which affected the services these facilities could offer. 
Officials from 3 of these facilities reported that a shortage of housing for health care workers on the reservations and in nearby communities contributed to the problem. Area officials added that facilities in isolated areas also lacked educational and recreational opportunities for employees and their families. Facility officials reported vacancies in positions such as pharmacist, dentist, dental assistant, and X-ray and laboratory technician. Some of these positions remained vacant for several years. For example, one facility reported that it had taken 8 years to fill a dentist position that became vacant again in December 2004. Facilities located in remote areas also more frequently reported high transportation costs, particularly for emergency medical services, which decreased contract care funds for other services. For example, lacking the needed care on site, three of the six facilities located in remote counties reported having to transport patients by helicopter or airplane to other facilities. Officials at one of those facilities reported paying for 17 to 21 air transports a month at a cost of $6,000 to $7,000 each, which was from 17 percent to 24 percent of the facility’s fiscal year 2004 contract care budget. Another facility also told us that ambulance transport was a significant contract care cost. Service Availability Was Associated with the Amount of Reimbursements and Tribal Contributions Facilities Received At all of the 13 facilities we visited, reimbursements from private health insurance and federal health insurance programs, such as Medicare and Medicaid, were an important source of funding for the services each facility offered. We found that the amount of reimbursements that facilities obtained varied. For the 12 facilities that provided budget information for fiscal year 2004, reimbursements constituted from 7 percent to 58 percent of direct medical care budgets, with the average being 39 percent (fig. 
5 shows the proportion of facilities’ direct medical care budgets that came from reimbursements). Facilities with higher reimbursements had additional funds with which they could hire staff, purchase equipment and supplies, and renovate their buildings. For example, a hospital that collected $14.7 million in reimbursements, representing 51 percent of its direct medical care budget, funded 31 percent of its clinical providers and other staff (111 of 361 staff members) with those funds. Facility officials reported that certain circumstances outside of their control affected their ability to obtain reimbursements. Specifically, these circumstances included changes in state Medicaid programs and the nature of the insurance offered by tribes. Changes in state Medicaid programs. Medicaid was the largest source of reimbursements in 10 of the 12 facilities and on average accounted for 65 percent of total reimbursements. While the federal government finances 100 percent of Medicaid services provided to Native Americans at IHS-funded facilities, eligibility and benefits vary among states. Facility officials provided examples of eligibility, benefit, and administrative requirement changes that states have made in their Medicaid programs that have affected facilities’ ability to obtain reimbursements. For example, one state’s Medicaid program used to confer retroactive eligibility for a 3-month period; thus any service provided to a Medicaid- eligible person in the 3 months prior to their enrollment would be paid for by the Medicaid program. As of April 2003, however, the program has reduced retroactive eligibility to the beginning of the month in which eligibility was determined. Nature of insurance offered by tribes. The nature of the insurance offered by different tribes affected the amount of reimbursements available to facilities. For example, four federally operated facilities provided services to tribes with self-insured health plans. 
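The budget figures cited above lend themselves to a quick arithmetic check. The following minimal sketch uses only numbers stated in the report; the implied direct medical care budget total is a derived estimate, not a figure the report itself states.

```python
# Rough sanity checks on two budget figures cited in the report.
# All inputs come from the report text; the "implied" budget total is
# a derived estimate, not a figure the report itself states.

# One hospital collected $14.7 million in reimbursements, said to be
# 51 percent of its direct medical care budget, and funded 111 of its
# 361 staff members with those funds.
reimbursements = 14.7e6
share_of_budget = 0.51
implied_budget = reimbursements / share_of_budget          # roughly $28.8 million
staff_share = 111 / 361                                    # roughly 31 percent

print(f"Implied direct medical care budget: ${implied_budget / 1e6:.1f} million")
print(f"Staff funded by reimbursements: {staff_share:.0%}")

# One remote facility paid for 17 to 21 air transports a month at
# $6,000 to $7,000 each, reported as 17 to 24 percent of its fiscal
# year 2004 contract care budget.
monthly_low, monthly_high = 17 * 6_000, 21 * 7_000
print(f"Monthly air transport cost: ${monthly_low:,} to ${monthly_high:,}")
```

Both checks are internally consistent with the percentages reported: 111 of 361 staff is about 31 percent, and $102,000 to $147,000 a month is a plausible 17 to 24 percent of a multimillion-dollar annual contract care budget.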
Because federally operated IHS-funded facilities are prohibited by law from billing for services covered by self-insured plans offered by tribes, their reimbursements from private health insurance were limited. For example, private health insurance comprised less than 14 percent of total reimbursements for these four facilities. Three other facilities (two tribally operated and one federally operated) that were able to bill tribal health insurance reported collecting approximately 30 percent of total reimbursements from private health insurance. Officials at one federally operated facility also reported that reimbursements were lower when tribal employees chose not to participate in tribal health plans and instead relied entirely on IHS-funded care. In addition to reimbursements, contributions from tribes were a key source of funding for services, as 8 of the 13 facilities we visited reported obtaining tribal contributions. At 6 facilities, tribes supplemented care by providing funds for contract care, pharmaceuticals, and other operating costs. Other facilities benefited from onetime contributions. For example, two tribes used their own funds or obtained grants to build new facilities with additional examination and treatment space that allowed the facilities to offer more services. In addition to direct contributions of funds, some tribes obtained other funds to supplement IHS resources for services such as substance abuse treatment. Officials from 3 of the 8 federally operated facilities reported that tribes did not provide additional funding for services. Facilities Used a Variety of Strategies to Increase the Availability of Services Facilities reported having implemented at least one of six strategies to increase the availability of services funded by IHS. 
The strategies most commonly used by the 13 facilities we visited included bringing specialists on site to deliver services, improving efforts to obtain reimbursements, and implementing prevention and wellness programs (see table 6 for the strategies and the number of facilities that reported using them). Facilities implemented these strategies in different ways. For example, to improve efforts to obtain reimbursements, four facilities had staff available to help patients apply for eligibility or reimbursement from other programs for which they were eligible. Others negotiated with state Medicaid offices in order to be able to bill for services (see table 7 for a description of how facilities implemented the six strategies). Some of the strategies were not available to, or effective for, every facility. For example, in one area that we visited, the area officials reported that facilities were not able to use contract care funds to bring in specialists unless they could provide assurances that they would be able to pay for all emergent and acutely urgent care with remaining funds. One facility stopped using contract care funds to bring in specialists because of that policy. The effectiveness of another strategy was limited by the willingness of outside providers to negotiate contracts with the facility. For example, four facilities reported that while hospitals generally agreed to offer discounted rates for contract care, physicians were not always willing to do so. Officials from two areas that we did not visit also reported that location had an impact on the effectiveness of some strategies. For some facilities in one of those areas, especially those in urban areas, it was difficult to retain billing staff needed to obtain reimbursements, because they could not match the private sector pay scale. Agency Comments and Our Evaluation We provided a draft of this report for comment to the Director of the Indian Health Service. We received written comments from IHS. 
IHS substantially agreed with the findings and conclusions of our report, but did offer comments regarding examples used in our report, as well as comments on terminology and other technical issues. The full text of IHS’s comments is reprinted in appendix III. IHS questioned certain examples supporting our findings—one about the percentage of patients at one facility that went to the emergency room for delivery without receiving any prenatal care and two other examples about the effects of gaps in services. IHS recommended eliminating those examples if they could not be further substantiated. We reviewed the information supporting the examples. With regard to the level of prenatal care, officials provided new information, which we incorporated into the report. With regard to gaps in services, we determined that the examples provided by tribal officials were consistent with the information about service availability provided by officials at the facilities in question. IHS also provided us with comments on terminology and other technical issues. With regard to terminology, IHS commented on our use of “Native Americans,” “contract care,” and “contract care area,” and requested that these terms be replaced with abbreviations or terms used by IHS. We did not alter our use of terms, but did include footnotes indicating IHS’s terminology. IHS’s technical comments related to funding for new IHS facilities, the effect of income demographics on Medicaid reimbursements, cardiovascular disease death rates for Native Americans, contract care priorities, and differences between IHS hospitals and health centers were incorporated as appropriate. In some cases, we did not make the changes IHS suggested because doing so would result in technical inaccuracies. 
For example, we did not add “cardiovascular disease” to figure 1 as suggested by IHS because the figure highlights conditions for which mortality rates differ between Native Americans and the general population—and cardiovascular disease mortality rates are virtually the same for both populations. As agreed with your offices, we plan no further distribution of this report until 30 days from its date, unless you publicly announce its contents earlier. At that time, we will send copies of this report to the Director of the Indian Health Service. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (312) 220-7600 or aronovitzl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Appendix I: GAO Methodology for Selecting IHS Areas and Facilities Visited We used a two-tiered approach to selecting facilities for site visits, which included selecting 3 of the 12 Indian Health Service (IHS) areas and then selecting 13 facilities within those 3 areas. In the first tier, we selected 3 of the 12 IHS areas to represent a mix in the size of the population served in the areas, geographic location, health status of Native Americans in the areas, the entities operating the facilities (tribal or federal), and the contract care dollars as a percentage of total clinical care dollars (table 8 compares the selected areas to the range across all 12 areas). In the second tier, we selected facilities within the three areas. 
Facilities were selected to represent a mix in terms of the type of facility (for example, hospital or health center), whether it was tribally or federally operated, the size of its patient population, and whether the facility was located in a frontier or nonfrontier county (see table 9). The selected sites represent a mix of facility characteristics and populations served both within and across the three areas.

Appendix II: GAO Methodology for Selecting Services
We conducted semistructured interviews with each of the 13 facilities visited to learn more about the availability of selected services. We selected these services using a two-step process—first, selecting a set of health conditions reported to be prevalent among patients served by the 13 facilities, and second, identifying diagnostic and treatment services that are generally part of the standard course of treatment for each condition. To identify these services, we reviewed clinical standards published by IHS, medical associations, and other public entities, such as the Department of Health and Human Services’ Public Health Service. Table 10 shows the 77 services selected for additional data collection.

Appendix III: Comments from the Indian Health Service

Appendix IV: GAO Contact and Staff Acknowledgments
GAO Contact
Acknowledgments
In addition to the contact named above, Carolyn Yocom, Assistant Director; Susan Barnidge; Nancy Fasciano; and JoAnn Martinez-Shriver made key contributions to this report.
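The two-tiered, mix-based site selection described in the appendixes above resembles purposive sampling over facility attributes. A minimal sketch follows; the facility records and the greedy coverage rule are illustrative assumptions for exposition, not GAO's actual selection procedure or data.

```python
# Illustrative sketch of mix-based purposive selection: greedily pick
# facilities until every value of every attribute of interest is
# represented at least once. Facility records below are invented.

facilities = [
    {"name": "A", "type": "hospital", "operator": "tribal", "county": "frontier"},
    {"name": "B", "type": "health center", "operator": "federal", "county": "nonfrontier"},
    {"name": "C", "type": "health center", "operator": "tribal", "county": "frontier"},
    {"name": "D", "type": "hospital", "operator": "federal", "county": "nonfrontier"},
]

def select_mix(facilities, attributes):
    """Select facilities until every (attribute, value) pair is covered."""
    needed = {(a, f[a]) for f in facilities for a in attributes}
    selected = []
    for f in facilities:
        covers = {(a, f[a]) for a in attributes} & needed
        if covers:
            selected.append(f["name"])
            needed -= covers
        if not needed:
            break
    return selected

print(select_mix(facilities, ["type", "operator", "county"]))  # ['A', 'B']
```

With the invented records above, facilities A and B together cover both facility types, both operators, and both county classifications, so the greedy rule stops after two picks. A real purposive design would also weigh population size and other criteria, as table 9 indicates.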
The Indian Health Service (IHS), located within the Department of Health and Human Services, is responsible for arranging health care services for Native Americans (American Indians and Alaska Natives). IHS services include primary care (medical, dental, and vision); ancillary services, such as laboratory and pharmacy; and specialty care, including services provided by physician specialists. IHS provides some services through direct care at hospitals, health centers, and health stations, which may be federally or tribally operated. When services are not available--that is, both offered and accessible--on site, IHS offers them, as funds permit, through contract care furnished by outside providers. Concerns persist that some Native Americans are experiencing gaps in necessary health care. GAO was asked to examine the availability of (1) primary care services and (2) ancillary and specialty services for Native Americans. Additionally, GAO examined the underlying factors associated with variations in the availability of services and strategies used by facilities to increase service availability. GAO conducted site visits to 13 facilities and interviewed IHS officials from all 12 IHS areas, which cover all or part of 35 states. GAO received written comments from IHS. IHS substantially agreed with the findings and conclusions of this report. The availability of primary care--medical, dental, and vision--services was largely dependent on the extent to which Native Americans living in IHS areas were able to gain access to the services offered at IHS-funded facilities. All of the 13 facilities GAO visited offered medical services, such as physical examinations, while 12 facilities offered dental and 12 facilities offered vision services. 
However, access to these services was not always assured because of factors such as the amount of waiting time between the call to make an appointment and the delivery of a service, travel distances to facilities, or a lack of transportation. Certain ancillary and specialty services were not always available to the Native Americans served by the 13 facilities, primarily because of gaps in the services offered by the facilities. While some ancillary and specialty services were offered to all patients, GAO also identified gaps in other services, including services to diagnose and treat nonurgent conditions--such as arthritis and knee injuries--specialty dental care, and behavioral health care. Most facilities lacked the staff or equipment to offer these services on site and thus had to purchase them with contract care funds, which were rationed on the basis of relative medical need at 12 of the 13 facilities. Five of the 12 facilities were unable to pay for any contract care services that were not deemed emergent or acutely urgent. GAO identified three distinct factors that were associated with variations in the availability of services, namely a facility's structure, location, and funding from sources other than IHS. A facility's structure was associated with the overall amount and range of services available. For example, hospitals offered a broader array of services on site for more hours per week compared with other facilities. Location was a factor in recruiting and retaining staff for geographically remote facilities and in the cost of certain types of services, most notably transportation. Finally, a facility's funding from two types of sources--reimbursements from private and federal health insurance programs for care offered on site and any tribal contributions made--affected the extent to which the facility was able to offer services. The amount of these funds varied across facilities. 
Facilities reported using at least one of six strategies to increase the availability of services. These strategies included bringing specialists on site and negotiating discounts for contract care. According to officials, the strategies were not available to, or effective for, every facility. For example, four facilities reported that while hospitals generally offered discounted rates for contract care, physicians were not always willing to do so.
OJJDP Established the Girls Study Group to Assess the Effectiveness of Girls’ Delinquency Programs With an overall goal of developing research that communities need to make sound decisions about how best to prevent and reduce girls’ delinquency, OJJDP established the Girls Study Group (Study Group) in 2004 under a $2.6 million multiyear cooperative agreement with a research institute. OJJDP’s objectives for the group, among others, included identifying effective or promising programs, program elements, and implementation principles (i.e., guidelines for developing programs). Objectives also included developing program models to help inform communities of what works in preventing or reducing girls’ delinquency, identifying gaps in girls’ delinquency research and developing recommendations for future research, and disseminating findings to the girls’ delinquency field about effective or promising programs. To meet OJJDP’s objectives, among other activities, the Study Group identified studies of delinquency programs that specifically targeted girls by reviewing over 1,000 documents in relevant research areas. These included criminological and feminist explanations for girls’ delinquency, patterns of delinquency, and the justice system’s response to girls’ delinquency. As a result, the group identified 61 programs that specifically targeted preventing or responding to girls’ delinquency. Then, using a set of criteria developed by DOJ’s Office of Justice Programs (OJP) called What Works, the group assessed the methodological quality of the studies of the programs that had been evaluated to determine whether the studies provided credible evidence that the programs were effective at preventing or responding to girls’ delinquency. The results of the group’s assessment are discussed in the following sections. 
OJJDP Efforts to Assess Program Effectiveness Were Consistent with Social Science Practices and Standards, and OJJDP Has Taken Action to Enhance Communication about the Study Group with External Stakeholders OJJDP’s effort to assess girls’ delinquency programs through the use of a study group and the group’s methods for assessing studies were consistent with generally accepted social science research practices and standards. In addition, OJJDP’s efforts to involve practitioners in Study Group activities and disseminate findings were also consistent with the internal control standard to communicate with external stakeholders, such as practitioners operating programs. According to OJJDP research and program officials, they formed the Study Group rather than funding individual studies of programs because study groups provide a cost-effective method of gaining an overview of the available research in an issue area. As part of its work, the group collected, reviewed, and analyzed the methodological quality of research on girls’ delinquency programs. The use of such a group, including its review, is an acceptable approach for systematically identifying and reviewing research conducted in a field of study. This review helped consolidate the research and provide information to OJJDP for determining evaluation priorities. Further, we reviewed the criteria the group used to assess the studies and found that they adhere to generally accepted social science standards for evaluation research. We also generally concurred with the group’s assessments of the programs based on these criteria. According to the group’s former principal investigator, the Study Group decided to use OJP’s What Works criteria to ensure that its assessment of program effectiveness would be based on highly rigorous evaluation standards, thus eliminating the potential that a program that may do harm would be endorsed by the group. 
However, 8 of the 18 experts we interviewed said that the criteria created an unrealistically high standard, which caused the group to overlook potentially promising programs. OJJDP officials stated that despite such concerns, they approved the group’s use of the criteria because of the methodological rigor of the framework and their goal for the group to identify effective programs. In accordance with the internal control standard to communicate with external stakeholders, OJJDP sought to ensure a range of stakeholder perspectives related to girls’ delinquency by requiring that Study Group members possess knowledge and experience with girls’ delinquency and demonstrate expertise in relevant social science disciplines. The initial Study Group, which was convened by the research institute and approved by OJJDP, included 12 academic researchers and 1 practitioner, someone with experience implementing girls’ delinquency programs. However, 11 of the 18 experts we interviewed stated that this composition was imbalanced in favor of academic researchers. In addition, 6 of the 11 said that the composition led the group to focus its efforts on researching theories of girls’ delinquency rather than gathering and disseminating actionable information for practitioners. According to OJJDP research and program officials, they acted to address this issue by adding a second practitioner as a member and involving two other practitioners in Study Group activities. OJJDP officials stated that they plan to more fully involve practitioners from the beginning when they organize study groups in the future and to include practitioners in the remaining activities of the Study Group, such as presenting successful girls’ delinquency program practices at a national conference. 
Also, in accordance with the internal control standard, OJJDP and the Study Group have disseminated findings to the research community, practitioners in the girls’ delinquency field, and the public through conference presentations, Web site postings, and published bulletins. The group plans to issue a final report on all of its activities by spring 2010. The Study Group Found No Evidence of Effective Girls’ Delinquency Programs; in Response OJJDP Plans to Assist Programs in Preparing for Evaluations but Could Strengthen Its Plans for Supporting Such Evaluations The Study Group found that few girls’ delinquency programs had been studied and that the available studies lacked conclusive evidence of effective programs; as a result, OJJDP plans to provide technical assistance to help programs be better prepared for evaluations of their effectiveness. However, OJJDP could better address its girls’ delinquency goals by more fully developing plans for supporting such evaluations. In its review, the Study Group found that the majority of the girls’ delinquency programs it identified—44 of the 61—had not been studied by researchers. For the 17 programs that had been studied, the Study Group reported that none of the studies provided conclusive evidence with which to determine whether the programs were effective at preventing or reducing girls’ delinquency. For example, according to the Study Group, the studies provided insufficient evidence of the effectiveness of 11 of the 17 programs because, for instance, the studies involved research designs that could not demonstrate whether any positive outcomes, such as reduced delinquency, were due to program participation rather than other factors. 
Based on the results of this review, the Study Group reported that, among other things, there is a need for additional, methodologically rigorous evaluations of girls’ delinquency programs; training and technical assistance to help programs prepare for evaluations; and funding to support girls’ delinquency programs found to be promising. According to OJJDP officials, in response to the Study Group’s finding about the need to better prepare programs for evaluation, the office plans to work with the group and use the remaining funding from the effort—approximately $300,000—to provide a technical assistance workshop by the end of October 2009. The workshop is intended to help approximately 10 girls’ delinquency programs prepare for evaluation by providing information about how evaluations are designed and conducted and how to collect data that will be useful for program evaluators in assessing outcomes, among other things. In addition, OJJDP officials stated that as a result of the Study Group’s findings, along with feedback they received from members of the girls’ delinquency field, OJJDP plans to issue a solicitation in fiscal year 2010 for funding to support evaluations of girls’ delinquency programs. OJJDP has also reported that the Study Group’s findings are to provide a foundation for moving ahead on a comprehensive program related to girls’ delinquency. However, OJJDP has not developed a plan that is documented, is shared with key stakeholders, and includes specific funding requirements and commitments and time frames for meeting its girls’ delinquency goals. Standard practices for program and project management state that specific desired outcomes or results should be conceptualized, defined, and documented in the planning process as part of a road map, along with the appropriate projects needed to achieve those results, supporting resources, and milestones. 
In addition, government internal control standards call for policies and procedures that establish adequate communication with stakeholders as essential for achieving desired program goals. According to OJJDP officials, they have not developed a plan for meeting their girls’ delinquency goals because the office is in transition and is in the process of developing a plan for its juvenile justice programs, but the office is taking steps to address its girls’ delinquency goals, for example, through the technical assistance workshop. Developing a plan for girls’ delinquency would help OJJDP to demonstrate leadership to the girls’ delinquency field by clearly articulating the actions it intends to take to meet its goals and would also help the office to ensure that the goals are met. In our July report, we recommended that to help ensure that OJJDP meets its goals to identify effective or promising girls’ delinquency programs and supports the development of program models, the Administrator of OJJDP develop and document a plan that (1) articulates how the office intends to respond to the findings of the Study Group, (2) includes time frames and specific funding requirements and commitments, and (3) is shared with key stakeholders. OJP agreed with our recommendation and outlined efforts that OJJDP plans to undertake in response to these findings. For example, OJJDP stated that it anticipates publishing its proposed juvenile justice program plan, which is to include how it plans to address girls’ delinquency issues, in the Federal Register to solicit public feedback and comments, which will enable the office to publish a final plan in the Federal Register by the end of the year (December 31, 2009). Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other Members of the Subcommittee may have. Contacts and Acknowledgements For questions about this statement, please contact Eileen R. 
Larence at (202) 512-8777 or larencee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Mary Catherine Hult, Assistant Director; Kevin Copping; and Katherine Davis. Additionally, key contributors to our July 2009 report include David Alexander, Elizabeth Blair, and Janet Temko. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses issues related to girls' delinquency--a topic that has attracted the attention of federal, state, and local policymakers for more than a decade as girls have increasingly become involved in the juvenile justice system. For example, from 1995 through 2005, delinquency caseloads for girls in juvenile justice courts nationwide increased 15 percent while boys' caseloads decreased by 12 percent. More recently, in 2007, 29 percent of juvenile arrests--about 641,000 arrests--involved girls, who accounted for 17 percent of juvenile violent crime arrests and 35 percent of juvenile property crime arrests. Further, research on girls has highlighted that delinquent girls have higher rates of mental health problems than delinquent boys, receive fewer special services, and are more likely to abandon treatment programs. The Office of Juvenile Justice and Delinquency Prevention (OJJDP) is the Department of Justice (DOJ) office charged with providing national leadership, coordination, and resources to prevent and respond to juvenile delinquency and victimization. OJJDP supports states and communities in their efforts to develop and implement effective programs to, among other things, prevent delinquency and intervene after a juvenile has offended. For example, from fiscal years 2007 through 2009, Congress provided OJJDP almost $1.1 billion to use for grants to states, localities, and organizations for a variety of juvenile justice programs, including programs for girls. Also, in support of this mission, the office funds research and program evaluations related to a variety of juvenile justice issues. As programs have been developed at the state and local levels in recent years that specifically target preventing girls' delinquency or intervening after girls have become involved in the juvenile justice system, it is important that agencies providing grants and practitioners operating the programs have information about which of these programs are effective. 
In this way, agencies can help to ensure that limited federal, state, and local funds are well spent. In general, effectiveness is determined through program evaluations, which are systematic studies conducted to assess how well a program is working--that is, whether a program produced its intended effects. To help ensure that grant funds are being used effectively, you asked us to review OJJDP's efforts related to studying and promoting effective girls' delinquency programs. We issued a report on the results of that review on July 24, 2009. This testimony highlights findings from that report and addresses (1) efforts OJJDP has made to assess the effectiveness of girls' delinquency programs, (2) the extent to which these efforts are consistent with generally accepted social science standards and federal standards to communicate with stakeholders, and (3) the findings from OJJDP's efforts and how the office plans to address the findings. This statement is based on our July report and selected updates made in October 2009. With an overall goal of developing research that communities need to make sound decisions about how best to prevent and reduce girls' delinquency, OJJDP established the Girls Study Group (Study Group) in 2004 under a $2.6 million multiyear cooperative agreement with a research institute. OJJDP's objectives for the group, among others, included identifying effective or promising programs, program elements, and implementation principles (i.e., guidelines for developing programs). Objectives also included developing program models to help inform communities of what works in preventing or reducing girls' delinquency, identifying gaps in girls' delinquency research and developing recommendations for future research, and disseminating findings to the girls' delinquency field about effective or promising programs. 
OJJDP's effort to assess girls' delinquency programs through the use of a study group and the group's methods for assessing studies were consistent with generally accepted social science research practices and standards. In addition, OJJDP's efforts to involve practitioners in Study Group activities and disseminate findings were also consistent with the internal control standard to communicate with external stakeholders, such as practitioners operating programs. The Study Group found that few girls' delinquency programs had been studied and that the available studies lacked conclusive evidence of effective programs; as a result, OJJDP plans to provide technical assistance to help programs be better prepared for evaluations of their effectiveness. However, OJJDP could better address its girls' delinquency goals by more fully developing plans for supporting such evaluations.
Background Events Leading to Development of the SED Following terrorist attacks against the U.S. embassy in Beirut, Lebanon, in 1983, State began an embassy construction program—known as the Inman program—to protect U.S. personnel. However, State completed only 24 of the 57 planned construction projects, in part due to poor planning, systemic weaknesses in program management, difficulties acquiring sites, schedule delays, cost increases, and subsequent funding limitations. Following the demise of the Inman program in the early 1990s, State initiated very few new construction projects until after the two 1998 embassy bombings in Africa. Following those attacks, the Secure Embassy Construction and Counterterrorism Act of 1999 required State to maintain a list of diplomatic facilities to be scheduled for replacement based on their vulnerability to attack. In response, State initiated the Capital Security Construction Program to construct new, secure facilities overseas. At that time, State determined that diplomatic facilities at more than 180 posts—more than half of U.S. overseas missions—needed to be replaced to meet security standards. In 2016, State reported that from 2000 through 2014, it moved over 30,000 staff into more secure facilities. The Secure Embassy Construction and Counterterrorism Act of 1999 calls for new diplomatic facilities to be sufficiently sized to ensure that all U.S. government personnel at a post are located on a single secure site and that those facilities are set back not less than 100 feet from the site’s perimeter boundary. Before constructing a new embassy, State must certify to Congress that, among other things, the facility incorporates adequate measures for protecting classified information and activities as well as personnel working in the facilities. OBO contracts with architectural and engineering firms (design firms) to develop designs meeting security and other project requirements. 
These design firms submit their designs for reviews by OBO and DS to ensure conformance with building code and security standards, respectively. DS, in consultation with the Office of the Director of National Intelligence, must certify that the design meets security standards prior to the start of construction. While this certification occurs in the design phase of a project, DS also has other roles in the process, such as participating in site selection, ensuring OBO contractors have necessary security clearances, and ensuring facilities are securely constructed. The SED Approach To address some of the performance problems experienced during the Inman program, OBO implemented reforms to its business processes in structuring the new Capital Security Construction Program. Among the most prominent reforms were the development of the SED to expedite the planning, contract award, design, and construction of new diplomatic compounds and use of the design-build (DB) project delivery method, which combines responsibility for design and construction under a single contract and allows contractors to begin basic construction before the design is fully completed. Initially, there were three common SED classes—small, medium, and large—based on the size of a post. For planning purposes, each size had predefined schedules and costs associated with it. The SED itself was a set of documents providing prototypical plans (for a medium SED), specifications, and design criteria, and explaining how to adapt those to a particular site and project. The SED was not a complete design but rather a standardized template for the structural, spatial, and security requirements of a new embassy compound to guide a contractor’s final design. Compound elements described by the SED generally included the main office building; U.S. Marine Security Guards’ living quarters; a warehouse; a utility building; compound access control buildings and perimeter walls; and parking facilities. 
The SED also allowed for the standardization of building components such as security windows and doors. Figure 1 shows the prototypical facilities defined by the SED. The main office building within the SED was organized around two parallel wings connected by a central lobby. Occasionally site conditions such as size, shape, or topography required deviating from that typical configuration; OBO refers to such projects as SED “derivatives.” From 2001 through 2015, OBO constructed more than 50 embassies using the SED approach. Figure 2 shows examples of both the typical SED and derivative SED office building configurations. In 2006, we reported that OBO had made significant progress constructing new diplomatic compounds using the SED approach. We found that the average time to design and construct the 18 new embassies completed from 1999 to 2005 was about 3 years (36.7 months). This was nearly 3 years faster than embassies built during the Inman era, even though the newer facilities were significantly larger and more complex. We also found that reforms implemented by OBO, including the switch to the SED and the DB contract delivery method, had reduced project completion times, although it was difficult to quantify the effects of any single reform. In 2007, OBO reported that the SED, combined with DB project delivery, was expected to reduce overall delivery time—from site selection to occupancy—by 34 percent. In 2008, State’s Inspector General found that OBO’s continued use of the SED, in conjunction with the DB delivery method, was generally effective. Additionally, State’s Inspector General found that the SED permitted faster certification of project designs and accreditation by DS because the standardized design specifications were fully vetted for conformity to security standards. OBO took some actions to incorporate sustainability principles in the SED to meet federal energy mandates to reduce energy and water consumption. 
In 2006, OBO committed, in concert with 20 other federal agencies, to seek common strategies for planning, acquiring, siting, designing, building, operating, and maintaining federal facilities in an energy-efficient and environmentally sustainable manner. In 2008, OBO established Leadership in Energy and Environmental Design (LEED) certification as a design standard for its SED projects. OBO documentation indicates that in 2009 OBO elevated its sustainability requirement for SED projects from LEED Certified to the higher certification level of LEED Silver. The Excellence Approach In 2011, OBO announced a new project approach it termed Design Excellence, intended to deliver embassies that (1) best represent the U.S. government overseas, (2) are functional and secure, (3) incorporate sustainable design and energy efficiency, (4) are cost-effective to operate and maintain, (5) have greater proximity to host-government counterparts and users via more centrally located urban sites, and (6) better respond to the unique needs and context of specific posts. OBO subsequently phased out the SED as the basis for embassy designs, and according to OBO officials, SED specifications, standards, and guidance were incorporated into OBO’s Design Standards and Design Guide. In 2013, OBO renamed its approach “Excellence in Diplomatic Facilities” (Excellence) to convey what OBO officials have said is a more holistic effort to improve every aspect of OBO’s operations, including real estate acquisition, security methods and technologies, cost management, construction management, and facilities management. Table 1 shows 23 new construction contracts, with a total value of $3.67 billion according to State data, awarded from the approval of Excellence through fiscal year 2015. Of these, OBO reports 6 as being Excellence projects; the other projects include projects with certain Excellence features in terms of site, permit, or other requirements; SEDs; and derivative SEDs. 
OBO officials assert that although Excellence was approved in 2011, OBO never planned to award any Excellence construction contracts until fiscal year 2014. No Excellence projects had been completed as of the end of fiscal year 2016 (September 30, 2016). See appendix IV for a timeline illustrating the history of OBO’s building program from 1998 through 2016. OBO Established Excellence in an Effort to Improve Embassy Delivery Approach Although combining the SED with a DB project delivery method enabled OBO to accelerate the construction of new embassies, concerns raised by various stakeholders about the aesthetics, quality, location, and functionality of SED facilities prompted OBO to take some steps to improve the SED concept and eventually transition to Excellence. These steps included the introduction in 2008 of the design-build with bridging method (bridging), whereby OBO first contracts with a design firm to develop a project-specific, partial design that a construction contractor and its design firm then complete. After a nearly yearlong review, in April 2011 OBO approved a series of recommendations and planned actions to implement Excellence. A significant change announced at this time was OBO’s increased use of design-bid-build (DBB) as another delivery method alongside bridging. Generally under DBB, OBO first solicits and contracts with a design firm to develop a 100-percent design, which is then used to solicit bids from prospective construction contractors. Concerns with SED and Desire to Improve Embassies Motivated OBO’s Shift to Other Approaches OBO First Shifted to Design-Build with Bridging During the SED era, OBO predominantly used a DB project delivery method. DB integrates design and construction responsibilities into a single contract. 
Under this model, the DB contractor is responsible for design and construction and thus bears the risks, such as added cost, for any design problems because it (not OBO) hires the design firm to bring the design to completion. According to industry experts, DB is generally recognized as the best project delivery method for supporting accelerated delivery, in part because the DB contractor may undertake some construction while design is still in progress. Under OBO policy, in the SED approach OBO provided the DB contractor with the SED prototypical design—including standard site and building plans, technical specifications, design criteria, and instructions for its adaptation for a particular project and contract requirements. The contractor’s design firm would then use the SED documentation to develop a 100-percent design adapted for a site at a particular post, becoming the architect-of-record. According to the AIA, in general, the architect-of-record for a project prepares the bulk of the design and construction drawings and assumes professional responsibility for the design. Although the DB contractor’s design firm completed the project design, OBO’s policy was to hire its own design firm beforehand to conduct project development activities such as due-diligence planning surveys, site studies, and other analyses needed to inform the project’s design. Figure 3 provides an overview of the embassy construction process under OBO’s implementation of DB. According to former senior OBO officials, the OBO Director who had implemented the SED viewed OBO’s mission as needing to build secure embassies as fast as possible and within a fixed budget, given the large number of facilities that State needed to replace. They stated that the then-Director’s commitment was that OBO would combine a standardized design with the DB delivery method to speed design and construction and limit the costs to build each embassy to no more than $100 million (based on a large SED). 
OBO also maintained at that time that the SED generally would take no more than 36 months to build—inclusive of the time for contract acquisition, design, and construction—depending on the post size. According to these former senior OBO officials, those estimates did not always reflect the budget and time needed to build some SED embassies. They also stated that adapting the SED to the unique requirements of some posts—such as a very large consular services operation—was challenging and that the SED did not always account for quality and long-term maintenance and operations cost considerations. In addition, one former OBO official stated that although the emphasis in the SED approach on speed and cost control enabled OBO to promote that it had moved a certain number of people into secure facilities each year, this was an indicator of performance related to a single goal: project delivery. He noted that OBO did not use any performance indicators related to design and construction quality to evaluate the new SED facilities being built. Although the SED approach enabled OBO to accelerate the construction of new embassies intended to meet rigorous new security requirements, some stakeholders raised concerns about the aesthetics, quality, location, and functionality of those facilities. Aesthetics. One design firm we spoke with said there were criticisms that SED embassies were “cookie cutter” facilities that looked like fortresses. In 2010, then U.S. Senator John Kerry and former Secretary of Defense William Cohen—key advocates of Excellence—reported newly constructed embassies were not sending the right message. They described new embassies as cold concrete facilities at a forbidding distance hidden away from city life, with little regard for the local surroundings, undermining U.S. diplomats’ message and mission. They asserted that State was constructing a standardized “embassy in a box,” uniform in appearance, quickly assembled, and fortress-like. Quality. 
According to some former senior OBO officials, OBO’s emphasis on speed and cost under the SED approach resulted in some poor-quality buildings. According to these officials and one design firm we interviewed, the time and budget pressures sometimes resulted in OBO and its contractors making trade-off decisions such as using less costly and lower-quality building systems or materials. For example, one former official reported that the SED approach resulted in some projects where contractors used lesser-quality stone or metal cladding on building exteriors. He indicated that in some projects, contractors installed heating, ventilation, and air-conditioning systems that were minimally acceptable under the SED but not the best solution for the post’s geographic climate. Location. OBO officials commented that in some cases the 10-acre lot specified by the SED required siting the embassy too far from urban centers where foreign government offices and other embassies are located. This issue also arose in a 2007 report to State entitled The Embassy of the Future. Guided by a commission composed of former U.S. ambassadors, among others, the report recommended that State avoid constructing embassies in locations remote from urban centers. It also noted that although the appearance of embassies as influenced by security requirements deserves careful consideration, their location is of higher importance. Functionality. One former OBO official stated that because the SED was “very complete” as a standardized design concept and was based on a completed embassy in Africa, its design was not always conducive to being site adapted and applied to other regions in the world. For example, the design criteria for the heating and cooling systems generally specified by the SED may not always have been the best for climates that are very hot, cold, or humid. 
Some design firms we spoke with echoed that assessment, saying that project size, site shape or topography, regional climate, or special post needs could render the SED difficult to apply. OBO’s most recent former Director has also stated that the SED did not always permit OBO to meet posts’ varied needs. Former OBO officials told us that, in some cases, functional elements such as warehouses were eliminated from SED projects to deliver them on time and within budget. Additionally, current OBO officials emphasized the frequency with which the scope of SED projects was reduced to keep projects on time and within cost, without a corresponding reduction in schedule or budget. Issues with the SED approach have been documented in past OBO and GAO studies. For example, in 2008, OBO initiated a “look back” study to examine shortcomings with the early SED projects (2001-2007). The study identified deficiencies in newly completed SEDs stemming from building functionality issues, construction flaws, maintenance issues, and de-scoped facilities. Our 2010 review also examined functionality at 22 new embassy compounds where construction began in or after fiscal year 1999 and was completed by September 30, 2009. Officials at 21 of the 22 posts reported that the design of some spaces within their facility did not fully meet their functional needs, with an average of five functionality-related issues per post. We reported that in some cases, functionality challenges resulted in the need to conduct costly follow-on projects after posts occupied the embassy. OBO officials assert that because the SED was a standardized approach to project delivery, a deficiency in one project was effectively built into every active project. By comparison, officials assert that because the Excellence approach designs each project individually, it enables OBO to more quickly identify and make changes or improvements from project to project.
OBO took some steps to improve the exterior look of embassies prior to adopting the Excellence approach. According to former senior OBO officials we spoke with, OBO recognized there were some legitimate criticisms about the aesthetics and architecture of the embassies built under SED. Those officials indicated that one of OBO’s interim Directors initiated a study to improve future embassy projects such that they better fit in with the streets and spaces around an embassy. These officials cited OBO’s 2011 Embassy Perimeter Improvement Concepts & Design Guidelines as a direct effort to improve the exterior appearance of embassies by using various design techniques and landscaping so that they would look less “fortress-like.” OBO had previously prepared a report, in 2008, that reviewed its embassy construction process and the SED. To address some of the problems found in this report—such as the need to balance SED standardization with unique post conditions—OBO’s then-Director approved the use of DB with bridging in 2008. Generally under this delivery method, OBO first contracts with a design firm (the bridging architect) to develop a project-specific, partial design package (bridging design) that conveys State’s design vision and a higher level of detail for key design requirements. Upon completing a project’s bridging design, OBO’s procedure is to separately contract with a DB contractor to complete the design and build the project. Therefore, unlike the SED, each bridging design is project-specific, customized, and separately contracted to a design firm. According to senior OBO officials, as well as construction contractor and design firm officials, the current extent of design represented by Excellence bridging documents varies by project but generally approximates an overall 35- to 50-percent design. 
Those officials indicated that bridging designs include multiple design disciplines whereby elements such as architectural design may be developed to a far greater extent than others, such as electrical design. Under this method, the DB contractor and its design firm are responsible for completing the design. Figure 4 provides an overview of the embassy construction process under bridging. In 2009, we reported that by providing more design detail up front, OBO believed bridging would more effectively translate project requirements to contractors, speed the design security certification process, and enable construction to begin sooner. OBO documentation also indicates that OBO believed bridging would better define the desired look and quality for projects than the SED alone could achieve and provide less room for the contractor to make interpretations and change OBO’s vision of the project. However, according to OBO documentation, the effort, cost, and time to produce a contract solicitation with bridging documents can be significantly more than that required for a typical DB contract using the SED. OBO Later Shifted to Excellence In 2009, the American Institute of Architects (AIA) submitted a report to OBO entitled Design for Diplomacy, New Embassies for the 21st Century. Informed by a task force composed of architects, engineers, former ambassadors, staff from the U.S. General Services Administration (GSA), OBO design professionals, and others, the AIA recommended that OBO “adopt Design Excellence as a mandate to advance a new generation of secure, high performance embassies and diplomatic facilities that support the conduct of American diplomacy.” It outlined several actions it viewed as necessary to realize the benefits of design excellence. AIA officials we spoke with said that AIA never expressly advocated that the SED be completely abandoned, because that approach might remain appropriate for some projects. 
However, these officials noted that as more SED projects were built, AIA’s members (i.e., architects) believed that SEDs were not the optimal choice for most projects, as the standardized design was not always conducive to adapting to different climates, countries, or unique post functions. In April 2010—a year before formally instituting Excellence—OBO released “Guiding Principles of Design Excellence in Diplomatic Facilities” (Guiding Principles) to its Industry Advisory Panel. See appendix V for a summary of these principles. At that time, State also announced OBO’s intent to create the Design Excellence approach, with the goal to produce diplomatic facilities outstanding in all respects, including security, architecture, construction, sustainability, operations and maintenance. OBO’s “Guide to Excellence in Diplomatic Facilities” (Guide to Excellence) released in July 2016, states that the new approach will result in innovative, new American landmarks around the globe. OBO’s establishment of Excellence was informed by a nearly yearlong review—begun in June 2010—by seven internal OBO working groups overseen by a steering committee composed of OBO’s senior managers and chaired by OBO’s then Deputy Director (who later served as OBO’s Director from June 2012 through January 2017). OBO also sought assistance from GSA, which assigned GSA’s Director of Design Excellence to subsequently participate as an external advisor to the Steering Committee. OBO’s working groups were tasked with examining OBO policies and procedures and providing the steering committee with recommendations as to how best to integrate design excellence into all of OBO’s activities. This review resulted in over 60 recommendations and a series of planned actions that were approved in an April 2011 decision memo (Excellence decision memo) as a means to implement Excellence. 
The review also identified some specific changes to OBO’s processes in the areas of (1) site selection; (2) project delivery method; (3) design standards and guidelines; (4) hiring of outside architectural and engineering design firms; (5) design reviews; and (6) life-cycle cost analysis, among other areas. See appendix VI for a table describing the approaches OBO identified to achieve its goals under the Excellence approach. A significant change under Excellence since 2011 has been OBO’s shift to an increased use of the DBB delivery method alongside bridging. Generally under DBB, OBO first solicits and contracts with a design firm to develop a 100-percent design. Under this method, OBO then uses the completed design to solicit bids from prospective construction contractors. According to OBO documentation, OBO selects a project’s delivery method, either bridging or DBB, based on an evaluation of local context, project complexity, construction factors, and urgency. According to OBO officials, the timing of a construction award (i.e., the planned fiscal year when OBO expects to receive funding to make an award) is also a key factor in determining the delivery method. Figure 5 provides an overview of the embassy construction process under DBB. OBO’s Greater Design Control Requires Greater Up-Front Resources and Has Cost and Schedule Trade-Offs Changes made under Excellence provide OBO with greater design control, but carry trade-offs. Key elements under the Excellence approach include (1) allotting funding and time for developing custom designs; (2) hiring leading design firms for projects and promoting innovation in design; (3) conducting peer reviews of designs; and (4) using bridging and DBB project delivery (rather than DB). We found that OBO now funds the development of customized designs and provides up to 24 months for front-end design work. OBO also seeks to hire leading U.S. design firms to develop those designs for each project.
New design firms OBO has hired for Excellence projects have faced some challenges, and OBO only recently began assessing their performance. OBO also requires design reviews by industry advisors. This shift to more design-focused delivery methods—from DB to bridging and DBB—has design, schedule, and cost trade-offs. OBO’s staff had split opinions regarding the Excellence approach compared to the SED approach. OBO Now Allots Project Funding and up to 24 Additional Months to Develop Excellence Designs OBO’s Excellence approach—using bridging and DBB—represents a new investment to develop innovative, project-specific designs. Previously, the SED approach combined with DB delivery made use of the same standard design, which DB contractors’ design firms would adapt to a specific site. Thus OBO did not contract with design firms to develop customized designs. One senior OBO official said that OBO’s intent under Excellence is to “own the quality of each project” and that contracting for project-specific designs provides control over the design process to avoid what OBO reports were quality issues with some SED projects. OBO reports it has awarded 24 new embassy or consulate design contracts—for either 100-percent designs or partial bridging designs—during fiscal years 2011 through 2015. The first design contract solicited as an Excellence project, according to OBO, was awarded in January 2013 for the new U.S. embassy to be built in Mexico City. Design-related activities include both actual project design, which entails the preparation of plans, drawings, and specifications; and project development, which includes due diligence efforts such as boundary, utility, and soil surveys. OBO officials stated that they could not distinctly segregate the cost for project designs from project development costs needed to complete a project’s design under any project delivery method, including DB using the SED.
Furthermore, according to OBO officials, because such costs were funded out of a central pot of money during the SED era, they cannot be broken out of those earlier contracts. Table 2 shows the amount of funding OBO has authorized for both project design and development activities from fiscal year 2011 through 2015 at a total value of over $400 million. OBO identified 16 of these 24 project designs as being Excellence projects. OBO’s Excellence approach—using bridging and DBB—also represents a new investment in up-front design time, potentially up to 2 additional years (compared to the SED approach) to develop custom, innovative designs before OBO contracts for construction. OBO maintains that this additional design time is integrated into its projects such that planned construction contract award dates are not affected and remain consistent with OBO’s overall Capital Security Construction Program schedule for constructing new secure facilities. In other words, by starting projects earlier, OBO asserts that it can still meet the Capital Security Construction Program schedule for delivering new embassies. However, because the first set of Excellence construction projects were awarded in fiscal year 2014 and are still in progress, it is currently unknown whether OBO will deliver new facilities consistent with the overall program schedule. OBO’s process under both bridging and DBB is now to award contracts to design firms to produce custom project designs, and OBO planning documentation generally estimates up to 2 years (24 months) for the design process of a generic new embassy. OBO documentation generally indicates that when there is sufficient time available for conducting planning and design activities before awarding a construction contract, OBO is inclined to utilize DBB, which OBO asserts often results in a superior end product. OBO officials cautioned that its generic timeframes are just a starting point, and that every project can encounter unique challenges. 
Figure 6 shows OBO-generic timelines under the prior SED approach (using DB)—which were established based on the size of a post—in comparison with OBO’s generic schedules for Excellence bridging and DBB projects, the latter two providing up to 24 months of additional design work before construction begins. Design Firms New to OBO Have Faced Adjustment Challenges; OBO Recently Began Assessing Their Performance One of OBO’s Guiding Principles for Excellence is that OBO will hire leading U.S. design firms based on their design achievements and portfolio of work. OBO’s intent in hiring leading design firms is, in part, to promote the innovation of American architecture, engineering, and design disciplines as well as U.S. technology, manufacturing, and product design. According to OBO, selection of these firms is based, in part, on their achievements in the design field and work on projects similar in scale and complexity to an embassy project. OBO’s guidance indicates that material advances and new technologies can result in the delivery of better diplomatic facilities and that OBO must invest in innovation. For most of its design projects, OBO utilizes an Indefinite Delivery/Indefinite Quantity (ID/IQ) contract mechanism—under which five design firms have been hired—to task design firms to develop project designs, when needed. In some instances, OBO prefers to issue project-specific solicitations and contracts for more unique and challenging projects, such as the London and Mexico City embassy designs. Under the SED approach, design firms typically conducted due diligence and project development (i.e., planning activities) for OBO to ensure projects were ready for design and construction by the DB contractors. Under Excellence, OBO contracts with design firms to develop Excellence designs before awarding a construction contract, according to OBO. Four of the five firms hired by OBO under its current ID/IQ are new to embassy construction work. 
Officials we spoke with—including DS officials, design firms, construction contractors, and former OBO officials—identified some adjustment challenges facing design firms new to OBO that have never designed an embassy before. For example, having to become familiar with State’s unique security requirements for diplomatic facilities is a challenge for those design firms. DS officials reported the firms require a great deal of work to bring them up to speed on State’s security requirements. To mitigate this situation, DS has been conducting “101 Certification Workshops” for new design firms so that they understand DS’s approach to the security certification of new embassy designs. Those officials stated that conducting those workshops and reviewing customized Excellence designs has increased DS’s workload. Newer design firms have also been challenged by such issues as lack of sufficient staff with required security clearances, information systems, or office space to independently and securely perform the contracts, according to DS and design firm officials. Some of the new-to-OBO design firms further indicated that they have contracted—as partners or subcontractors—design firms that have worked on past OBO projects to assist the new-to-OBO firms in navigating State’s standards and process. OBO did not begin conducting performance evaluations of its design firms until recently. Recommendations from the 2011 Excellence decision memo indicated that OBO would (1) measure performance of its designers for Design Excellence and project performance, and (2) use the federal contractor performance reporting system to promote consistency, increase data integrity, and motivate contractor performance. The Federal Acquisition Regulation requires agencies to conduct contractor performance evaluations. Such evaluations are intended to provide essential information regarding whether to award future contracts to these design firms. 
We found that OBO had not been conducting contract performance evaluations of the design firms contracted to deliver Excellence designs. OBO officials acknowledged they had not been recording performance evaluations. As a result of our inquiry and a subsequent request from State’s contracting office, OBO has trained staff and as of August 2016 had initiated design firm evaluations for six of its Excellence projects. Excellence Approach Includes Design Reviews by Industry Advisors The Excellence approach entails greater involvement by OBO’s industry advisors. In 2012, OBO made changes to its industry advisory body, previously called the Industry Advisory Panel. Now called the Industry Advisory Group (IAG), the body has grown from up to 9 advisors representing industry organizations—such as AIA, Associated General Contractors of America (AGC), and Design-Build Institute of America (DBIA)—to up to 35 members. Members must, among other criteria, belong to professional organizations and trade groups involved in property management issues, but they represent the companies employing them (not those organizations or industry groups). OBO officials indicate the change was made to allow OBO to have broader industry representation. In September 2014, OBO established a new policy mandating OBO senior management and industry peer design reviews, called OBO senior management and IAG design reviews. The policy requires that two such design reviews be conducted for each new project and that OBO’s Director designate three members from its IAG or other adjunct professionals to serve as reviewers. These reviews are intended to assist OBO in making certain that projects are well-conceived and can be realized in an efficient and cost-effective manner. The first design review occurs during the Concept Design Phase.
The design firm awarded the design contract must submit three viable concepts that it assesses as achievable within the project budget, according to OBO policy. The design firm must explain the factors that influenced each of the three proposed designs, including any opportunities and constraints. During the concept design review, OBO senior management and IAG panel members may raise and discuss any concerns about the proposed concepts or issues affecting scope, schedule, or cost. After considering the IAG panel’s recommendations, OBO selects one concept to be designed to a greater level of detail. The second set of design reviews, outlined in OBO policy, occurs during the Schematic Design Phase, when the selected concept has been more fully designed. The schematic review examines aspects of the proposed design—though not a final design—against the project’s requirements, approved schedule, and estimated construction contract price. The review examines site context and surroundings; the proposed building systems, including security systems and sustainability features; the exterior and interior design elements and materials; and how the local environment and construction labor may impact the design. The general purpose is to highlight opportunities to strengthen the project before the design progresses further and is completed. Following both OBO senior management and industry concept and schematic design reviews, the contracted design firm makes a presentation to OBO’s Director. OBO’s Director may either approve the proposed schematic design to be used for a project or indicate necessary changes to the design for its subsequent approval. After OBO’s Director has approved the schematic design for a project, the final design must be in keeping with the approved design. Senior OBO officials indicated industry advisory design reviews do not add additional time in OBO’s process, as they occur within the overall time allotted for design. 
See appendix VII for a figure depicting where IAG design reviews occur within OBO’s overall design process. The first Excellence project to go through the industry design review process was the new U.S. embassy planned for Mexico City. Through April 2016, OBO had conducted a total of 27 industry design reviews on 14 Excellence projects. Figure 7 shows the three design concepts that underwent an IAG design review and the schematic design of the selected concept for the new consulate in Hyderabad, India. Design firms we spoke with had varying views on the utility of OBO’s industry advisory reviews. For example, two design firms reported that although the contractual requirement to develop three design concepts for review and consideration adds some value, making it a formal requirement and holding a structured, peer-reviewed process adds additional time, cost, and work. Another design firm new to OBO found the process valuable, particularly when peer reviewers (members of the IAG design review panel) already have OBO experience and can provide advice on potential embassy design pitfalls. Different Project Delivery Approaches Offer Distinct Design, Schedule, and Cost Trade-Offs OBO’s Guide to Excellence indicates that different delivery methods have design, schedule, and cost implications that must be evaluated relative to the characteristics of each project and that OBO’s Director must approve the delivery method for each project. Since 2011, OBO has generally been using both bridging and DBB as delivery methods to have more control over project designs, according to OBO officials. OBO officials believe that greater design control under Excellence will improve embassies’ appearance in representing the United States, functionality, quality, and operating costs. Table 3 lists various design, schedule, and cost trade-offs inherent in the DB, bridging, and DBB project delivery methods identified by industry studies and experts we interviewed. 
Construction contractor representatives we spoke with reported seeing these issues play out in the execution of OBO’s Excellence approach. AGC’s representative told us that the more customized designs under Excellence create increased risks for design problems or errors that could result in cost and schedule increases. Two OBO construction contractors we spoke with reported that OBO’s bridging and DBB projects cost more and take longer from start of design to completion when compared with a SED DB project. In part, these contractors said they had found problems with some aspects of the designs, which took time to resolve with OBO and OBO’s contracted design firms. Those contractors also said OBO’s Excellence projects tend to specify more unique materials or custom-made products, which also adds to construction costs. These contractors also stated that they are tracking more change orders and redesign work on current OBO projects, which further indicates the potential for cost and schedule growth under Excellence. Finally, contractors we spoke with said that while OBO now uses bridging to develop a partial customized design for some OBO projects, their own firms’ design costs for a bridging project will not be lower than for a similar SED DB project. They said this is because their own design firms are still responsible for the design and must validate any design information that OBO’s bridging architects develop. OBO Staff Held Split Opinions on the Excellence and SED Approaches Benefits and Challenges of Excellence OBO staff expressed a wide range of opinions in response to our survey request for comments regarding any specific benefits or challenges brought about by Excellence in their specific area of expertise. Some 421 respondents provided comments covering diverse topics that we evaluated and grouped into 20 categories.
Staff often held opposing views regarding a wide range of Excellence issues such as developing Excellence standards or procedures, facilitating stakeholder input, and focusing on maintenance and sustainability. Staff providing comments generally offered more negative narratives than positive ones. Table 4 summarizes the results of our analysis. For examples of specific benefits and challenges cited, see appendix III. Comparison of Excellence and SED Approaches We asked OBO staff to characterize Excellence compared with the SED approach in terms of producing diplomatic facilities that are outstanding in all respects, including security, architecture, construction, sustainability, and operations and maintenance. Of the 339 staff expressing an opinion, 157 (46 percent) identified the SED as generally more effective, 109 (32 percent) identified Excellence as generally more effective, and 73 (22 percent) believed they were equally effective. We analyzed responses by length of service in OBO. Some 288 staff with 5 years or less of experience responded to the question. These respondents were hired around or after the introduction of Excellence in 2011. Most of these respondents (204 of 288) did not provide an opinion about Excellence or SED. Of the 84 respondents with 5 years or less of experience who did express an opinion, 37 (44 percent) indicated Excellence was more effective, 31 (37 percent) reported SED was more effective, and 16 (19 percent) found both equally effective. By contrast, most OBO staff with 6 or more years of experience who responded (255 out of 395) offered an opinion. Of the 255 staff with more than 5 years’ experience who had an opinion, 72 (28 percent) indicated Excellence was more effective, 126 (49 percent) reported SED as more effective, and 57 (22 percent) found both equally effective. We also selected offices for further analysis based on size.
Of the respondents from particular offices within OBO expressing an opinion on the question (specifically excluding those with “No opinion/no basis to judge” or who provided no response), the four largest offices found SED generally more effective. Staff from Facility Management were most closely divided. Of the 62 Facility Management staff who had an opinion, 24 (39 percent) said that the SED program is generally more effective than the Excellence program, while 20 (32 percent) said that Excellence is generally more effective, with the remaining 18 (29 percent) reporting one program as effective as the other. A larger percentage of Construction Management and Design and Engineering staff reported SED as generally more effective. Of the 79 Construction Management staff who had an opinion, 44 (56 percent) said that the SED program is generally more effective, compared to 20 (25 percent) who said Excellence is generally more effective. Of the 58 Design and Engineering staff who had an opinion, 28 (48 percent) said that the SED program is generally more effective, compared to 18 (31 percent) who said Excellence is generally more effective. The widest margin among the large offices was in Security Management, where 21 out of 29 (72 percent) who had an opinion reported SED was generally more effective than Excellence, compared with 3 out of 29 (10 percent) reporting Excellence was generally more effective. These offices are among those most directly involved in the planning, design, construction, maintenance, and security at U.S. facilities worldwide. Other offices were more supportive of Excellence. Of the 24 respondents from the Office of Project Development and Coordination who had an opinion, 13 (54 percent) reported that Excellence is generally more effective than SED.
All remaining offices were more narrowly split, with 35 out of 87 (40 percent) reporting Excellence as more effective than SED and 32 out of 87 (37 percent) reporting SED as more effective. The OBO Front Office firmly supported Excellence as more effective than SED, with five of six respondents (83 percent) who had an opinion saying so. Some 403 staff provided narrative comments comparing SED with Excellence. We categorized the comments, some of which fell into more than one category. The most common positive comment regarding Excellence cited aesthetic or architectural improvements, while the most common negative comment noted higher costs under Excellence compared to SED. The tally for the categories is in table 5 below. Figure 8 lists some selected comments comparing the SED and Excellence approaches. When asked whether Excellence had generally improved the Capital Security Construction Program (i.e., the embassy construction program), OBO staff who responded were more evenly divided. Of the 470 respondents expressing an opinion, 174 (37 percent) generally agreed that Excellence improved the program, while 161 (34 percent) generally disagreed, and 135 (29 percent) neither agreed nor disagreed. OBO Has Established Some Implementation Guidance but Lacks Tools to Assess Performance under Excellence While OBO has established some policies and other guidance to implement Excellence, it lacks tools to fully evaluate the performance of the new approach. OBO continues to document changes in its policies, procedures, standards, and other guidance. In our survey, OBO staff were generally evenly split on the sufficiency of OBO’s efforts in these areas. However, OBO has not defined performance measures specific to Excellence goals at either the strategic or project level, such as greater adaptability to individual locations, functionality, or environmental sustainability.
OBO also lacks a centralized database to broadly manage Excellence by enabling, for example, effective reporting on projects’ design and construction costs and schedules. Without performance measures specific to Excellence and sufficient systems to collect and analyze relevant data, OBO will not be able to demonstrate whether the performance of Excellence projects over time justifies the increased emphasis on and investment in their designs. OBO Continues to Document Changes in Its Policies, Procedures, Standards, and Guidance While OBO has created or updated some policies and other guidance to implement Excellence, it has taken more time to do this than OBO estimated in 2011. Key guidance deliverables in OBO’s 2011 Excellence decision memo—identified as “critical elements” by OBO—were to be produced within the first year after Excellence was approved. However, it took more time than OBO estimated to issue some of those key elements. For example, OBO replaced the SED with the new OBO Design Standards in 2013 and released its Guide to Excellence in 2016, despite its initial plan to release these documents within a year of the memo. New or updated policies issued in support of implementing Excellence were also not in place until nearly 2 years or more after Excellence was approved in 2011. For example, in recent years OBO has finalized several policies, such as the following: 2013 Site Selection: This new policy emphasizes criteria for urban sites; attributes of the preferred site include (1) considering American values in promoting a sense of openness, accessibility, and transparency through location; (2) proximity to key host-government facilities, embassies of other countries, and businesses and cultural centers; and (3) an urban setting that provides connectivity to public transportation and infrastructure, making the mission accessible to visitors and clients. 
2014 OBO Senior Management and Industry Advisory Group Design Reviews: This new policy requires two reviews by external industry advisors and approval of Excellence designs by OBO’s Director. 2015 OBO Core Project Team: This new policy requires OBO’s Design Manager to be an integrated team member—with OBO’s Project Manager, Project Director, and Construction Executive—to ensure that decisions about a project’s design are integrated at project inception and maintained through project completion. 2016 Architect/Engineer Team Selection: This revision to an existing policy governs the evaluation and award of design contracts to design firms. The revision expands the evaluation panel from at least three to up to seven members, with key changes being the addition of an OBO Director’s designee; a representative with a connection to the post or regional bureau; and an external advisor (a federal employee from another agency). OBO Has Introduced Innovation but Faced Challenges in Implementing Excellence A report commissioned by OBO provided insight into the challenges faced in implementing Excellence. In June 2014, OBO modified an existing task order with one of its design firms to require the firm to participate in a roundtable discussion about Excellence and identify ways Excellence might be more effectively communicated and managed. The design firm subsequently delivered a report, based on the roundtable and its own experiences, with the firm’s findings and recommendations on how to improve Excellence. The roundtable included OBO officials, DS, the Office of Logistics Management, and contractors. The resulting consultant’s report noted that OBO had transformed its design approach, design guidelines, project requirements, and preferred project delivery methods. According to the report, this transformation presented significant opportunities. 
For example, the introduction of highly regarded design firms new to working with OBO offered tremendous potential for innovation and overall quality of building design and performance. However, the report also identified challenges with Excellence, including the following: There did not appear to be an entity responsible for instituting and managing the significant organizational change that the Excellence approach imposed. The Excellence process had altered the internal practices at OBO and its offices; with every project, its implementation resulted in nuanced interpretations of the Design Standards, leading to variations in their implementation, document submittal requirements, milestone analysis, and security risk assessments. While the new Design Standards were more comprehensive, they were very difficult to navigate, particularly as a Portable Document Format (PDF) file with over 8,000 pages. Senior management reviews and the IAG peer review process extended project schedules, and their purpose appeared unclear. The consultant's report made numerous recommendations to OBO, including that OBO (1) seek to manage and clarify change internally; (2) assign dedicated staff to be responsible for instituting change; (3) utilize more standardization for project requirements, while acknowledging the recommendation may seem counterintuitive in light of OBO's move away from the SED; (4) further define the Excellence program to capture the new standards and processes OBO was instituting; and (5) train OBO personnel and modify internal systems and practices to be compatible with the new OBO project delivery methods and design standards. In discussing OBO's implementation of Excellence with us, senior OBO officials stated that they continue to work to improve OBO's processes.
They noted that the development or updating of OBO policies and procedures takes considerable time because numerous OBO technical offices must weigh in on any needed or proposed changes and that OBO management must then review and approve those changes (or send them back for revisions). Senior OBO officials also maintained that it was difficult to “describe a program at the same time that you are implementing it.” One former senior OBO official stated that OBO’s priority during the transition to Excellence was trying to implement the new program and that they may have lagged in establishing policies and procedures to document changes to OBO’s processes. Survey Respondents Held Mixed Opinions on OBO’s Provision of Guidance Related to Excellence In our survey seeking feedback on Excellence, we asked OBO staff whether they agreed or disagreed with seven statements about OBO’s establishment and communication of strategic direction, policies, and guidance for doing their daily jobs. OBO staff who responded agreed most strongly with statements about the provision of policies and standards for their daily jobs. They divided more narrowly on statements about strategic vision and guidance (see table 6). 
The largest percentage of OBO staff who responded generally agreed with the statement: "Since 2011, OBO has provided clear and comprehensive technical standards and guidelines related to my job." The largest percentage of OBO staff who responded generally disagreed with the statement: "Since 2011, OBO has provided clear and comprehensive strategic or long-term guidance to implement its planning, design, construction, and maintenance approach." OBO Lacks Performance Measures to Evaluate the Potential Costs and Benefits of Excellence While OBO has adopted Excellence and taken some steps to implement it, it has not established strategic or project-level performance measures to evaluate and communicate the effectiveness of the Excellence approach in delivering embassies under the Capital Security Construction Program. Performance measures are essential tools for managers to evaluate progress toward a program's objectives. GAO's Standards for Internal Control in the Federal Government state that agencies' internal controls should include the establishment and review of performance indicators. Furthermore, State's Foreign Affairs Manual indicates that State must maintain effective systems of internal controls that incorporate GAO's internal control standards. In addition, the AIA 2009 report stated that "OBO should be willing to evaluate and explain the benefits of integrating security with design excellence, and the potential benefits to life-cycle costs, design, operations, maintenance, public image, and public diplomacy. OBO's ability to explain the benefits will require some empirical evidence of claims made for those tangible items such as cost benefit and operations." Both OBO's 2011 approval of Excellence and its 2016 Guide to Excellence assert that a design excellence program will provide the best value for the U.S. taxpayer.
According to GAO's Business Process Reengineering Assessment Guide, if an agency decides to initiate a reengineering project, it should develop and communicate a compelling business case to customers and stakeholders that supports this decision. Such a business case should contain critical performance measures relating to the organization's core business processes, such as cost, quality, service, and speed. As an agency completes its process redesign work, the business case should be updated to present a full picture of the benefits, costs, and risks involved in moving to a new process. Without meaningful performance indicators, an agency has no way of knowing if the new process has produced the desired results and whether those results compare favorably or not to the previous process. OBO Has Not Established Strategic Performance Measures Specific to Excellence OBO's strategic plan does not define how OBO intends to evaluate the performance of the Excellence approach. State's 2010 press release announcing Excellence and OBO's 2011 Excellence decision memo both noted that a comprehensive strategic plan was to be implemented in 2011 and would act as a roadmap for developing Excellence policies and procedures. OBO senior officials told us that a 2010 presentation—briefed to the then Secretary—was OBO's strategic plan for Excellence implementation. The briefing document does not say how Excellence is to be evaluated—one of the functions of a strategic plan—nor does it outline any performance indicators to show how OBO would assess and report on the extent to which Excellence facilities are any more safe, secure, functional, or sustainable than SED facilities, or any more effective in supporting U.S. diplomacy. State's department-level fiscal year 2014-2017 strategic plan is largely silent on Excellence. Its single Capital Security Construction Program-related performance indicator is the relocation of 6,000 U.S.
government employees into more secure and functional facilities by September 30, 2017. OBO used a similar performance indicator under the SED approach. This indicator provides no performance assessment on the extent to which Excellence facilities are any more functional, sustainable, or effective in supporting U.S. diplomacy. Furthermore, the projected target may be low relative to past performance, since 6,000 employees moved by September 2017 equates to an average of 1,500 employees relocated per fiscal year (2014 through 2017). By comparison, State reports that from 2000 through 2014 it moved over 30,000 people into more secure facilities—which equates to an average of over 2,100 people per year (based on actual performance). As a result, it is unclear whether State's target is an appropriate measure given OBO's past performance. Excellence is briefly discussed in OBO's bureau-level Functional Bureau Strategy for fiscal years 2015–2017. It states that OBO is implementing design innovation and that Excellence will introduce improved use of functionality, sustainability, and security for diplomatic facilities. That strategy document includes bureau-level design- and construction-related performance indicators, among others related to other OBO operations. Those indicators include the following, among others:
average duration and cost growth for capital construction;
design standards are met and updated on an annual basis, incorporating lessons learned and other feedback from stakeholders from prior years; and
percent of new embassy and consulate compounds designed to achieve LEED Silver certification.
While the first set of indicators can quantitatively measure performance and enable OBO to report on the efficiency of project delivery under Excellence (as OBO did previously under the Inman program and under SED), those schedule and cost indicators do not address new aspects of Excellence, such as lower operating costs or better support for U.S. conduct of diplomacy.
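The relocation-rate comparison above comes down to simple arithmetic. The figures in the following sketch are taken from the report itself; treating 2000 through 2014 as a 14-year span is an assumption, made only because it matches the report's "over 2,100 people per year":

```python
# Check of the relocation-rate comparison discussed above.
# Figures come from the report; the 14-year span for 2000-2014 is an
# assumption chosen to match the report's "over 2,100 people per year."

target_people = 6_000       # relocation target by September 30, 2017
target_years = 4            # fiscal years 2014 through 2017
historical_people = 30_000  # "over 30,000" people moved, 2000 through 2014
historical_years = 14

target_rate = target_people / target_years
historical_rate = historical_people / historical_years

print(f"Target pace:     {target_rate:,.0f} people per year")
print(f"Historical pace: {historical_rate:,.0f} people per year")
```

The target pace of 1,500 people per year sits well below the historical pace of roughly 2,100 per year, which is the basis for questioning whether the 6,000-person target is appropriate.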
In addition, the latter two indicators in the list above are, if anything, even less useful for assessing Excellence performance. First, OBO updates its design standards annually and conducts design reviews to ensure that projects meet those standards. Thus, it is unlikely OBO would fail to meet this performance indicator. Second, according to OBO documentation, LEED Silver certification has been an OBO design standard since 2009, before Excellence. Thus, to meet design standards, every Excellence embassy built—with an emphasis on greater sustainability—should be at least LEED Silver, so the indicator should be 100 percent, or very near to it. Furthermore, the LEED indicator assesses only the performance implied by the design itself, not the building's actual operations and maintenance performance or whether actual utility usage and costs are equal to or less than initially estimated in the designs. No Excellence projects can be evaluated yet, as none have been completed. Even so, without additional performance indicators relevant to the goals of the Excellence approach, OBO has no way of knowing if its new process is achieving the desired results. It also lacks an important tool for reporting on the Excellence approach to congressional overseers, the public, and other State stakeholders such as other U.S. diplomatic agencies that must help pay some of the costs for constructing and maintaining new embassies. OBO Is Exploring Ways to Better Track and Evaluate Long-Term, Project-Level Performance OBO also lacks post-specific performance measures to track and evaluate the long-term performance of its embassies. According to Office of Management and Budget guidance, more than 80 percent of a building's total cost over its lifespan can consist of ownership costs such as operations, maintenance, and energy usage.
When combined with front-end costs such as design and construction, these costs embody a project's "life-cycle costs." OBO has attempted to address long-term operations and maintenance costs on the front end by, for example, committing to include LEED Silver certification in its design standards since 2009, according to OBO officials. Other sustainability "stretch" initiatives OBO considers desirable (though not required) under Excellence include trying to achieve LEED Platinum certification, increasing use of renewable energy sources, reducing greenhouse gas emissions, and achieving net-zero energy and water consumption on its compounds, whereby enough renewable energy or water is generated to meet a post's requirements. OBO design standards also require design firms to incorporate operations and maintenance cost analysis into embassy designs through sustainability studies. OBO reported that, from 2001 through 2015, it constructed 26 new LEED-certified embassy or consulate office buildings in locations where it built a new diplomatic compound; 19 of these were SEDs. However, despite the additional emphasis now placed on operations and maintenance on the front end during design, OBO has no post-specific performance measures related to operations and maintenance cost performance after a new embassy is constructed. One reason for this is the lack of available or reliable data. OBO officials stated that although some embassies do have utility meters on site, getting data from there back to Washington, D.C., is challenging. While OBO does have a data system in place to capture some operations and maintenance information, such as utility usage, it is dependent upon manual entry of data at each specific post. According to OBO officials, this lowers data reliability, and differences in data entry compliance by posts over time make historical analysis of operations and maintenance costs difficult.
Also, while some posts have building-level meters for the main office building, other posts have compound-level utility meters that track data for multiple buildings, making broader data comparisons difficult across posts. OBO is taking some steps to address this situation. According to OBO’s 2016 Guide to Excellence, OBO is in the process of developing project and portfolio operations and maintenance cost assessment procedures to account for these costs over the estimated life of embassies. In 2016 OBO initiated an effort to develop a methodology and process to better assess the full life-cycle cost of its projects. OBO’s July 2016 statement of work on its Life-Cycle Cost Assessment effort shows that OBO intends to develop a methodology and plan to assess the total cost of ownership of projects and facilities, which takes into account the costs of (1) acquisition, (2) design, (3) construction, and (4) operations and maintenance. According to OBO officials, this effort represents a gradual shift in OBO’s orientation, whereby OBO’s portfolio is expected to reflect less emphasis on new construction and greater attention to maintenance, repair, and renovations. Therefore, decisions must be made regarding what metrics should be tracked. OBO is also working with State’s Office of Management, Rightsizing, Policy, and Innovation on a pilot effort—called “MeterNet”—with the intent to install more metering systems on embassy compounds and to transmit performance data back to Washington. Under MeterNet, State intends to automate and improve the collection of data on electrical energy usage (both utility and renewable sources), water usage, and fuel consumption. According to OBO officials, MeterNet should relieve posts of manual data entry and also enable OBO to more accurately monitor, collect, and analyze more reliable data on sustainable energy and water performance. 
OBO anticipates that this in turn will enable facility managers to manage energy consumption data across State’s facilities, as well as analyze and track energy usage trends over time, such as energy per square foot or overall electricity demand. OBO has also been working with the Department of Energy to address challenges with its existing utility data system. According to OBO officials, OBO has not yet determined how MeterNet will interface with OBO’s existing data systems. The steps OBO has taken to improve monitoring of post-specific operations and maintenance costs are at a very early stage. Until OBO clearly defines a process to assess the performance of its projects after construction and establishes reliable data systems to track and report this performance, OBO will lack an essential tool for determining whether completed projects—whether Excellence or SED facilities—are performing as intended by their designs from either a sustainability (e.g., energy and water usage) or an operating and maintenance cost standpoint. OBO Is Exploring a Centralized Data Solution to Better Manage Projects and Track Cost and Schedule Performance OBO currently lacks easily accessible data to provide overall project management information. During our review, we requested a variety of data related to OBO’s embassy construction projects from January 2001 through September 2015, such as contract award amounts, site acquisition costs, delivery method, completion dates, and other data. However, OBO was unable to easily provide such information. According to OBO officials, while these data did exist and could be retrieved, the data were not available in any centralized data source. Rather, each OBO office maintained separate data relevant to its own operations, and so consolidated and current data to provide overall project information were unavailable. 
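The total-cost-of-ownership framing discussed above (acquisition, design, construction, and operations and maintenance over a building's service life) can be made concrete with a simple sketch. Every dollar figure below is a hypothetical assumption, not OBO data; the values are chosen only so the result roughly matches the OMB rule of thumb, cited earlier, that ownership costs can exceed 80 percent of a building's lifespan cost:

```python
# Illustrative life-cycle (total cost of ownership) calculation.
# All dollar figures are hypothetical assumptions, not OBO data; the four
# cost components follow the breakdown in OBO's Life-Cycle Cost Assessment
# statement of work: acquisition, design, construction, and O&M.

acquisition = 30    # site acquisition, $M (assumed)
design = 20         # design, $M (assumed)
construction = 150  # construction, $M (assumed)
annual_om = 17      # operations, maintenance, and energy per year, $M (assumed)
service_life = 50   # years of building operation (assumed)

ownership = annual_om * service_life
total = acquisition + design + construction + ownership
ownership_share = ownership / total

print(f"Ownership costs: ${ownership}M of ${total}M life-cycle total "
      f"({ownership_share:.0%})")
```

Under these assumed figures, ownership costs dominate the life-cycle total, which is why post-construction operations and maintenance data matter so much to any long-term assessment.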
OBO offices consolidate certain project management information in periodic project performance reviews, whereby individual offices and project teams report on cost, schedule, and scope for specific embassy projects. However, while this can facilitate some data retrieval from a specific ongoing project, according to OBO officials it can be difficult and time-consuming to find information on older, completed projects for which there is no longer an active project team. We reported on similar data issues in September 2014, when OBO could not provide all of the real property files we requested, at that time also citing a lack of centralized data, maintenance by different groups within OBO, and difficulty of retrieval. According to federal internal control standards, quality information is an essential tool for agency management to achieve an agency's objectives. According to these standards, a process should exist that uses the agency's objectives and related risks to identify the information requirements needed to achieve the objectives and address the risks. Furthermore, such data should be sufficiently relevant, reliable, and accessible to agency management. Additionally, OBO's 2011 Excellence decision memo cited the need for a comprehensive information technology platform that would integrate and make available all OBO project information; promote effective review, communication, and decision making; and support the maintenance and operations of completed facilities. No such system existed at OBO when we made our data request in October 2015. In response, OBO began assembling a wide range of project management data to fulfill our request, as well as to better provide information to Congress. We received these data in the form of a spreadsheet 10 months later in August 2016.
OBO officials attributed this delay to the aforementioned difficulty of retrieving historical project data as well as having to address concurrent information requests from Congress and State’s Inspector General. The database we received covers projects from January 2001 through September 2015, includes many elements of the information we requested, and also includes some other information useful to OBO management. According to OBO officials, these data were compiled by OBO office units and project managers based on the latest documentation available. OBO recently established an initiative—termed the Ideal Operational State—to explore long-term ways to centralize and standardize data collection across OBO’s operations. According to OBO officials, this Excellence-related initiative is intended to provide a long-term data solution that will allow for better program management across OBO’s business activities as well as better tracking of project metrics such as cost and schedule performance. The study group tasked with assessing OBO’s current information technology systems and potential market alternatives held a kickoff in May 2016 and, after a series of working sessions and vendor evaluations, recommended a series of actions to OBO’s senior management, including an upgrade and modification of existing OBO management software. OBO management approved action on these recommendations in October 2016. Until OBO develops an effective, centralized data system capturing essential and reliable project management data as well as cost and schedule performance across its project portfolio, not only will OBO management lack a critical tool for consolidating key project data, assessing performance, and guiding strategic oversight and internal control, but it will also be hampered in responding to oversight queries by Congress, GAO, and State’s Inspector General. 
Conclusions At the heart of OBO's changes under the Excellence approach is the premise that greater design focus and control will produce more innovative, functional, and sustainable embassies that are just as secure as those built using the SED but that will be more cost-efficient to operate and maintain. From fiscal year 2011 through fiscal year 2015, OBO allotted hundreds of millions of dollars to fund more customized designs rather than applying a standardized design to build new embassies. Though a greater upfront investment in design may yield embassy improvements, it carries with it increased risk to project costs and schedule. While OBO is attempting to manage this risk, without strategic or project-level performance measures specific to the goals of Excellence, OBO cannot fully assess the merits of this new approach. Furthermore, as projects initiated during Excellence's implementation come to fruition and begin operations, such measures will be essential to any long-term assessment of their performance. Establishing reliable data systems to measure, record, and report on building performance can help OBO management evaluate all costs that occur over a building's lifespan. Further, centralized project management data are also needed to allow OBO to quantify and assess design and construction costs under Excellence for each project. While OBO has begun efforts to establish such systems, it will take time to complete these initiatives and collect these crucial data. Nevertheless, these steps are essential to creating safe and lasting buildings that best represent the United States, to ensuring that projects make efficient use of resources, and to assessing the value of shifting to the Excellence approach rather than continuing to use the SED. Recommendations for Executive Action To better assess OBO's performance, we recommend that the Secretary of State take the following four actions: 1.
Determine whether the existing OBO program performance measure and annual target of moving 1,500 people into safe, secure, and functional facilities is still appropriate or needs to be revised. 2. Establish additional performance measures applicable to the new goals of the Excellence approach in support of the Capital Security Construction Program. 3. Finalize the mechanisms OBO will use to better track and evaluate the actual operations and maintenance performance of its buildings— whether Excellence or SED—and document through appropriate policies, procedures, or guidance. 4. Finalize the mechanisms OBO will use to centrally manage project management data (to include project cost and schedule information), currently termed the Ideal Operational State, and document through appropriate policies, procedures, or guidance. Agency Comments We provided a draft of this report to State for comment. State provided technical comments on the draft, which we incorporated as appropriate. State also provided written comments that are reproduced in appendix VIII. In its written comments, State concurred with our four recommendations and described actions planned or under way to address each of them. State said it will 1. Perform a comprehensive evaluation of its performance measure and annual target of moving 1,500 people into safe, secure, and functional facilities and determine whether that target remains appropriate. 2. Develop new metrics applicable to the Excellence approach. 3. Finalize the mechanisms it will use to better track and evaluate the actual operations and maintenance performance of its buildings, stating that this will occur after its life cycle cost analysis methodology project produces its final report. 4. Finalize the mechanisms OBO will use to centrally manage project management data, noting that State expects the ultimate product of this multiyear effort to provide a comprehensive framework for managing project data. 
We are sending copies of this report to the appropriate congressional committees and the Secretary of State. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact either Michael J. Courts at (202) 512-8980 or at courtsm@gao.gov or David J. Wise at (202) 512-5731 or at wised@gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IX. Appendix I: Objectives, Scope, and Methodology This report examines (1) reasons for the Department of State’s (State) shift to its Excellence approach, (2) key elements and trade-offs of the new approach, and (3) the extent to which State has established guidance and tools to implement and evaluate its Excellence approach. To conduct this review, we obtained and analyzed information from State policy, planning, funding, and reporting documents, administrative memos, and select project documentation. We also interviewed officials from State’s Bureau of Overseas Buildings Operations (OBO); Bureau of Diplomatic Security (DS); Office of Management, Rightsizing, Policy, and Innovation; and Office of Acquisitions Management. Within OBO, we spoke with officials from offices responsible for site acquisition, planning, project development, design and engineering, cost management, construction management, facility management, policy and program analysis, and financial management. We also interviewed officials from a variety of architecture and engineering (design) firms and construction contractors that have worked for State. Additionally, we met with experts from several industry groups. In general, we did not review acquisition plans, the complete contracts for each project, or the terms and conditions that could have impacted cost, schedule, and performance of any project. 
To identify the reasons for State’s shift to its Excellence approach, we analyzed relevant industry studies and OBO assessments from before the introduction of Excellence. We also examined the outputs from OBO’s 2011 Excellence working groups as well as other Excellence documentation, such as OBO’s “Guiding Principles of Design Excellence in Diplomatic Facilities” and OBO’s 2011 memo approving the Excellence approach. Also, because the decision to adopt Excellence was made in 2011—and the work leading up to the decision was undertaken in 2010— we interviewed key former OBO officials with direct experience with OBO’s efforts to improve the Capital Security Construction Program at that time, including some who served on OBO’s management steering committee for Excellence. To examine the key elements and trade-offs of the new approach, we collected and analyzed OBO policy and procedures directives, administrative memos, budget documentation, project authorization documents, design standards, and design-related documentation. We discussed changes in OBO’s process with relevant officials from OBO, DS, and the Office of Acquisitions Management. We also discussed these changes with officials from design firms and construction contractors that had previously worked, or are currently working for State. Furthermore, we consulted industry studies and spoke with experts from industry groups, including the American Institute of Architects, the Associated General Contractors of America, the Bridging Institute of America, and the Design-Build Institute of America to determine the trade-offs inherent to different delivery approaches. To determine the extent to which State has established guidance and tools to implement and evaluate its Excellence approach, we examined changes to OBO’s policies and procedures directives, design standards, standard operating procedures, and other guidance since 2011. 
We compared these changes to goals and recommendations from OBO’s approval of Excellence and also reviewed an OBO-sponsored study of its implementation progress. Additionally, we reviewed strategic planning documentation, to include State’s strategic plan, OBO’s Functional Bureau Strategy, and State’s Annual Performance Report. We also consulted federal standards for internal control and business process reengineering guidance. We also met with officials from OBO and the Office of Management, Rightsizing, Policy, and Innovation to discuss efforts to evaluate embassy buildings and to improve OBO’s data management. To supplement our findings, we conducted a web-based survey of OBO staff from July 15 through August 12, 2016, soliciting their views on the sufficiency of OBO’s strategic vision, policies, procedures, and technical guidance for the Excellence approach as well as any particular efficiencies or challenges brought about by the Excellence approach. This survey was sent to 1,511 OBO staff, 705 (47 percent) of whom responded. We do not make any attempt to extrapolate our findings to the remaining 53 percent of eligible employees who chose not to complete our survey. The results of our survey provide measures of employees’ views at the time they completed the survey in July and August 2016. Because we surveyed all OBO staff, the survey did not involve sampling errors. To minimize nonsampling errors, and to enhance data quality, we employed recognized survey design practices in the development of the questionnaire and in the collection, processing, and analysis of the survey data. To minimize errors arising from differences in how questions might be interpreted and to reduce variability in responses that should be qualitatively the same, we conducted pretests with six OBO employees. 
To ensure that we obtained a variety of perspectives on our survey, we randomly selected one employee from each of the following offices to pretest the survey: Area Management; Construction, Facility, and Security Management; Design and Engineering; Planning and Real Estate; Program Development, Coordination and Support; and Security Management. Based on feedback from these pretests, we revised the survey in order to improve the clarity of the questions. An independent survey specialist within GAO also reviewed a draft of the questionnaire prior to its administration. To reduce nonresponse, another source of nonsampling error, we followed up by e-mail with employees who had not responded to the survey to encourage them to complete it. To analyze open-ended comments provided by those responding to the survey, we conducted a content analysis in two steps. In the first step, analysts read the comments and jointly developed categories for the responses. In the second step, each open-ended response was coded by one analyst, and then those codes were verified by another analyst. Any coding discrepancies were resolved by the analysts agreeing on what the codes should be. Additionally, many comments touched upon findings we developed through our separate audit work. We have included some of these comments for illustrative purposes in appendix III. Respondents generally provided more negative comments than positive ones; however, where possible, we have tried to present a balanced set of positive and negative comments. In some cases, we edited responses for clarity or grammar. Views expressed in the survey may not be representative of all OBO staff views on given topics. We conducted this performance audit from August 2015 to March 2017 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Results of GAO’s Survey of Staff of the Department of State’s Bureau of Overseas Buildings Operations We conducted a web-based survey of the Department of State’s (State) Bureau of Overseas Buildings Operations (OBO) staff from July 15 through August 12, 2016, soliciting their views on the sufficiency of OBO’s strategic vision, policies, procedures, and technical guidance for the Excellence approach to the design and construction of U.S. embassies and consulates, as well as any particular efficiencies or challenges brought about by the approach. We sent the survey to 1,511 OBO staff, 705 (47 percent) of whom responded. We do not make any attempt to extrapolate the findings to the remaining 53 percent of eligible employees who chose not to complete our survey. The results of our survey provide measures of employees’ views at the time they completed the survey in July and August 2016. The questions we asked in our survey are shown below. Our survey comprised both fixed-choice and open-ended questions. In this appendix, we include all survey questions and aggregate results of responses to the fixed-choice questions and the number of responses provided to the open-ended questions. We do not provide text of the responses to the open-ended questions. For a more detailed discussion of our survey methodology, see appendix I. For our summary analysis and selected examples of comments provided in response to open-ended questions, see appendix III. Numeric Responses to GAO’s Survey of State’s Embassy Construction Program 1. About how long have you worked with OBO? 
Less than 1 year: 76 respondents
1 to 5 years: 218 respondents
6 to 10 years: 150 respondents
11 to 20 years: 170 respondents
More than 20 years: 89 respondents
Provided no answer to this question: 2 respondents
2. Which of the following categories best describes your position?
Foreign Service
PSC
Other: _______________________
Provided no answer to this question
264 respondents; 214 respondents; 134 respondents; 86 respondents; 7 respondents
3. How many new NEC/NCC projects have you worked on or supported as an OBO employee? Please do not include annexes or where you were covering for someone else.
None
1 to 3
4 to 6
7 to 12
More than 12
Provided no answer to this question
127 respondents; 162 respondents; 100 respondents; 72 respondents; 239 respondents; 5 respondents
4. Where is your current posting?
Headquarters (SA-6): 501 respondents
Overseas: 196 respondents
Provided no answer to this question: 8 respondents
5. In which OBO office do you currently work?
Operations (OPS): Area Management; Fire Protection; Safety Health and Environmental
Resource Management (RM): Financial Management; Office of the Executive Director; Policy and Program Analysis
8 respondents; 25 respondents; 5 respondents; 52 respondents; 42 respondents; 3 respondents
Other: ____________________________
I’d rather not identify my office
Provided no answer to this question
6. Do you agree or disagree with the following statements about the program direction of OBO’s construction program? What perspectives or specific examples can you provide to illustrate your answers?
7. Since 2011, what, if any, benefits or efficiencies—related to how OBO now plans, designs, constructs, and maintains NECs/NCCs—have been introduced within your specific area of expertise (i.e., site acquisition, planning, cost estimating, project management, scheduling, design, construction, security management, and facility management)? We received 373 written responses to this question.
8.
Since 2011, what, if any, challenges or inefficiencies—related to how OBO now plans, designs, constructs, and maintains NECs/NCCs—have been introduced within your specific area of expertise (i.e., site acquisition, planning, cost estimating, project management, scheduling, design, construction, security management, and facility management)? We received 387 written responses to this question. 9. To the extent you have knowledge, how would you characterize the Excellence program (roughly 2011 to present) as compared to the SED program (roughly 2001 to 2011) in terms of producing diplomatic facilities that are outstanding in all respects, including security, architecture, construction, sustainability, operations and maintenance? 1. The Excellence program is generally more effective than the SED program. 2. The SED program is generally more effective than the Excellence program. 3. The Excellence program is generally as effective as the SED program. 4. No opinion/no basis to judge Provided no responses to this question 9a. What specific examples regarding security, architecture, construction, sustainability, or operations and maintenance can you provide to illustrate your answer? We received 403 written responses to this question. 10. What additional information, if any, would you like to share in order to further elaborate on any of the responses you provided above? We received 338 written responses to this question. Appendix III: Selected Comments from Our Survey of Bureau of Overseas Buildings Operations Staff, with Summary Tabular Analyses Of the 705 respondents, 550 provided comments in response to at least one open-ended question in our survey of Department of State Bureau of Overseas Buildings Operations (OBO) staff. For specific questions, we analyzed and categorized respondents’ comments and have reproduced selected comments below to characterize the results of that analysis. 
In addition, since many of the comments touched upon findings we developed through our separate audit work, we have also included some of those comments for illustrative purposes. Respondents generally provided more negative comments than positive ones; however, where possible, we have tried to present a balanced selection of positive and negative comments. In some cases, we edited responses for clarity or grammar. Views expressed in the survey may not be representative of all OBO staff views on given topics. Table 9 summarizes the results of our analysis to categorize comments expressing opinions on the benefits and challenges of OBO’s Excellence approach to the design and construction of new embassies and consulates. The two text boxes that follow contain selected narrative responses on, respectively, the top four most-cited benefits and the top four most-cited challenges of the Excellence approach. Selected Survey Comments on the Top Four Most-Cited Benefits of the Excellence Approach Development and/or improvement of standards, processes, procedures, templates, documents, etc.: The 2016 OBO Design Standards have greatly improved the design process. We’ve started doing constructability reviews during design, which is helpful at making sure the designers are realistic in what they propose and that projects can be implemented in the specific region. Improved coordination; facilitating input from different stakeholders/internal teams; consensus and leadership around program objectives: The different offices in OBO seem to work more closely together through the life of the project. There is still some “stove-piping,” but it generally works better than previous years. Seems to be better coordination between OBO and the embassies and consulates. Increased focus on maintenance/sustainability/life cycle analyses: OBO has recently directed its attention to the large role Facility Management undertakes during and after the construction of an embassy.
It takes OBO 3 to 5 years to get a facility constructed, but OBO Facility Management has the responsibility for operations and maintenance for the following 50 years of life. Facility Management now has a role in design and is working toward obtaining a meaningful role in the construction phase. Operations and Maintenance (O&M) are finally being brought into the full realm of planning, design, and construction of new facilities by current management. The greatest expense to the government is not in the planning or the design or the construction of new facilities, it is in the O&M of facilities over their 50-year average life cycle. OBO must bring greater focus as an organization on not just design excellence but on the “Total Ownership Cost” by incorporating the facility management experts into every single facet in order to minimize the O&M costs to the taxpayer over that 50-year life cycle. This will require resources, not currently at hand. Greater control/flexibility of designs and site selection: My understanding is that prior to 2011 most projects were SED design-build. While this is a very efficient delivery method for some projects, it is not necessarily the best fit for every project. I believe that the Excellence Initiative provides the flexibility to review and select the most appropriate design and delivery method for each project, taking into account the unique budget, schedule, and site-specific parameters each project has. Since 2011, we have co-led the effort to search for and legitimize smaller sites in downtown or urban locations. Prior to 2011, the practice was to find overscaled sites outside of the urban core. The Excellence initiative allows for greater flexibility and customization to a specific site (relative to SED), allowing for greater efficiencies. Those efficiencies can, and often do, lead to reduced construction and operational costs. Legend: OBO = Bureau of Overseas Buildings Operations; SED = Standard Embassy Design.
Selected Survey Comments on the Top Four Most-Cited Challenges of the Excellence Approach Lack of, inadequate, or inconsistent application of policies/procedures/standards/systems; uncertain impact of new policies, etc.: The transition from “design-build” back to “design-bid-build” construction has been poorly implemented. Project design documents have not fully achieved the transition to the level of quality and detail required for overseas “design-bid-build” construction. The effort to move from standard design documents has further made achieving the level of quality for construction and security more difficult. Clear, applicable, specific, and enforceable standards have been watered down or replaced with less specific and in some cases tentative suggestions. Senior OBO management has difficulty objectively articulating design excellence goals or even attempting to measure results. Contract performance in particular is difficult to measure or, in some cases, not obtained, due to a lack of quantifiable criteria. New/slow/problematic processes and/or requirements resulting from more complex and varied projects: Challenges have been introduced in realizing design intent in most of the underdeveloped countries that construction occurs in. Designs are complex and the materials exotic for the location. Inefficiencies have been introduced in requiring senior management reviews of projects without fully defining what the intent of such reviews is. Challenges have been introduced in the constructability of building features. Challenges have been introduced with reliance on outside architect firms to develop plans and drawings and to judge design intent or answer questions. My required time spent on projects has at least doubled because of the policies put in place since 2011.
In terms of improvements, more aesthetically pleasing facilities are being produced, but in my opinion the amount of effort required to attain good design is very disproportionate to the effort required to achieve this goal. This is due to a poorly organized process and a severe lack of communication throughout the organization. Schedule challenges; extended timelines: Schedules are consistently moved forward with longer timelines to accomplish the design portion of the project. Planning and design take much more time and effort than before. Assurance of meeting security criteria is challenging, since every new embassy is a “one of a kind” project. More daunting is the process by which the Front Office approves design concepts. That alone has added 3 to 6 months to the planning and design schedule. Budget challenges; high costs: Since no two posts have the same size, plan, finish materials, exterior components, or operating systems, I see no efficiencies whatsoever. The push for high initial budgets is to be able to cover the costs associated with the fancier design elements. Site acquisition is more difficult and more costly than ever. Only urban locations and not cheaper, bigger suburban sites are even considered. Project design costs are exponentially higher through the Excellence program. Higher costs mean fewer projects being planned for or constructed in any given year. Legend: OBO = Bureau of Overseas Buildings Operations; SED = Standard Embassy Design.
Under Excellence (compared with SED):
Excellence costs more compared with SED (“budget challenges”)
Excellence takes more time compared with SED (“schedule challenges”)
Excellence introduces new/slow/problematic processes and/or requirements resulting from more complex and varied projects
Lack of, inadequate, or inconsistent application of policies/procedures/standards/systems; poor communication of policies, etc.
Legend: — = No contrasting comments were identified in this category.
Selected Survey Comments Regarding the SED and Excellence Approaches The SED model streamlined many processes which apparently translated into a more expedient overall delivery of new facilities. The Excellence program successfully addresses many of the SED program drawbacks (“embassies looking similar and like fortresses”), however, at a price (arguably longer and more expensive projects). The decision clearly needs to be made whether it is worth that price. While the SED program was severely limiting, it is my belief that there is a middle ground between the SED and the Excellence program—where OBO has a set of standards, specifications, and requirements that are clearly communicated to the design firm while allowing them to customize the footprint and design features of the building. OBO has buildings in its portfolio that were built using “SED Criteria” that are not “prison-like” and forbidding. It can be done in a thoughtful and efficient way that would appeal to people architecturally. In plain words—OBO didn’t have to throw the baby out with the bathwater. On the positive side, designing unique facilities improves the aesthetics of the U.S. presence abroad and sets an example for building system efficiency and innovative systems to the world. On the negative side, each unique design requires re-inventing the wheel and creates additional challenges for the designer to integrate physical and technical security, as well as accounting for building maintenance and upkeep. These designs are often difficult to implement overseas, and the state-of-the-art systems are difficult to maintain in some countries with unique equipment and a significant increase in facilities staff. Unique designs are more difficult and frequently more expensive to implement, creating an impact on time and cost. 
The Excellence program designs buildings that are architecturally pleasing, but in order to achieve nice aesthetics and achieve the security mandates that embassies must adhere to, I believe a cost premium exists in order to meet the aesthetics of the embassies designed under the Excellence program. I think a compromise can be achieved, and in that compromise some cost savings could be realized, but in my opinion the pendulum has swung completely opposite that of the SED, and the evidence is in a cost comparison between the embassies designed under the SED program and those designed under the Excellence program. The designs of new facilities are more challenging or difficult to introduce, implement, and execute. Every design is unique and different, takes longer to evaluate, and requires more coordination; more issues are encountered that need to be resolved. The construction of these facilities is also more challenging or difficult to execute. The distinct construction materials used are more expensive, take longer to procure and transport, and are more difficult to install. Therefore, the schedule is longer, especially if there are more issues encountered during design and construction, and the costs of construction and contingencies are much higher. SEDs were all about schedule and budget, and when either of these were threatened, things were de-scoped or glossed over. Many times those were support annexes (shops, storage facilities) that allowed the facilities to be properly maintained or morale/welfare/recreation facilities or amenities, meaning that these things had to be added after the fact, putting a great deal of pressure on posts. Excellence does a better job in terms of incorporating architectural features, energy efficiency and sustainability design, and delivering a fully realized compound.
I believe that the Excellence program produces a better product and platform for diplomacy; however, most of us also acknowledge that the SED program gave us the ability to execute faster when needed. They should not be mutually exclusive. There are principles in the SED that can be applied and probably should not have been just “thrown out.” It should have been a good solution for an expeditious need, in a place where appropriate. We also know that it did not work well everywhere, and in places with stringent code, zoning, or other restrictions (including the desire to operate in an urban context), it was definitely not a good solution. As long as the Front Office doesn’t focus solely on aesthetics, the Excellence program should produce some outstanding facilities that have far more flexibility than the SED, better functionality, and improved suitability for the country in which they reside. Legend: OBO = Bureau of Overseas Buildings Operations; SED = Standard Embassy Design. Many responses to open-ended questions in the survey touched upon issues we reviewed through our separate audit work. We have included some of these responses for illustrative purposes in the text boxes that follow. In some cases, we edited comments for clarity or grammar. Selected Survey Comments on Concerns with SED The SED program was a failure for site specific implementation. It is generally known around OBO that there was no such thing as a real SED building. Due to the technical requirements and the site specific requirements, all SED compounds required adaptation, usually at significant cost and time. The SED had plenty of problems. Yes, the main issue was the boring and unimaginative concrete box; however, it did produce embassies. There is a middle road that opens up the options and opportunities for improved design without going overboard. 
The SED program was effective in quickly and efficiently providing safe and secure facilities to perform diplomatic activities overseas using the design-build delivery method. However, the designs were not very attractive and didn’t represent the best in American architecture. Legend: OBO = Bureau of Overseas Buildings Operations; SED = Standard Embassy Design. Selected Survey Comments on the Desire to Improve Embassies under the Excellence Approach The Excellence initiative is a better way to design the Department’s projects and buildings. It takes advantage of innovation in the design generated by architects and engineers from different points of view. The SED program produced more and more generic buildings that translated into more or less a template mentality. The architecture is less fortress-like, more approachable to local populations, more culturally sensitive (more diplomatic). It is more attractive and something that Americans can be prouder of, as well as being greener and increasingly sustainable. From my limited experience with OBO, it appears as though the Excellence program attracts better design firms to the program. Better design firms should result in a better end product. The Excellence program also appears to result in more excitement from the host country and allows for embracing culturally specific designs. Legend: OBO = Bureau of Overseas Buildings Operations; SED = Standard Embassy Design. Selected Survey Comments on Industry and Senior OBO Management Design Reviews The incorporation of the Industry Advisory Reviews gives design firms real-time feedback from their peers on not only how to support the Excellence program, but how to save costs and make buildings more efficient during the design process. I think that the new processes are moving forward, and we are seeing some positive results with stronger designs and, hopefully, construction.
For example, there were positive reactions by both the staff and private sector architects to the Industry Advisory Reviews that I attended. Too much time and effort is spent on reviews by OBO Front Office and outsiders. Front Office seems to be inconsistent in what it wants in a design and will change its mind from one review to the next. We are hiring professional design firms. They should be able to do their jobs. The design process has been significantly impacted by the introduction of numerous senior management and Industry Advisory Group reviews. These reviews are expensive and time consuming. An independent evaluation of their value versus their cost is warranted. Legend: OBO = Bureau of Overseas Buildings Operations; SED = Standard Embassy Design. Selected Survey Comments on OBO Guidance Related to the Excellence Approach I do believe OBO as a whole is trying to maintain and keep up with outdated policies. In my observations OBO is very understaffed or doesn’t have the right mix of professionals necessary to maintain policy and regulations. There is no mention anywhere of changes to the security program brought on by Design Excellence. In my opinion, no one has ever thought through what it means to construct anything other than a SED. There is no Design Excellence guidance for security professionals. The Guiding Principles are widely available to staff and clearly reflect the vision of the senior leadership. The Design Excellence guide took 2 years to write; then they wanted the operations and maintenance piece added but gave us initially only 2 weeks. Case in point. Just yesterday an OBO notice came out on the Excellence Initiative. No doubt it had something to do with the timing of this survey. OBO has been working to modify policies and procedures directives (PPDs), etc.; however, it takes time, and it is difficult to get people to take the time away from their regular duties to step back and look at policies. 
Legend: OBO = Bureau of Overseas Buildings Operations; SED = Standard Embassy Design. Selected Survey Comments on the Need for Excellence-Specific Performance Metrics I have not seen any performance metrics that compare before/after the Excellence Initiative. What are the performance metrics used to measure this? Senior OBO management has difficulty objectively articulating Design Excellence goals or even attempting to measure results. I am not aware of any metrics for evaluating the “Excellence” of a project and the many facets involved in assessing a building. What metrics does OBO employ to measure “effectiveness” or “efficiency,” specifically, strategically? Answer: none. Each project takes as long as it takes. The project costs what it costs. Several presentations have been given and a few documents produced to try to articulate Design Excellence; however, much of the concept is subjective, due to the emphasis on architecture, and cannot be measured or effectively monitored for performance. There are no real performance measures; total operations and maintenance costs are unknown. Legend: OBO = Bureau of Overseas Buildings Operations; SED = Standard Embassy Design. Selected Survey Comments on the Need for Better Project Data and Systems The project database does not have a user manual or guide, either. Who does this? Why create a database and never provide a user guide or how-to manual for the user of the data? We are supposed to have a new and better projects database. We have used the same databases/networks and computers for the last 10+ years. There is not enough server space to store the size/volume of electronic files/docs that we produce. We hope, however, that the “Ideal Operational State” research/data collection effort will resolve some of these technological issues. The Ideal Operational State effort is an excellent opportunity to manage the Life Cycle Cost Analysis of the entire business process life cycle.
The existence of the Ideal Operational State, while itself a very good thing, indicates the need for more strategic alignment of process with, and definition of, a vision. Legend: OBO = Bureau of Overseas Buildings Operations; SED = Standard Embassy Design.
Appendix V: Guiding Principles of Excellence in Diplomatic Facilities; U.S. Department of State, Bureau of Overseas Buildings Operations
Intent: Embassies and consulates have two essential purposes: to be safe, secure, functional, and inspiring places for the conduct of diplomacy, and to physically represent the U.S. government to the host nation.
The site and location of an embassy have practical as well as symbolic implications. OBO will develop sites that best represent the U.S. government and its goals, and enhance the conduct of diplomacy. Whenever possible, sites will be selected in urban areas, allowing U.S. embassies to contribute to the civic and urban fabric of host cities.
OBO will evaluate designs on the basis of their success in skillfully balancing requirements, and on how well the design represents the United States to the host nation. Designs are to be functionally simple and spatially flexible to meet changing needs and be enduring over time. An official embassy style will be avoided. Designs will be cost-effective. Each design will be responsive to its context, to include the site, its surroundings, and the local culture and climate. The designs will make use of contextually appropriate and durable materials and incorporate the latest in security and safety features. The grounds should be functional and representational spaces. They will be sustainable, include indigenous plantings, and incorporate existing site resources, such as mature trees.
Construction and craftsmanship: The facilities will incorporate advanced methods, systems, technologies, and materials appropriate to the facility and local conditions, including the site, climate, natural hazards, security, and the practical reality of construction, maintenance, and operations in the host nation.
The safety and security of staff and visitors is paramount. Designs and construction will meet or exceed all security and safety standards and specifications.
Buildings and grounds will incorporate sustainable design and energy efficiency. Construction, maintenance, and operations practices will be sustainable.
OBO will hire leading American architects and engineers. Selection will be based on the quality of their design achievements and portfolio of work. The selection methodology will be open, competitive, and transparent.
Construction professionals will be engaged throughout the process to ensure the best possible design and implementation. OBO is committed to selecting the most qualified building contractors with a record of delivering high-quality projects.
Operations and maintenance: Operations and maintenance professionals will be engaged throughout the design and construction process. Buildings and sites will be economical to operate and maintain and will utilize equipment and materials that are durable, dependable, and suitable. Designs will be based on life-cycle analysis of options that take into account long-term operations and maintenance.
Embassy buildings and grounds are an opportunity to showcase the best of American and host nation art and culture. OBO is committed to integrating art into its facilities such that each property will be both an individual expression of Excellence and part of a larger body of work representing the best that America’s designers and artists can leave to later generations.
Historically, architecturally, or culturally significant properties and collections:
OBO is committed to preserving the State Department’s historical, cultural, and architectural legacy. The Secretary of State’s Register of Culturally Significant Property is the official listing of important diplomatic architecture overseas and properties that figure prominently in our country’s international heritage.
Appendix VI: Approaches to Achieve Excellence; U.S. State Department, Bureau of Overseas Buildings Operations (OBO)
Holistic approach to project delivery: Project teams include all key stakeholders such as users, tenant agencies, and OBO disciplines, as well as members of the architectural, engineering, and construction contractor teams.
Information technology (IT): OBO’s IT platform integrates and makes available all project information, promoting effective review, communication, and decisionmaking during project development, construction, maintenance, and operations.
OBO uses either Design/Build or Design/Bid/Build. Neither delivery method is a default. Context, complexity, construction environment, and urgency are evaluated when selecting a method for each project.
OBO recognizes the representational and symbolic importance of site location. OBO has revised site scoring criteria to acquire sites in urban areas. OBO considers redevelopment of strategically located U.S. government-owned sites.
Architect and engineer (A-E) selection: OBO contracts with the most talented A-E firms, whether long-established or emerging new firms. The selection process focuses on the portfolio of work, team members, and past performance.
Design process: OBO effectuates high-quality design through design processes such as on-site design charrettes, on-board working sessions, constructability and maintainability reviews, senior management approvals, and peer reviews.
The Guiding Principles outline the fundamental design goals of all of our projects. These include the integration of purpose, function, security, safety, flexibility, sustainability, maintainability, and art.
OBO uses sustainability principles and life-cycle cost analysis to ensure that facilities provide the lowest overall long-term cost of ownership, consistent with quality and function.
Construction contractor selection: OBO is working to expand the pool of contractors and reach out to new emerging firms to promote competition and ensure the best outcome.
Best value contract award: OBO is using the Best Value method, which includes factors such as past performance and team qualifications, as well as consideration of life-cycle costs in the evaluation process.
Early contractor involvement: OBO involves construction contractors in early stages, particularly on long-term and complex projects, to ensure the best outcome and reduce risk.
Operations and maintenance: Reference guides and training programs are developed for each major project. The guide includes information such as design intent, systems information, maintenance requirements, and troubleshooting. This ensures that facilities are operated and maintained properly and that future modifications to the building are in keeping with the original design intent.
A Guide to Excellence in Diplomatic Facilities has been released on the OBO website (www.state.gov/obo). The guide is comprehensive and highlights how Excellence goals and priorities will be achieved in each phase of a project. The Guide is a basic “how to” manual.
Revised architectural and engineering design guidelines: Design requirements have been revised to support Excellence. The requirements emphasize high-performance buildings, flexibility, and best design practices, while moving away from a fixed solution.
Recognizing Excellence: An Excellence awards program is in development. The program will recognize projects that exemplify Excellence.
Appendix VII: Timing of Industry Advisory Reviews within the Bureau of Overseas Buildings Operations Design Process
Appendix VIII: Comments from the U.S. Department of State
Appendix IX: GAO Contacts and Staff Acknowledgments
GAO Contacts: Michael J. Courts, (202) 512-8980 or courtsm@gao.gov; David J. Wise, (202) 512-5731 or wised@gao.gov.
Staff Acknowledgments: In addition to the contacts named above, Leslie Holen (Assistant Director, International Affairs and Trade), Michael Armes (Assistant Director, Physical Infrastructure), David Hancock, John Bauckman, Eugene Beye, Melissa Wohlgemuth, and Elisa Yoshiara made key contributions to this report. Technical assistance was provided by David Dayton, Jill Lacey, Alex Welsh, and Neil Doherty.
In 1998, terrorists bombed the U.S. embassies in Nairobi, Kenya, and Dar es Salaam, Tanzania, killing over 220 people and injuring 4,000. In 1999, State began a new embassy construction program, administered by OBO, which to date has received $21 billion, according to State. OBO's primary goal was to provide secure, safe, and functional workplaces, and it adopted SED with a streamlined, standard design for all embassies. In 2011, OBO replaced the SED with the Excellence approach, which makes use of customized designs for each embassy. GAO was asked to review the implementation of Excellence. This report examines (1) the reasons for State's shift to the Excellence approach, (2) key elements and tradeoffs of the new approach, and (3) the extent to which State has established guidance and tools to implement and evaluate its Excellence approach. GAO analyzed information from State policy, planning, funding, and reporting documents and interviewed State and industry officials. GAO also surveyed OBO staff about, among other things, the sufficiency of OBO's policies, procedures, and technical guidance for the Excellence approach. GAO will examine project cost and schedule issues in a subsequent report. In 2011, the U.S. Department of State's (State) Bureau of Overseas Buildings Operations (OBO) established the Excellence approach in response to concerns regarding the aesthetics, quality, location, and functionality of embassies built using its Standard Embassy Design (SED). The SED utilized a standard prototypical design for new embassies and consulates along with a streamlined delivery method combining responsibility for design and construction under a single contract. Under the Excellence approach, OBO now directly contracts with design firms to develop customized embassy designs before contracting for construction. 
OBO officials believe that greater design control under Excellence will improve embassies' appearance in representing the United States, functionality, quality, and operating costs. Excellence consists of several key elements and involves trade-offs. For example, OBO now allots time and funding to develop customized designs and hires leading design firms to produce them. These design firms have faced initial adjustment challenges designing U.S. embassies, and OBO only recently began evaluating their performance as required by federal regulation. OBO's new approach poses cost and schedule trade-offs since, for example, OBO now has greater design control but may also be responsible if design problems are identified during construction. GAO's survey found that OBO staff who responded held split or conflicting opinions on Excellence compared with SED. While OBO has established guidance to implement Excellence, it lacks tools to fully evaluate the performance of this new approach. Performance measures are essential tools for managers to evaluate progress toward a program's goals, as noted in Standards for Internal Control in the Federal Government. However, OBO has not established performance measures to specifically evaluate and communicate the effectiveness of Excellence in delivering embassies. Moreover, OBO's bureau-wide strategic measures do not address Excellence priorities, such as greater adaptability to individual locations, functionality, or sustainability. OBO also lacks a reliable system to monitor operating performance, such as building energy usage, and a centralized database to broadly manage the Excellence program, to include effectively reporting on projects' design and construction costs and schedules. Without performance measures and reliable systems to collect and analyze relevant data, OBO cannot fully assess the value of shifting to the Excellence approach and away from the SED.
Background VA comprises three major components: VBA, which provides benefits to veterans and their dependents; VHA, which provides health care services through the nation’s largest health care system; and the National Cemetery System, which provides burial services in 115 national cemeteries. In fiscal year 1997, VBA reported that it paid about $23 billion in benefits to about 4.4 million veterans and their dependents. VBA distributes these benefits through five programs: compensation and pension (the largest), educational assistance, housing loan guaranty, vocational rehabilitation and counseling, and insurance. VBA administers its benefits programs through one or more of 58 regional offices supported by three data processing centers. During this same period, VHA reported that it spent $17 billion providing medical care to about 3 million veterans. Such care is managed through 22 Veterans Integrated Service Networks (VISN) which are geographically dispersed throughout the country. VISNs manage 711 separate facilities: 172 VHA medical centers, 376 outpatient clinics, 133 nursing homes, and 30 domiciliaries. VA has 11 mission-critical system areas. As shown in table 1, VBA has six of these areas and VHA has two. About two-thirds of the VBA software applications—95 of the 151—are considered business priorities, essential to the department’s mission. Over one third of VHA’s applications—104 of the 283—are business priorities critical to the department. Objective, Scope, and Methodology The objective of this review was to assess the status of VBA’s and VHA’s Year 2000 programs. In conducting this review, we applied criteria from our Year 2000 Assessment Guide, and our Year 2000 Business Continuity and Contingency Planning Guide. 
We reviewed Year 2000 documents developed by VA, including its June 1997 Year 2000 Solutions document; its January 13, 1997, Year 2000 Readiness Review; and its quarterly Year 2000 reports to OMB for May 1997, August 1997, November 1997, February 1998, and May 1998. In assessing the status of VBA’s Year 2000 program, we reviewed and analyzed numerous VBA documents, including its October 1997 Year 2000 Risk Assessment; its January 1998 response to our May 1997 report, and monthly Year 2000 progress reports to VA. We also reviewed and analyzed VBA project plans, schedules, and progress reports for its replacement and conversion initiatives. In addition, we met with and/or interviewed project teams, including contractor support, in Washington, D.C., and Roanoke, Virginia, as well as computer personnel at VBA’s data centers at Hines, Illinois, and Philadelphia, Pennsylvania, and its Austin Systems Development Center to discuss their Year 2000 activities. We also discussed VBA’s Year 2000 efforts with VBA headquarters officials in Washington, D.C.; VBA regional office officials in Baltimore, Maryland; Chicago, Illinois; Roanoke, Virginia; Philadelphia, Pennsylvania; Waco, Texas; St. Paul, Minnesota; St. Petersburg, Florida; and representatives from VA’s Office of Information Resources Management. In assessing the status of VHA’s Year 2000 program, we reviewed and analyzed numerous VHA documents, including its April 30, 1997, Year 2000 Plan; its July 1997 Year 2000 Product Risk Program; its January 30, 1998, Assessment Phase Report; its strategic plan; and monthly reports to VA. We also reviewed the Year 2000 plan prepared by each of VHA’s 22 VISNs. We also reviewed and analyzed the backup and contingency plans prepared by the VISNs and VISN cost estimates. 
In addition, we met with and/or interviewed VISN project team members in Pittsburgh, Pennsylvania; Philadelphia, Pennsylvania; Wilmington, Delaware; Washington, D.C.; Baltimore, Maryland; Martinsburg, West Virginia; and Chicago, Illinois. We also met with VHA software development staff at Silver Spring, Maryland, and Hines, Illinois, to discuss progress and problems involved with their Year 2000 activities. We also discussed VHA’s Year 2000 efforts with VHA headquarters officials in Washington, D.C., and a representative from VA’s Office of Information Resources Management. We performed our work from July 1997 through June 1998, in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of Veterans Affairs or his designee. The Assistant Secretary for Policy and Planning provided us with written comments that are discussed in the “Agency Comments and Our Evaluation” section and are reprinted in appendix I. Progress Made in VBA Yet Concerns Remain Regarding Applications, COTS Products, and Contingency Planning VBA has made progress in addressing the recommendations in our May 1997 report and in making its information systems Year 2000 compliant. However, concerns remain surrounding the renovation of two key mission-critical applications, assessment of COTS software products, and contingency planning. Unless these issues are addressed, the issuance of benefits to veterans and their dependents could be delayed or interrupted. Progress Made in Addressing the Year 2000 Problem In May 1997, we made numerous recommendations designed to correct weaknesses in VBA’s Year 2000 efforts. 
These were in the areas of program management; systems assessment and prioritization; completion of inventories and development of plans for addressing internal and external software interfaces; prioritization of information technology projects to make the best use of limited resources; and development of contingency plans for critical business processes. Additionally, in September 1997, we expressed concern about the compressed schedules of key software developments and renovations, especially for VBA's largest and most critical payment system, compensation and pension. The schedule for this system was compressed because the computer analysts responsible for it were unavailable, working instead on legislatively mandated changes and cost-of-living increases. In addition, VBA had not completed assessments of its internal and external data interfaces. In concurring with all our recommendations and addressing other concerns, VBA has taken several actions. First, it changed its Year 2000 strategy from developing new systems to converting existing ones. Second, a single VBA Year 2000 project office now oversees and coordinates all VBA Year 2000 activities. Third, VBA has completed inventories for its mission-critical systems, data interfaces, and third-party products, and assessed the products in these inventories for Year 2000 compliance. Finally, VBA has developed a schedule for replacing and/or converting all noncompliant systems or products. VBA has also made progress in renovating its noncompliant software applications. Since July 1997, it has reported that the percentage of renovated mission-critical applications increased from 50 to 75. It also has reported completing renovation of its vocational rehabilitation and insurance systems, and making significant progress on its education system. VBA has also recently completed inventories and assessed Year 2000 compliance for applications developed at its regional offices.
In addition, to address our recommendations and concerns regarding its data interfaces, VBA has developed Year 2000-related agreements with its external trading partners for 278 of 287 interfaces. According to VBA's Year 2000 Project Manager, VBA has developed a bridge for five of the nine interfaces remaining without agreements so that information from these partners can be converted into an acceptable format. VBA must still work out agreements with the Department of Defense on the remaining four interfaces, which deal with payment systems. In these cases, the external partner has not yet determined the data format for the interfaces, so VBA cannot ensure that the data will be acceptable to its systems. The Project Manager informed us that VBA will continue to work with Defense in resolving this problem. Risks Remain Regarding Renovation of Mission-Critical Applications, COTS Products, and Contingency Planning Despite VBA's progress in addressing the Year 2000 problem, areas of risk remain. These include (1) limited progress in making two key mission-critical applications compliant, (2) the unexpected need to reassess its COTS software products that VBA was previously assured were fully Year 2000 compliant, and (3) the lack of business continuity and contingency plans for core business processes. If these issues are not adequately addressed, benefits to veterans could be delayed or interrupted. Making Mission-Critical Applications Compliant VBA has made limited progress in renovating its compensation and pension online application. Specifically, the level of reported renovation has remained at 18 percent for several months. As we pointed out in our September 1997 testimony, one factor in this slow pace has been that computer analysts responsible for Year 2000 renovations have been working on legislative mandates, special projects, and yearly changes, such as cost-of-living increases and clothing allowances.
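The "bridge" VBA describes converts partner data that still carries two-digit years into a format its renovated systems can accept. A common technique for this kind of conversion is date windowing, in which a pivot value decides whether a two-digit year belongs to the 1900s or the 2000s. The sketch below is purely illustrative (the function names, record layout, and 50-year pivot are assumptions, not VBA's actual implementation):

```python
def bridge_two_digit_year(yy: int, pivot: int = 50) -> int:
    """Expand a two-digit year from an interface record to four digits.

    Uses a fixed windowing pivot (an assumed value, not a documented VBA
    parameter): values below the pivot map to 20xx, the rest to 19xx.
    """
    if not 0 <= yy <= 99:
        raise ValueError("two-digit year expected")
    return 2000 + yy if yy < pivot else 1900 + yy

def bridge_records(records):
    """One bridge pass over incoming partner records: rewrite the year
    field so downstream renovated systems receive unambiguous four-digit
    years, leaving all other fields untouched."""
    return [{**r, "year": bridge_two_digit_year(r["year"])} for r in records]

print(bridge_two_digit_year(99))  # 1999
print(bridge_two_digit_year(3))   # 2003
```

Windowing of this sort only defers the ambiguity rather than eliminating it, which is one reason agreements on an explicit four-digit data format with trading partners, as VBA pursued for most interfaces, were preferred over bridges.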
To address this problem, VBA contracted for additional Year 2000 support. The contractor was tasked with analyzing the software code using a software conversion tool, providing the tool’s recommendations to VBA analysts, and making renovations, once approved by VBA analysts. However, the Project Manager informed us that only 35 percent of the renovations could be made using the tool. The remaining 65 percent must be made by the analysts because they involve benefit calculations or other issues that the tool cannot handle. These same analysts must also review and test all Year 2000 changes, including those made by the contractor, as well as continue to work on legislative mandates, special projects, and other changes before 2000. In commenting on a draft of this report, VA stated that VBA’s compensation and pension online application is now 59 percent compliant, up from the 18 percent noted in our report. VBA’s Year 2000 Project Manager told us that the progress was due to use of a software development tool and that additional resources had been assigned to this project, as we recommended. The milestones for the compensation and pension online application include completion of renovation by September 4, 1998, and implementation by October 5, 1998. VBA is also having problems renovating the Beneficiary Identification and Record Locator Sub-System, a software application in its mission-critical administrative system. VBA hired a contractor to modernize and renovate the system to ensure Year 2000 compliance. The contractor, however, had difficulty meeting its deliverable dates. For example, it did not complete renovation in April 1998, as originally planned, or in June 1998, as rescheduled. According to VBA’s Year 2000 Project Manager, the contractor did not assign sufficient staff to the project and did not have the expertise to perform the work on time. 
She informed us that the contractor and VBA have agreed to a third schedule, with specific contract deliverables scheduled in July and August 1998. If these dates are not met, VBA plans to invoke a contingency plan to fix this system using in-house resources currently working on non-Year 2000 work, such as VBA’s common security project. Reassessing COTS Products VBA, along with other consumers, will have to reassess some of its COTS products. According to VBA’s Year 2000 Project Manager, one of its largest vendors, which initially informed VBA that its products were Year 2000 compliant, recently told VBA that some of its products were not compliant and announced that it was beginning to assess and test its product line for Year 2000 compliance. Because this vendor accounts for about 30 percent of VBA’s COTS products—including a database management system used by many VBA software developers and VBA’s electronic mail software, which is used to communicate between its regional offices, VBA headquarters, and the data centers—VBA will have to replace and/or upgrade these products. Because of this reassessment by one vendor, VBA now plans to review and reassess all of its COTS products. Developing Business Continuity and Contingency Plans In May 1997, we reported that VBA should develop contingency plans for its core mission-critical business processes and assess the impact of Year 2000 failures on its mission-critical program services. To assist agencies with this, we have prepared a guide that discusses the scope of the Year 2000 challenge and offers a step-by-step approach for reviewing an agency’s risks and threats as well as how to develop backup strategies to minimize these risks. The business continuity planning and contingency planning process outlined in the guide safeguards the agency’s ability to produce a minimally acceptable output if internal or external mission-critical information systems and services fail. 
The guide also states that an agency should develop business continuity and contingency plans for each core business process or program service and that the plans should provide a description of resources, staff roles, procedures, and timetables needed for implementation. VBA has generally not developed business continuity and contingency plans for its program services to ensure that they would continue to operate if Year 2000 failures occur. Only VBA’s Insurance Service has developed such a plan. The insurance plan addresses potential business impact, contingencies and alternatives, and triggers for activating contingencies and contains a high-level plan for the December 31, 1999, weekend. According to the VBA Year 2000 Project Manager, VBA is aware of the need for Year 2000 business continuity and contingency plans and has hired a contractor to develop a framework for developing plans to address all aspects of business operations, not just information systems. This framework is to be used to assist VBA in preparing a Year 2000 business continuity plan that would allow it to maintain a minimal acceptable level of service. VBA currently does not have a date for when this framework will be completed, but it plans to have business continuity and contingency plans completed for each of its program services by January 1, 1999. VHA Has Made Progress in Renovating Mission-Critical Systems, but Concerns Remain VHA has made progress in assessing and renovating its two mission-critical systems for Year 2000 compliance. Since our September testimony, it has reportedly completed assessment and 98 percent of renovation. However, despite this progress, concerns remain about the Year 2000 compliance status of VHA’s locally developed software applications, COTS products, facility systems, and biomedical devices. Also, VHA’s medical facilities have not completed Year 2000 business continuity and contingency plans. 
Until its local software applications, COTS products, facility systems, and biomedical devices are determined to be Year 2000 compliant, VHA lacks assurance that its delivery of medical care to veterans will not be delayed or interrupted at the turn of the century. Reported Progress Made in Assessing and Renovating Mission-Critical Systems VHA has progressed in assessing and renovating its mission-critical systems. It reportedly completed the assessment of the Veterans Health Information Systems Architecture (VISTA) and VHA corporate systems at the end of January 1998. VHA reported at that time that about half of the 147 VISTA software applications would have to be renovated, as would about 10 percent of the VHA corporate systems applications. It also completed assessment of related data interfaces and reported that 6 percent of these interfaces would have to be renovated. In addition, according to its June 1998 Year 2000 Status Report to VA, VHA renovated 100 percent of its VISTA software applications and 95 percent of the VHA corporate systems by the end of May 1998. Further, VHA reports that 93 percent of the VISTA and 90 percent of the VHA corporate software applications have been validated. For the VISTA software applications, validation includes (1) testing by programmers and systems development groups, (2) testing by medical facility staff, (3) quality assurance testing, and (4) final testing by the customer services unit prior to release. Testing for the VHA corporate systems is similar to the process for the VISTA applications. VHA also plans to conduct end-to-end testing along with VBA in July 1999. The VHA Year 2000 Project Manager told us that as an added precaution, VHA has decided to hire a contractor to conduct an independent verification and validation of VHA’s compliance process for mission-critical systems, especially its corporate systems. 
VHA will be developing the requirements and a statement of work for this effort and expects to award a contract by the end of September 1998. It also expects to have the independent verification and validation completed by the end of December 1998. Remaining Concerns About VHA's Year 2000 Program In our September 1997 testimony, we expressed concern that some of the medical facilities have customized the national VISTA software applications and/or purchased software add-ons to work with the national applications, and these have not been assessed for Year 2000 compliance. Further, we expressed concerns that VHA had not completed an inventory of its COTS products, facility-related systems and equipment, and biomedical devices. The medical facilities still have not completed their assessment of locally developed and/or modified VISTA applications. The most recent information shows that about 76 percent of these applications had been assessed as of May 31, 1998. Of the applications assessed, 16 percent are noncompliant. All medical facilities were scheduled to complete renovations by July 1998. According to the VHA Year 2000 Project Manager, the facilities did not meet this date. Recognizing that the medical facilities are behind schedule, VHA's Year 2000 Project Manager has told us that the medical facilities have been informed that if they cannot assess or renovate their locally developed or modified VISTA software applications in time (by a deadline that has yet to be specified), they will have to replace them with unmodified national versions. This appears to be a viable solution since VHA has reportedly renovated 100 percent of its VISTA applications, and 93 percent of these have been validated. A second area of concern relates to the Year 2000 compliance status of VHA's COTS software.
VHA has over 3,000 such products, supplied by nearly 1,000 vendors, for use in its offices and medical facilities nationwide. Its strategy for determining the Year 2000 compliance status of these products is to request the information from the manufacturers. VHA, however, has had difficulty in obtaining this information. Since November 1997, it has sent letters to 566 manufacturers requesting compliance information; as of June 1998, 260 responses had been received, leaving over 300 outstanding. The Year 2000 Project Manager informed us that they will continue to send follow-up letters to nonresponding manufacturers. The reliability of the information received from COTS vendors may also be an issue. For example, one large COTS software manufacturer that initially provided documentation stating that all its products were compliant, recently published “more detailed” information stating that some of its products were, in fact, not fully compliant. VHA is now reviewing this additional information to determine what needs to be done at the headquarters and field levels to ensure Year 2000 compliance. A third area of risk involves the Year 2000 compliance status of VHA’s facility-related systems and equipment, such as heating, ventilating, and air conditioning equipment. As with its Year 2000 strategy for COTS products, VHA relies on the manufacturers for compliance information on their products. It sent a letter to over 200 facility systems/equipment manufacturers requesting Year 2000 compliance information. In March 1998, VHA sent follow-up letters to 127 nonrespondents. To date, it has received responses from 93 of the over 200 manufacturers, and most of the 93 manufacturers have indicated that their systems are compliant. The slow response rate is a concern because of its potential impact on the ability of facilities to ensure continued delivery of essential services. 
For example, heating, ventilating and air conditioning equipment are utilized by hospitals to ensure that contaminated air is confined to a specified area of the facility, such as an isolation room or patient ward. If computer systems used to maintain these systems were to fail, any resulting climate fluctuations could adversely affect patient safety. The facility-related systems area is difficult to gauge because many of these systems involve a combination of items made by different manufacturers. Year 2000 compliance must be determined for each element. Accordingly, the manufacturer must contact all of its suppliers to determine whether each of the applicable parts of the facility system is compliant. This process can take a long time to complete, and VHA is concerned that it may not have sufficient lead time to replace the product or have it made compliant. The VHA Year 2000 Project Manager informed us that VHA will continue to send follow-up letters to nonresponding manufacturers, as well as work with the Year 2000 Subcommittee on Facilities of the federal government’s CIO Council Committee to increase manufacturer response. A fourth area of concern relates to the Year 2000 compliance status of VHA’s biomedical device inventory. Because VHA relies on the manufacturers for compliance information on their products, it has sent a series of letters to the manufacturers requesting information on the compliance status of their devices. About 70 percent of the manufacturers on its list of suppliers have responded with information on their devices; but some high-profile manufacturers have yet to respond. Until the remaining manufacturers respond, VHA will not know the extent to which its current biomedical device inventory is Year 2000 compliant and the cost associated with making its devices compliant. Also, the Food and Drug Administration (FDA) has sent letters to 16,000 biomedical equipment manufacturers. 
To date, about 10 percent of these manufacturers have responded. The detailed results of our review of VA’s and FDA’s Year 2000 biomedical device programs will be provided in a separate report. Lastly, although VHA medical facilities have prepared hospital contingency plans, as required by the Joint Commission on Accreditation of Healthcare Organizations, our review showed that these plans do not specifically address Year 2000-related failures. Because VHA medical facilities, like other health care facilities in the public and private sectors, are highly dependent upon information technology to carry on their business, Year 2000-induced failures of one or more mission-critical systems can have a severe impact on their ability to deliver patient care. This risk of failure is not limited to the medical facilities’ internal information systems, but includes information and data provided by their business partners—other federal, state, and local agencies as well as local and international private entities—and services provided by the public infrastructure, including power, water, transportation, and voice and data communications companies. VHA also relies on information provided by the manufacturers. VHA cannot, by itself, ensure that its systems-related software applications and/or products are Year 2000 compliant. Accordingly, it is critical that VHA develop business continuity and contingency plans to address the potential Year 2000 failures induced by its business partners and infrastructure service providers, especially those manufacturers who have not provided compliance information to VHA on their products. VHA’s Year 2000 Project Manager has acknowledged the need for Year 2000 business continuity and contingency plans. He told us that the Year 2000 Project Office has informed VISNs and medical facilities that they need to address Year 2000-induced failures in their business continuity and contingency plans. 
Furthermore, VHA also plans to develop a contingency planning guidebook that will assist the medical facilities in preparing Year 2000 business continuity and contingency plans to address all VHA systems and products that may affect patient safety. However, he did not know when the guidebook will be finalized and distributed to the medical facilities for implementation. Conclusions VBA has made progress in responding to the Year 2000 computer crisis. However, it faces risks in several areas. It has made limited progress in making two key mission-critical applications compliant. For example, computer analysts responsible for renovating the compensation and pension online application have been working on other information technology initiatives, including special projects. VBA also has to reassess some of the COTS products that the vendor previously stated were fully Year 2000 compliant. In addition to these risks, except for the Insurance Service, VBA has not developed business continuity and contingency plans for its program services to ensure that they would continue to operate if Year 2000 failures occur. Not adequately addressing these concerns could delay or interrupt benefits to veterans. VHA also has made progress on the Year 2000 problem. However, it also faces several remaining risks because it has not completed Year 2000 assessments of software applications developed or modified by its medical facilities. Furthermore, it still lacks a great deal of compliance information from manufacturers that provide it with COTS products, facility-related systems and equipment, and biomedical devices. These uncertainties are all the more worrisome in light of VHA’s lack of Year 2000 business continuity and contingency plans. Until these concerns are addressed, VHA lacks assurance that its delivery of medical care to veterans will not be delayed or interrupted by Year 2000 failures. 
Recommendations To reduce the likelihood of delayed or interrupted benefits, we recommend that the Secretary of Veterans Affairs, with support from VBA's Chief Information Officer (CIO), ensure that VBA: Reassesses its Year 2000 mission-critical efforts for the compensation and pension online application and the Beneficiary Identification and Record Locator Sub-System, as well as other information technology initiatives, such as special projects, to ensure that the Year 2000 efforts have adequate resources, including contract support, to achieve compliance in time. Establishes a milestone for the contractor-developed business continuity framework and subsequent critical dates for the preparation of business continuity and contingency plans for each core business process or program service so that mission-critical functions affecting benefits delivery can be carried out if software applications and COTS products fail. These plans should provide a description of resources, staff roles, procedures, and timetables needed for implementation. We also recommend that the Secretary, with support from the VHA CIO, ensure the rapid development of business continuity and contingency plans for each medical facility so that mission-critical functions affecting patient care can be carried out if software applications, COTS products, and/or facility-related systems and equipment do not function properly. These plans should address issues such as when to invoke alternative solutions and/or options if a manufacturer, on whom VHA depends for compliance information, does not submit any. The plans also should describe resources, staff roles, procedures, and timetables needed for implementation. Agency Comments and Our Evaluation In commenting on a draft of this report, VA concurred with all three of our recommendations.
VA stated that it is committed to ensuring that its benefit and health care services to veterans will not be adversely affected by the Year 2000 problem and that it has applied dedicated resources to address these issues. However, VA stated that although this report recognizes the progress it has made in mitigating Year 2000 problems, it does not believe that statements in the report adequately reflect its efforts. VA was concerned that statements in the report could be taken out of context and unnecessarily alarm veterans. We believe, however, that the report accurately reflects VA's Year 2000 efforts and our resulting concerns. The report raises concerns surrounding the renovation of two key VBA mission-critical applications, COTS software products, and contingency planning, as well as the Year 2000 compliance status of VHA's locally developed software applications, COTS products, facility-related systems, and biomedical devices. Failure to address these concerns may delay or interrupt the issuance of benefits and the provision of medical care to veterans. In addition, VA stated that it was concerned that the report portrayed issues, such as COTS products, facility-related systems, and biomedical equipment, as unique to VA. VA stated that, like any other consumer of these products, it is dependent upon manufacturers' disclosure of their Year 2000 compliance, and some manufacturers are reluctant to supply compliance information on their products despite VA's attempts to obtain it. We have revised the report to reflect that these issues are not unique to VA. Further, VA stated that since some manufacturers indicated that compliance information would not be available until late 1998, it would not spend what it termed unnecessary time developing continuity of business plans based on unrealistic assumptions.
VA also stated that the expectation of its having compliance plans in place today is unrealistic, considering that we issued the exposure draft of our Business Continuity and Contingency Planning Guide only in March 1998. We disagree with VA on these issues. First, our February 1997 exposure draft of the Year 2000 Assessment Guide called for agencies to develop realistic contingency plans to ensure the continuity of their core business processes. Second, our May 1997 report reiterated the need for agencies, including VA, to develop contingency plans for their major mission-critical business processes. Since our May 1997 report, only VA's Insurance Service has developed such a plan. Third, because VA is dependent upon information from service providers and/or manufacturers who have yet to report on the compliance status of their services and/or products, it is critical that VA develop business continuity and contingency plans to address these services and/or products in the event that VA does not receive the information on the date promised. Further, even if service providers and/or manufacturers provide assurance that their services and/or products are compliant, VA still needs to develop business continuity and contingency plans in the event that these services and/or products do not operate or do not function properly when processing data related to the Year 2000. Lastly, VA described actions it has taken and plans to take to implement our recommendations, and offered a number of technical suggestions on this report. These comments have been incorporated into the report as appropriate and are reprinted in appendix I. As agreed with your office, unless you publicly announce the contents of this report earlier, we will not distribute it until 30 days from its date.
At that time, we will send copies to the Ranking Minority Member of the Subcommittee on Oversight and Investigations, House Committee on Veterans' Affairs, and the Chairmen and Ranking Minority Members of the Subcommittee on Benefits, House Committee on Veterans' Affairs, and the Subcommittee on Health, House Committee on Veterans' Affairs. We will also provide copies to the Chairmen and Ranking Minority Members of the Senate and House Committees on Veterans' Affairs and the Senate and House Committees on Appropriations, the Secretary of Veterans Affairs, and the Director of the Office of Management and Budget. Copies will also be made available to others upon request. Please contact me at (202) 512-6253 or by e-mail at willemssenj.aimd@gao.gov if you have any questions concerning this report. Major contributors to this report are listed in appendix II.

Comments From the Department of Veterans Affairs

The following are GAO's comments on the Department of Veterans Affairs' letter dated July 31, 1998.

GAO Comments

1. Discussed in "Agency Comments and Our Evaluation" section of report.

2. Our report now refers to "core business processes" as used in our Business Continuity and Contingency Planning Guide.

3. As we stated in our report, the detailed results of our review of VA's and FDA's Year 2000 biomedical devices will be provided in a separate report.

4. We modified the report to reflect "limited progress" as communicated during our briefing before the Senate Committee on Veterans' Affairs staff.

5. Our report now refers to "limited progress" when discussing VBA's efforts in renovating two of its key mission-critical software applications—compensation and pension online, and the Beneficiary Identification and Record Locator Sub-System. Regarding COTS products, we have clarified the report to indicate that this problem is not unique to VBA.
We also replaced "announced" with "told" and added information to point out that a particular vendor is still assessing and testing its products for compliance.

6. We modified the report to delete the word "considerable." Our statement that VHA has not completed its assessment and that it does not know the full extent of the Year 2000 problem at its medical facilities is accurate and consistent with the findings in the report. For example, because VHA is relying on the manufacturers of COTS products, facility-related systems, and biomedical devices to determine the compliance status of their products, it is critical that VHA obtain this information from the manufacturers. It should also consider contacting by telephone, or meeting with, the manufacturers of COTS products and facility-related systems that have yet to provide VHA with compliance information. Moreover, given the uncertainties surrounding the compliance status of its local software applications, COTS products, facility-related systems, and biomedical devices, VHA needs to develop business continuity and contingency plans to ensure that core business processes can be carried out in the event that its systems and products do not function properly on and after January 1, 2000.

7. Report revised to reflect agency comments.

8. The report now states that our concerns relate to renovation of two key mission-critical applications.

9. Report revised to say "application" and reflect milestones for the compensation and pension online application.

10. We have added information to clarify the role of the contractor in renovating VBA's compensation and pension online application.

11. Report changed to reflect agency comments.

12. We added language explaining that, according to VBA's Year 2000 Project Manager, one of VBA's largest vendors told VBA that some of its products were not compliant and that it was beginning to assess and test its product line for Year 2000 compliance.

13.
As stated in VA's comments on our draft report, 73 (about half) of the 147 VISTA software applications require renovation to achieve compliance, as would about 10 percent of the VHA corporate systems applications.

14. We added information to explain VHA's plans to hire a contractor to conduct an independent verification and validation of VHA's compliance process for mission-critical systems, especially its corporate systems.

15. Added the percentage of noncompliant applications within the number of locally developed and/or modified VISTA applications assessed as of May 31, 1998.

16. Report modified to include "with the exception of the Insurance Service."

17. Report modified to include "Year 2000" in discussion of business continuity and contingency plans.

Major Contributors to This Report

Accounting and Information Management Division, Washington, D.C.
Helen Lew, Assistant Director
Tonia L. Johnson, Information Systems Analyst-in-Charge
J. Michael Resser, Business Process Analyst
Pursuant to a congressional request, GAO assessed the status of the Department of Veterans Affairs' (VA) corrective action to prevent computer system failures at the turn of the century, focusing on: (1) the Veterans Benefits Administration's (VBA) Year 2000 program; and (2) the Veterans Health Administration's (VHA) Year 2000 program. GAO noted that: (1) VBA has made progress in addressing the recommendations in GAO's May 1997 report and making its information systems year 2000 compliant; (2) it has changed its year 2000 strategy from developing new applications to fixing the current ones and established a year 2000 project office to oversee and coordinate all VBA year 2000 projects; (3) it has also reportedly renovated 75 percent of its mission-critical applications as of June 1998, and completed renovation of two specific mission-critical systems--vocational rehabilitation and insurance; (4) despite this progress, concerns remain; (5) for example, VBA has made limited progress in renovating two key mission-critical software applications: (a) compensation and pension online, which processes claims benefits and updates benefit information; and (b) the Beneficiary Identification and Record Locator Sub-System; (6) VBA also has to reassess its commercial-off-the-shelf (COTS) products because one of its largest vendors, which initially informed VBA that its products were year 2000 compliant, recently informed VBA that some of its products were not compliant and that others were being assessed and tested; (7) this problem is not unique to VBA--it applies to all consumers of these products; (8) except for its Insurance Service, VBA has not developed year 2000 business continuity and contingency plans for its core business processes; (9) these issues could affect the timely processing of benefits to veterans and their dependents; (10) VHA has also made progress in addressing the year 2000 problem; (11) since September 1997, it has reported having assessed all and 
renovated the vast majority of its mission-critical information systems and having completed 98 percent of its renovation by June 1998; (12) however, concerns also remain; (13) for example, VHA does not know the full extent of its year 2000 problem because it has not yet completed its assessment of: (a) locally developed software applications or customized versions of national applications used by its medical facilities; (b) COTS products; (c) facility systems; and (d) biomedical devices; (14) VHA's efforts on several of these issues are complicated by the fact that it, like other consumers of these products, has to receive compliance information from the manufacturers, some of which have been slow to respond to VHA's requests for compliance information; (15) like VBA, VHA has not developed year 2000 business continuity and contingency plans; and (16) failure to adequately address these issues could result in disruptions in patient care at VHA medical facilities.
Background

DHS's mission is to lead the unified national effort to secure America by preventing and deterring terrorist attacks and protecting against and responding to threats and hazards to the nation. DHS also is to ensure safe and secure borders, welcome lawful immigrants and visitors, and promote the free flow of commerce. Created in March 2003, DHS has assumed operational control of about 209,000 civilian and military positions from 22 agencies and offices specializing in one or more aspects of homeland security. The intent behind DHS's merger and transformation was to improve coordination, communication, and information sharing among the multiple federal agencies responsible for protecting the homeland. Not since the creation of the Department of Defense in 1947 has the federal government undertaken a transformation of this magnitude. As we reported before the department was created, such a transformation is critically important and poses significant management and leadership challenges. For these reasons, we designated the implementation of the department and its transformation as high risk; we also pointed out that failure to effectively address DHS's management challenges and program risks could have serious consequences for our national security. Among DHS's transformation challenges, we highlighted the formidable hurdle of integrating numerous mission-critical and mission support systems and associated IT infrastructure. For the department to overcome this hurdle, we emphasized the need for DHS to establish an effective IT governance framework, including controls aimed at effectively managing IT-related people, processes, and tools.

DHS Components and IT Spending

To accomplish its mission, the department is organized into various components, each of which is responsible for specific homeland security missions and for coordinating related efforts with its sibling components, as well as external entities.
Table 1 shows DHS’s principal organizations and their missions. An organizational structure is shown in figure 1. Within the Management Directorate is the Office of the CIO, which is expected to leverage best available technologies and IT management practices, provide shared services, coordinate acquisition strategies, maintain an enterprise architecture that is fully integrated with other management processes, and advocate and enable business transformation. Other DHS entities also are responsible or share responsibility for critical IT management activities. For example, DHS’s major organizational components (e.g., directorates, offices, and agencies) have their own CIOs and IT organizations. Control over the department’s IT funding is vested primarily with the components’ CIOs, who are accountable to the heads of their respective components. To promote IT coordination across DHS component boundaries, the DHS CIO established a CIO Council, chaired by the CIO and composed of component-level CIOs. According to its charter, the specific functions of the council include establishing a strategic plan, setting priorities for departmentwide IT, identifying opportunities for sharing resources, coordinating multibureau projects and programs, and consolidating activities. To accomplish their respective missions, DHS and its component organizations rely extensively on IT. For example, in fiscal year 2006 DHS IT funding totaled about $3.64 billion, and in fiscal year 2007 DHS has requested about $4.16 billion. For fiscal year 2006, DHS reported that this funding supported 279 major IT programs. Table 2 shows the fiscal year 2006 IT funding that was provided to key DHS components. 
GAO Has Reviewed Several of DHS's Mission-Critical IT Programs

In view of the importance of major IT programs to the department's mission, the Congress has taken a close interest in certain mission-critical programs, often directing us to review and evaluate program management, progress, and spending. Among the programs that we have reviewed are the following:

● US-VISIT (the United States Visitor and Immigrant Status Indicator Technology) has several major goals: to enhance the security of our citizens and visitors and ensure the integrity of the U.S. immigration system, and at the same time to facilitate legitimate trade and travel and protect privacy. To achieve these goals, US-VISIT is to record the entry into and exit from the United States of selected travelers, verify their identity, and determine their compliance with the terms of their admission and stay. As of October 2005, US-VISIT officials reported that about $1.4 billion had been appropriated for the program.

● The Automated Commercial Environment (ACE) is a Customs and Border Protection (CBP) program to modernize trade processing systems and support border security. Its goals include enhancing analysis and information sharing with other government agencies; providing an integrated, fully automated information system for commercial import and export data; and reducing costs for the government and the trade community through streamlining. To date, CBP reports that the program has received almost $1.7 billion in funding.

● The America's Shield Initiative (ASI) program (now cancelled) was to enhance DHS's ability to provide surveillance and protection of the U.S. northern and southern borders through a system of sensors, databases, and cameras. The program was also to address known limitations of the current Integrated Surveillance Intelligence System (ISIS) and to support DHS's antiterrorism mission, including its need to exchange information with state, local, and federal law enforcement organizations.
As of September 2005, ASI officials reported that about $340.3 million had been spent on the program. As of December 2005, the program was subsumed within the Secure Border Initiative, the department's broader border and interior enforcement strategy.

● The Secure Flight program is developing a system to perform passenger prescreening for domestic flights: that is, the matching of passenger information against terrorist watch lists to identify persons who should undergo additional security scrutiny. The goal is to prevent people suspected of posing a threat to aviation from boarding commercial aircraft in the United States, while protecting passengers' privacy and civil liberties. The program also aims to reduce the number of people unnecessarily selected for secondary screening. To date, TSA officials report that about $144 million has been spent on the program.

● The Atlas program is intended to modernize the IT infrastructure of Immigration and Customs Enforcement (ICE). The goals of the program are to, among other things, improve information sharing, strengthen information security, and improve workforce productivity. ICE estimates the life cycle cost of Atlas to be roughly $1 billion.

● The Student and Exchange Visitor Information System (SEVIS) is an Internet-based system that is to collect and record information on foreign students, exchange visitors, and their dependents—before they enter the United States, when they enter, and during their stay. Through fiscal year 2006, the department expects to have spent, in total, about $133.5 million on this program.

● The Rescue 21 program is to replace and modernize the Coast Guard's 30-year-old search and rescue communication system, the National Distress and Response System.
The modernization is to, among other things, increase the Coast Guard's communication coverage area in the United States; allow electronic tracking of department vessels and other mobile assets; enable better communication with other federal and state systems; and provide for secure communication of sensitive information. The Coast Guard reports that it plans to spend about $373.1 million on the program by the end of fiscal year 2006. It also estimates the program's life cycle cost to be $710 million.

IT Management Controls and Capabilities Are Important

Our research on leading private and public sector organizations, as well as our past work at federal departments and agencies, shows that successful organizations embrace the central role of IT as an enabler for enterprisewide transformation. These leading organizations develop and implement institutional or agencywide IT management controls and capabilities (people, processes, and tools) that help ensure that the vast potential of technology is applied effectively to achieve desired mission outcomes. Among these IT management controls and capabilities are
● enterprise architecture development and use,
● IT investment management,
● system development and acquisition management,
● information security management, and
● IT human capital management.
In addition, these organizations establish these controls and capabilities within a governance structure that centralizes leadership in an empowered CIO. These controls and capabilities are interdependent IT management disciplines, as shown in figure 2. If effectively established and implemented, they can go a long way in determining how successfully an organization leverages IT to achieve mission goals and outcomes.
DHS Is Making Progress but Has Yet to Fully Institutionalize IT Management Controls and Capabilities

Over the last 3 years, our work has shown that the department has continued to work to establish effective corporate governance and associated IT management controls and capabilities, but progress in each of the key areas has been uneven, and more remains to be accomplished. Until it fully institutionalizes effective governance controls and capabilities, it will be challenged in its ability to leverage IT to support transformation and mission results.

Enterprise Architecture

Leading organizations recognize the importance of having and using an enterprise architecture, or corporate blueprint, as an authoritative operational and technical frame of reference to guide and constrain IT investments. In brief, an enterprise architecture provides systematic structural descriptions—in useful models, diagrams, tables, and narrative—of how a given entity operates today and how it plans to operate in the future, and it includes a road map for transitioning from today to tomorrow. Our experience with federal agencies has shown that attempting to modernize systems without having an enterprise architecture often results in systems that are duplicative, not well integrated, unnecessarily costly to maintain, and limited in terms of optimizing mission performance. To assist agencies in effectively developing, maintaining, and implementing an enterprise architecture, we published a framework for architecture management, grounded in federal guidance and recognized best practices. The underpinning of this framework is a five-stage maturity framework outlining steps toward achieving a stable and mature enterprise architecture program. The framework describes 31 practices or conditions, referred to as core elements, that are needed for effective architecture management. We have previously reported on DHS's effort to develop its enterprise architecture from two perspectives.
First, in November 2003, we reported on DHS’s architecture management program relative to the framework described above. At that time, we found that the department had implemented many of the practices described in our framework. For example, the department had, among other things, assigned architecture development, maintenance, program management, and approval responsibilities; created policies governing architecture development and maintenance; and formulated plans to develop architecture products and begun developing them. Second, in August 2004, we reported on DHS’s effort to develop enterprise architecture products, relative to well-established, publicly available criteria on the content of enterprise architectures. At that time, we concluded that the department’s initial enterprise architecture provided a foundation upon which to build, but that it was nevertheless missing important content that limited its utility. Thus, it could not be considered a well-defined architecture. In particular, the content of this initial version was not systematically derived from a DHS or national corporate business strategy; rather, it was more the result of an amalgamation of the existing architectures that several of DHS’s predecessor agencies already had, along with their respective portfolios of system investment projects. To its credit, the department recognized the limitations of the initial architecture and has developed a new version. To assist DHS in evolving its architecture, we recommended 41 actions aimed at having DHS add needed architecture content and ensure that architecture development best practices are employed. Since then, DHS reported that it had taken steps in response to our recommendations. For example, the department issued version 2 of its enterprise architecture in October 2004. According to DHS, this version contained additional business/mission, service, and technical descriptions. 
Also, this version was submitted to a group of CIOs of major corporations and an enterprise architecture consulting firm, both of which found the architecture meritorious. Earlier this month (March 2006), the department issued another new version of its enterprise architecture, which it calls HLS EA 2006. Our analysis of version 2 of the department's architecture indicates that DHS has made progress toward development of its architecture products, particularly descriptions of both the "as-is" and "to-be" environments. Specifically, the scope of the "as-is" and "to-be" environments extends to descriptions of business operations, information and data needs and definitions, application and service delivery vehicles, and technology profiles and standards. With respect to the depth and detail of these descriptions (which are the focus of most of our 41 prior recommendations), the department has reported progress, such as (1) completing its first inventory of information technology systems, a key input to its description of the "as-is" environment; (2) establishing departmentwide technology standards; (3) developing and beginning to implement a plan for introducing a shared services orientation to the architecture, particularly with regard to information services (e.g., network, data center, e-mail, help desk, and video operations); and (4) finalizing content for the portion of its architecture that relates to certain border security functions (e.g., the alien detention and removal process that is a major facet of the department's new Secure Border Initiative).

IT Investment Management

Through IT investment management, organizations define and follow a corporate process to help senior leadership make informed decisions on competing options for investing in IT. Such investments, if managed effectively, can have a dramatic impact on performance and accountability.
If mismanaged, they can result in wasteful spending and lost opportunities for improving delivery of services. Based on our research, we have issued an IT investment management framework that encompasses the best practices of successful public and private sector organizations, including investment selection and control policies and procedures. Our framework identifies, among other things, effective policies and procedures for developing and using an enterprisewide collection—or portfolio—of investments; using such portfolios enables an organization to determine priorities and make decisions among competing options across investment categories based on analyses of the relative organizational value and risks of all investments. A central tenet of the federal approach to IT investment management is the select/control/evaluate model. During the select phase, the organization (1) identifies and analyzes each project's risks and returns before committing significant funds and (2) selects those projects that will best support its mission needs. In the control phase, the organization ensures that the project continues to meet mission needs at the expected levels of cost and risks. If the project is not meeting expectations or if problems have arisen, steps are quickly taken to address the deficiencies. During the evaluate phase, actual versus expected results are compared after a project has been fully implemented. In August 2004, we reported that DHS had established an investment management process that included departmental oversight of major IT programs. However, this process was not yet institutionalized: for example, most programs (about 75 percent) had not undergone the departmental oversight process, and resources were limited for completing control reviews in a timely manner. At that time, the CIO and other DHS officials attributed these shortfalls, in part, to the fact that the department's process was maturing and needed to improve.
Based on our findings, we made recommendations aimed at strengthening the process. In March 2005, we again reported on this investment review process, noting that it incorporated many best practices and provided its senior leaders with the information required to make well-informed investment decisions at key points in the investment life cycle. However, we also concluded that at some key investment decision points, DHS's process did not require senior management attention and oversight. For example, management reviews are not required at key system and subsystem decision points, although such reviews (especially with complex systems that incorporate new technology like US-VISIT) are critical to ensuring that risk is reduced before the organization commits to the next phase of investment. Accordingly, we made further recommendations to improve the process. Further, the CIO recently reported additional steps being taken to strengthen IT investment management. According to the CIO, DHS has
● established an acquisition project performance reporting system, which requires periodic reporting of cost, schedule, and performance measures as well as earned value metrics, as means to monitor and control major acquisitions;
● aligned the investment management cycle and associated milestones with the department's annual budget preparation process to allow business cases for major investments to be submitted to department headquarters at the same time as the budget, rather than as a follow-on;
● linked investment management systems to standardize and make consistent the financial data used to make investment decisions;
● verified alignment of approximately $2 billion worth of investments via the department's portfolio management framework; and
● completed investment oversight reviews (by total dollar value) of over 75 percent of the department's major investments.
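The select/control/evaluate model that underpins these reviews can be sketched as a simple gating loop. The project fields, scores, and the 10 percent variance threshold below are invented for illustration and are not DHS's actual criteria.

```python
# Minimal sketch of the federal select/control/evaluate investment model.
# All scoring fields and thresholds are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    expected_value: float   # relative mission value, 0-1 (assumed scale)
    risk: float             # assessed risk, 0-1 (assumed scale)
    cost_variance: float    # actual vs. planned cost, e.g. 0.20 = 20% over

def select(portfolio: list, capacity: int) -> list:
    # Select phase: analyze value and risk before committing funds, then
    # fund only as many projects as the portfolio can support.
    ranked = sorted(portfolio, key=lambda p: p.expected_value - p.risk,
                    reverse=True)
    return ranked[:capacity]

def control(project: Project, variance_limit: float = 0.10) -> str:
    # Control phase: flag projects drifting beyond cost expectations so
    # corrective steps can be taken quickly.
    if project.cost_variance > variance_limit:
        return "corrective action"
    return "continue"

def evaluate(expected: float, actual: float) -> float:
    # Evaluate phase: compare actual versus expected results after full
    # implementation; the gap informs future selections.
    return actual - expected

candidates = [Project("A", 0.9, 0.3, 0.05), Project("B", 0.6, 0.5, 0.20)]
funded = select(candidates, capacity=1)
print([p.name for p in funded])                   # ['A']
print(control(Project("B", 0.6, 0.5, 0.20)))      # corrective action
```

The point of the sketch is the gating structure, not the scoring formula: real investment boards weigh many more factors, but each phase still acts as a decision gate before further funds are committed.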
The department has also developed a standard template for capturing information about a given IT program to be used in determining the investment's alignment with the enterprise architecture. Such alignment is important because it ensures that programs will be defined, designed, and developed in a way that avoids duplication and promotes interoperability and integration. However, the department has yet to document a methodology, with explicit criteria, for making its judgments about the degree of alignment. Instead, it relies on the undocumented and subjective determinations of individuals in its Enterprise Architecture Center of Excellence.

Systems Development and Acquisition Management

Managing systems development and acquisition effectively requires applying engineering and acquisition discipline and rigor when defining, designing, developing and acquiring, testing, deploying, and maintaining IT systems and services. Our work and other best practice research have shown that applying such rigorous management practices improves the likelihood of delivering expected capabilities on time and within budget. In other words, the quality of IT systems and services is largely governed by the quality of the management processes involved in developing and acquiring them. Best practices in systems development and acquisition include following a disciplined life cycle management process, in which key activities and phases of the project are conducted in a logical and orderly process and are fully documented. Such a life cycle process begins with initial concept definition and continues through requirements determination to design, development, various phases of testing, implementation, and maintenance.
For example, expected system capabilities should be defined in terms of requirements for functionality (what the system is to do), performance (how well the system is to execute functions), data (what data are needed by what functions, when, and in what form), interface (what interactions with related and dependent systems are needed), and security. Further, system requirements should be unambiguous, consistent with one another, linked (that is, traceable from one source level to another), verifiable, understood by stakeholders, and fully documented. The steps in the life cycle process each have important purposes, and they have inherent dependencies among themselves. Thus, if earlier steps are omitted or deficient, later steps will be affected, resulting in costly and time-consuming rework. For example, a system can be effectively tested to determine whether it meets requirements only if these requirements have already been completely and correctly defined. Concurrent, incomplete, and omitted activities in life cycle management exacerbate the program risks. Life cycle management weaknesses become even more critical as the program continues, because the size and complexity of the program will likely only increase, and the later problems are found, the harder and more costly they will likely be to fix. These steps, practices, and processes are embedded in an effective systems development life cycle (SDLC) methodology, which sets forth the multistep process of developing information systems from investigation of initial requirements through analysis, design, implementation, maintenance, and disposal. Organizations generally formalize their SDLC in policies, procedures, and guidance. Currently, many of the major DHS components are following the processes established under their predecessor organizations. For example, both the Transportation Security Administration and CBP have their own SDLCs.
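The traceability and verifiability properties described above can be checked mechanically once requirements are recorded in a structured form. A minimal sketch in Python, using hypothetical requirement identifiers and test-case names (none drawn from an actual DHS program), might look like:

```python
# Minimal sketch of a mechanical requirements-traceability check.
# Requirement IDs, parent links, and test-case names are hypothetical.

requirements = {
    "HLR-1":   {"parent": None,    "tests": []},        # high-level requirement
    "LLR-1.1": {"parent": "HLR-1", "tests": ["TC-5"]},  # traceable and verifiable
    "LLR-1.2": {"parent": "HLR-1", "tests": []},        # traceable but not yet verifiable
    "LLR-2.1": {"parent": "HLR-9", "tests": ["TC-7"]},  # dangling parent link
}

def trace_gaps(reqs):
    """Flag lower-level requirements that break traceability or verifiability."""
    gaps = []
    for rid, req in reqs.items():
        if req["parent"] is not None and req["parent"] not in reqs:
            gaps.append((rid, "parent requirement not found"))
        if req["parent"] is not None and not req["tests"]:
            gaps.append((rid, "no test case verifies it"))
    return gaps

for rid, problem in trace_gaps(requirements):
    print(f"{rid}: {problem}")
```

Automating checks of this kind is one way an SDLC can enforce the dependency noted above: that testing is meaningful only when the requirements it verifies have first been completely and correctly defined.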
As part of our reviews of DHS IT management and specific IT programs, we have not raised any issues or identified any shortcomings with these SDLCs. DHS is currently drafting policies and procedures to establish a departmentwide SDLC methodology and thus provide a common management approach to systems development and acquisition. According to DHS, the goals of the SDLC are to help

● align projects to mission and business needs and requirements;

● incorporate accepted industry and government standards, best practices, and disciplined engineering methods, including IT maturity model concepts;

● ensure that formal reviews and approvals required by the process are consistent with DHS’s investment management process; and

● institute disciplined life cycle management practices, including planning and evaluation in each phase of the information system cycle.

The department’s SDLC, currently in draft form, is to apply to DHS’s IT portfolio as well as other capital asset acquisitions. Under the SDLC, each program will be expected to, among other things,

● follow disciplined project planning and management processes balanced by effective management controls;

● have a comprehensive project management plan;

● base project plans on user requirements that are complete, testable, and traceable to the work products produced; and

● integrate information security activities throughout the SDLC.

Information Security Management

Effective information security management depends on establishing a comprehensive program to protect the information and information systems that support an organization’s operations and assets. The overall framework for ensuring the effectiveness of federal information security controls is provided by the Federal Information Security Management Act of 2002. In addition, OMB Circular No.
A-130 requires agencies to provide information and systems with protection that is commensurate with the risk and magnitude of the harm that would result from unauthorized access to these assets or their loss, misuse, or modification. Because of continuing evidence indicating significant, pervasive weaknesses in the controls over computerized federal operations, we have designated information security as a governmentwide high-risk issue since 1997. Moreover, related risks continue to escalate, in part because the government is increasingly relying on the Internet and on commercially available IT products. Concerns are increasing regarding attacks for the purpose of crime, terrorism, foreign intelligence gathering, and acts of war, as well as by the disgruntled insider, who may not need particular expertise to gain unrestricted access and inflict damage or steal assets. Without an effective security management program, an organization has no assurance that it can withstand these and other threats. Since it was established, both we and the department’s inspector general (IG) have reported that although the department continues to improve its IT security, it remains a major management challenge. For example, within its first year the department had appointed a chief information security officer and developed and disseminated information system security policies and procedures, but it had not completed a comprehensive inventory of its major IT systems—a prerequisite for effective security management. In June 2005, we reported that DHS had yet to effectively implement a comprehensive, departmentwide information security program to protect the information and information systems that support its operations and assets.
In particular, although it had developed and documented departmental policies and procedures that could provide a framework for implementing such a program, certain departmental components had not yet fully implemented key information security practices and controls. Examples of weaknesses in components’ implementation included incomplete or missing elements in risk assessments, security plans, and remedial action plans, as well as incomplete, nonexistent, or untested continuity of operations plans. To address these weaknesses, we made recommendations aimed at ensuring that DHS fully implements the key information security practices and controls. More recently, the DHS IG reported that DHS’s components have not completely aligned their respective information security programs with DHS’s overall policies, procedures, and practices. However, the IG also reported progress. According to the IG, DHS completed actions to eliminate two obstacles that had significantly impeded the department in establishing its security program: First, it completed the comprehensive system inventory mentioned earlier, including major applications and general support systems for all DHS components. Second, it implemented a departmentwide tool that incorporates the guidance required to adequately complete security certification and accreditation for all systems. The IG also reported that the CIO had developed a plan to accredit all systems by September 2006. The DHS CIO testified earlier this month (March 2006) on progress in implementing the department’s certification and accreditation plan, stating that the department is well on its way to achieving its September 2006 target for full system accreditation. The CIO also stated that by the end of February 2006, more than 60 percent of the over 700 systems in its inventory were fully accredited, up from about 26 percent 5 months earlier.
IT Human Capital Management

A strategic approach to human capital management includes viewing people as assets whose value to an organization can be enhanced by investing in them, and thus increasing both their value and the performance capacity of the organization. Based on our experience with leading organizations, we issued a model encompassing strategic human capital management, in which strategic human capital planning was one cornerstone. Strategic human capital planning enables organizations to remain aware of and be prepared for current and future needs as an organization, ensuring that they have the knowledge, skills, and abilities needed to pursue their missions. We have also issued a set of key practices for effective strategic human capital planning. These practices are generic, applying to any organization or component, such as an agency’s IT organization. They include

● involving top management, employees, and other stakeholders in developing, communicating, and implementing a strategic workforce plan;

● determining the critical skills and competencies needed to achieve current and future programmatic results;

● developing strategies tailored to address gaps between the current workforce and future needs;

● building the capability to support workforce strategies; and

● monitoring and evaluating an agency’s progress toward its human capital goals and the contribution that human capital results have made to achieving programmatic goals.

In June 2004, we reported that DHS had begun strategic planning for IT human capital at the headquarters level, but it had not yet systematically gathered baseline data about its existing workforce. Moreover, the DHS CIO expressed concern over staffing and acknowledged that progress in this area had been slow. In our report, we recommended that the department analyze whether it had appropriately allocated and deployed IT staff with the relevant skills to obtain its institutional and program-related goals.
In response, DHS stated that on July 30, 2004, the CIO approved funding for an IT human capital Center of Excellence. This center was tasked with delivering plans, processes, and procedures to execute an IT human capital strategy and to conduct an analysis of the skill sets of DHS IT professionals. Since that time, DHS has undertaken a departmentwide human capital initiative, MAXHR, which is to provide greater flexibility and accountability in the way employees are paid, developed, evaluated, afforded due process, and represented by labor organizations. Part of this initiative involves the development of departmentwide workforce competencies. According to the DHS IG, the department intended to implement MAXHR in the summer of 2005, but federal district court decisions have delayed the department’s plans. However, the IG stated that the classification, pay, and performance management provisions of the new program are moving forward, with implementation of the new performance management system beginning in October 2005. According to the IG, the new pay system is planned for implementation by January 2007 for some DHS components.

CIO Leadership

According to our research on leading private and public sector organizations and experience at federal agencies, leading organizations adopt and use an enterprisewide approach to IT governance under the leadership of a CIO or comparable senior executive, who has responsibility and authority, including budgetary and spending control, for IT across the entity. In May 2004, we reported that the DHS CIO did not have authority and control over departmentwide IT spending. Control over the department’s IT budget was vested primarily with the CIO organizations within each DHS component, and the components’ CIO organizations were accountable to the heads of the components. As a result, DHS’s CIO did not have authority to manage IT assets across the department.
Accordingly, we recommended that the Secretary examine the sufficiency of spending authority vested in the CIO and take appropriate steps to correct any limitations in authority that constrain the CIO’s ability to effectively integrate IT investments in support of departmentwide mission goals. Since then, the DHS IG has reported that the DHS CIO is not well positioned to accomplish IT integration objectives. According to the IG, despite federal laws and requirements, the CIO is not a member of the senior management team with authority to strategically manage departmentwide technology assets and programs. The IG reported that steps were taken to formalize reporting relationships between the DHS CIO and the CIOs of major component organizations, but that the CIO still does not have sufficient staff resources to assist in carrying out the planning, policy formation, and other IT management activities needed to support departmental units. The IG expressed the view that although the CIO currently participates as an integral member at each level of the investment review process, the department would benefit from following the successful examples of other federal agencies in positioning their CIOs with the authority and influence needed to guide executive decisions on departmentwide IT investments and strategies. In response to the IG’s comments, the DHS CIO stated that his office is properly positioned and has the authority it needs to accomplish its mission. According to the CIO, the office is the principal IT authority to the Secretary and Deputy Secretary, and it will continue to hold that leadership role within the department.

DHS Is Making Some Progress in Implementing IT Systems and Infrastructure

A gauge of DHS’s progress in managing its IT investments is the extent to which it has deployed and is currently operating more modern IT systems and infrastructure.
To the department’s credit, our reviews have shown progress in these areas, and DHS has reported other progress. However, our reviews have also shown that IT programs have not met stated goals for deployed capabilities, and DHS’s own reporting shows that infrastructure goals have yet to be fully met. To expedite the implementation of IT systems, the department has developed and deployed system capabilities incrementally, which we support, as this is a best practice and consistent with our recommendations. For example, the department has successfully delivered visitor entry identification and screening capabilities with the first three increments of its US-VISIT program, and it is currently implementing release four of its ACE program. At the same time, however, US-VISIT exit capabilities are not in place, and release four of ACE does not include needed functionality. Further, some IT programs that either were or have been under way for years have not delivered any functionality, such as the canceled ASI program and the Secure Flight program. In addition, the department has recently reported a number of accomplishments relative to IT infrastructure; however, what has been reported also shows that much remains to be accomplished before infrastructure-related efforts produce deployed and operational capabilities. For example, the department reports that it has begun its Infrastructure Transformation Program (ITP), which is its approach to moving to a consolidated, integrated, and services-oriented IT infrastructure. According to the department, the CIO developed and has begun implementing the ITP plan, which is to be centrally managed but executed in a distributed manner, with various DHS components taking the lead for different areas of infrastructure transformation.
The ITP is to create a highly secure and survivable communications network (OneNet) for Sensitive but Unclassified data across the department, and it is also to establish a common and reliable e-mail system across the department. The department reported that it had deployed the initial core of the DHS OneNet and built the primary Network Operation Center to monitor OneNet performance. Among the other goals of the program are consolidated data centers to reduce costs and provide a highly survivable and reliable computing environment. In this regard, the department reported that it has now established an interim data center. In addition, the department stated that it has extended its classified networking capabilities by fielding 56 Secret sites on the department’s Homeland Secure Data Network and by completing the connection of this network to SIPRNet (the Defense Department’s Secret Internet Protocol Routed Network). DHS also reported that it has established an Integrated Wireless Program Plan, which provides a program management framework to ensure the on-time cost and schedule performance of wireless programs and projects.

Key IT Programs Reflect Mixed Use of Effective IT Management Practices

A key measure of how well an organization is managing IT is the degree to which its IT-dependent programs actually implement corporate management controls and employ associated best practices. In this regard, our reviews of several nonfinancial DHS IT programs provide examples of both strengths and weaknesses in program management. In summary, they show that DHS IT programs are not being managed consistently: some programs are at least partially implementing certain program management best practices, but others are largely disregarding most of the practices. Further, they show that most of the programs are considerably challenged in certain key areas, such as measuring progress and performance against program commitments and establishing human capital capabilities.
IT investment alignment with the enterprise architecture. An important element of enterprise architecture management is ensuring that IT investments comply with the architecture. However, in several of the programs that we have reviewed, investments have been approved without documented analysis to support these judgments and to permit the judgments to be independently verified. For example, DHS approved the ACE program’s alignment with the department’s architecture on the recommendation of its Enterprise Architecture Center of Excellence and Enterprise Architecture Board. However, the Center’s evaluators did not provide a documented analysis that would allow independent verification. According to DHS officials, they do not have a documented methodology for evaluating programs’ architecture compliance, and instead rely on the professional expertise of Center staff. In contrast, the ASI program provides an example of an instance in which the reviews required to ensure architecture alignment resulted in the discovery of a significant problem: the program had not adequately defined its relationships and dependencies with other department programs. As a result, the program was reconsidered and later subsumed within the new Secure Border Initiative, the department’s broader strategy for border and interior enforcement. Reliable cost estimates. Reliable cost estimates are prerequisites both for developing an economic justification for a program and for establishing cost baselines against which to measure progress. DHS IT programs that we reviewed have demonstrated mixed results in this regard. For example, the ACE program has made considerable progress in implementing our recommendation to ensure that its development contractor’s cost estimates are reconciled with independent cost estimates, and that the derivation of both estimates is consistent with published best practices. However, cost estimating remains a major challenge for other DHS IT programs. 
For example, Secure Flight did not have cost estimates for either initial or full operating capability, nor did it have a life-cycle cost estimate (estimated costs over the expected life of a program, including direct and indirect costs and costs of operation and maintenance). Also, for the US-VISIT program’s analysis of proposed alternatives for monitoring the exit of travelers, cost estimates did not meet key criteria for reliable cost estimating as established in the published best practices mentioned above. For example, they did not include detailed work breakdown structures defining the work to be performed, so that associated costs could be identified and estimated. Such a work breakdown structure provides a reliable basis for ensuring that estimates include all relevant costs. Without reasonable cost estimates, it is not possible to produce an adequate economic justification for choosing among alternatives, and program performance cannot be adequately measured. Earned value management. To help ensure that reliable processes are used to measure progress against cost and schedule commitments, OMB requires agencies to manage and measure major IT projects through use of an earned value management (EVM) system that is compliant with specified standards. On programs we reviewed, however, the use of EVM was as yet limited. For example, although the ACE program had instituted the use of EVM on recent releases, its use for one release was suspended in June 2005, because staff assigned to the release were unfamiliar with the technique. For another release, EVM was not used because, according to program officials, the release had not established the necessary cost and schedule baseline estimates against which earned value could be measured. ACE officials told us that they plan to establish baselines and use EVM for future work.
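The earned value technique compares three quantities at each reporting point: planned value (the budgeted cost of work scheduled), earned value (the budgeted cost of work actually performed), and actual cost. The standard variances and indices follow directly, as this brief Python sketch shows (the dollar figures are invented, not drawn from any DHS program):

```python
def evm_metrics(pv, ev, ac):
    """Standard earned value variances and indices.

    pv: planned value  (budgeted cost of work scheduled)
    ev: earned value   (budgeted cost of work performed)
    ac: actual cost    (actual cost of work performed)
    """
    return {
        "cost_variance": ev - ac,       # negative means over budget
        "schedule_variance": ev - pv,   # negative means behind schedule
        "cpi": ev / ac,                 # cost performance index; below 1.0 is unfavorable
        "spi": ev / pv,                 # schedule performance index; below 1.0 is unfavorable
    }

# Illustrative figures only: $1.0M of work planned, $0.8M earned, $0.9M spent.
m = evm_metrics(pv=1_000_000, ev=800_000, ac=900_000)
print(m["cost_variance"], m["schedule_variance"])  # -100000 -200000
```

This is why the cost and schedule baselines mentioned above are a precondition: without a planned value baseline, earned value has nothing to be measured against, and the early-warning indices cannot be computed.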
With regard to the US-VISIT program, although EVM is to be used in managing the prime integration contract, it has not been used in a number of US-VISIT-related contracts over the last 3 years. According to DHS, in fiscal year 2005, 30 percent of departmental programs were using EVM. Performance management and accountability. To ensure that programs manage their performance effectively, it is important that they define and measure progress against program commitments and hold themselves accountable for results. These program commitments include expected or estimated (1) capabilities and associated use and quality; (2) benefits and mission value; (3) costs; and (4) milestones and schedules. To be accountable, projects need first to develop and maintain reliable and current expectations and then to define and select metrics to measure progress against these. However, in our reviews of DHS programs (such as those that are required to prepare expenditure plans for Senate and House appropriations subcommittees before obligating funding), we have reported that program performance and accountability have been a challenge. For example, the fiscal year 2004 expenditure plan for the Atlas program did not provide sufficient information on program commitments to allow the Congress to perform effective oversight. On the other hand, although the ACE program office is still not where it needs to be in this regard, it has made progress in this area: it has now prepared an initial version of a program accountability framework that includes measuring progress against costs, milestones and schedules, and risks for select releases. However, ACE benefit commitments are still not well defined, and the performance targets being used were not always realistic. On other programs, such as SEVIS, we found that while some performance aspects of the system were being measured, others, such as network usage, were not. Disciplined acquisition and development processes.
Our reviews of DHS programs have disclosed numerous weaknesses in key process areas related to system acquisition and management, such as requirements development and management, test management, project planning, validation and verification, and contract management oversight. For example, we reported that the Atlas program office, which had been recently established, had not yet implemented any of these key process areas. For the ACE program, weaknesses in requirements definition were a major reason for recent problems and delays, including the realization during pilot testing that key functionality had not been defined and built into the latest release. For US-VISIT, test plans were incomplete in that they did not, among other things, adequately demonstrate traceability between test cases and the requirement to be verified by testing. Also, both ASI and Secure Flight were proceeding without complete and up-to-date program management plans, and Secure Flight’s requirements were not well developed. In addition, key ASI acquisition controls, such as contract management oversight, were not yet defined. This led to a number of problems for ASI in deploying, operating, and maintaining ISIS technology. Further, ACE and US-VISIT projects have not always effectively employed independent verification and validation. Risk management. Effective risk management is vital to the success of any system acquisition. Accordingly, best practices advocate establishing management structures and processes to proactively identify facts and circumstances that can increase the probability of an acquisition’s failing to meet cost, schedule, and performance commitments and then taking steps to reduce the probability of their occurrence and impact. Our work on the ACE, US-VISIT, and ASI programs, for example, showed that risk management programs were in place, but not all risks were being effectively addressed. In particular, key risks on the ACE program were not being effectively addressed.
Specifically, the ACE program schedule had introduced significant concurrency in the development and deployment of releases; as both prior experience on the ACE program and best practices show, such concurrency causes contention for common resources, which in turn produces schedule slips and cost overruns. Also, the ACE program was passing key milestones with known severe system defects—that is, allowing development to proceed to the next stage even though significant problems remained to be solved. This led to a recurring pattern of addressing quality problems with earlier releases by borrowing resources from future releases, which led to schedule delays and cost overruns. Moreover, it led the program to deploy one release prematurely with the intention of gaining user acceptance sooner. However, this premature deployment actually produced a groundswell of user complaints and poor user satisfaction scores with the release. Similar risks were experienced on the Coast Guard’s Rescue 21 program. For example, we reported that the Coast Guard’s plan to compress and overlap key tests introduced risks, and subsequently the Coast Guard decided to postpone several tests. Security. The selection and employment of appropriate security and privacy controls for an information system are important tasks that can have major implications for the operations and assets and for the protection of personal information that is collected and maintained in the system. Security controls are the management, operational, and technical safeguards prescribed for an information system to protect the confidentiality, integrity, and availability of the system and its information. Privacy controls limit the collection, use, and disclosure of personal information. For several IT programs, security and privacy have been a challenge.
For example, we reported in September 2003 and again in May 2004 that the US-VISIT program office had yet to develop a security plan as required by OMB and other federal guidance, although the program later developed a plan that was generally consistent with applicable guidance. However, the program office had not conducted a security risk assessment or included in the plan when such an assessment would be completed. OMB and other federal guidance specifies that security plans should describe the methodology that is used to identify system threats and vulnerabilities and to assess the risks, and include the date the assessment was completed. In addition, we reported that the Atlas program was relying on a bureauwide security plan that did not address Atlas infrastructure requirements. Further, Atlas had yet to develop a privacy impact assessment to determine what effect, if any, the system would have on individual privacy, the privacy consequences of processing certain information, and alternatives considered to collect and handle the information. On TSA’s Secure Flight program, although the agency had taken steps to implement security to protect system information and assets, we recently reported that these steps were individually incomplete and collectively fell short of a comprehensive program consistent with federal guidance and associated best practices. More specifically, OMB and other federal guidance and relevant best practices call for agencies to, among other things, (1) conduct a systemwide risk assessment that is based on system threats and vulnerabilities and (2) then develop system security requirements and related policies and procedures that govern the operation and use of the system and address identified risks. Although TSA developed two system security plans—one for the underlying infrastructure (hardware and software) and another for the Secure Flight system application—neither was complete. 
Specifically, the infrastructure plan only partially defined the requirements to address the risks, and the application plan did not include any requirements addressing risks. Furthermore, we also recently reported that TSA did not fully disclose to the public, as required by privacy guidance, its use of personal information during the testing phase of Secure Flight until after many of the tests had been completed. Establishing and maintaining adequate staffing. Implementing the IT management processes that I have been describing requires that programs have the right people—not only people who have the right knowledge, skills, and abilities, but also enough of them to do the job. Generally, all the programs we reviewed were challenged, particularly in their initial stages, to assemble sufficient staff with the right skill mix and to treat workforce (human capital) planning as a management imperative. For example, we reported that both the Atlas and the ASI programs were initiated without being adequately staffed. In addition, in September 2003 we reported that the US-VISIT program office had assessed its staffing needs for acquisition management at 115 government and 117 contractor personnel, but that at the time the program had 10 staff within the program office and another 6 staff working closely with them. Since then, US-VISIT has filled 102 of its 115 planned government positions (with plans in place to fill the remaining positions) and all of its planned 117 contractor positions. However, to ensure that staffing needs continue to be met, organizations need to manage human capital strategically, which entails identifying the program functions that need to be performed and the associated numbers and skill sets (core competencies) needed to perform them, assessing the on-board workforce relative to these needs, identifying gaps, and developing and implementing strategies (i.e., hiring, retention, training, contracting) for filling these gaps over the long term.
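The gap analysis described above, comparing assessed competency needs against the on-board workforce, reduces to a simple computation once needs and headcounts are enumerated. A minimal Python sketch (the competency names and counts are hypothetical, not actual DHS figures):

```python
# Hypothetical competency needs vs. on-board staff for a program office.
needed = {"acquisition management": 15, "systems engineering": 10, "cost estimating": 4}
on_board = {"acquisition management": 9, "systems engineering": 10}

def staffing_gaps(needed, on_board):
    """Return each competency whose on-board headcount falls short of assessed need."""
    return {
        skill: required - on_board.get(skill, 0)
        for skill, required in needed.items()
        if on_board.get(skill, 0) < required
    }

print(staffing_gaps(needed, on_board))
# {'acquisition management': 6, 'cost estimating': 4}
```

The hard part of strategic human capital planning is, of course, establishing reliable inputs (the assessed needs and the competency inventory of the current workforce); once those exist, identifying gaps and targeting hiring, retention, training, or contracting strategies follows directly.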
In this regard, the US-VISIT program has made considerable progress. Specifically, we recently reported that it has analyzed the program office’s workforce to determine diversity trends, retirement and attrition rates, and mission-critical and leadership competency gaps, and it has updated the program’s core competency requirements to ensure alignment between the program’s human capital and business needs. In contrast, although the ACE program has taken various informal steps to bolster its workforce (such as providing training), it has been slow to document and implement a human capital strategy that compares competency-based staffing needs to on-board capabilities and includes plans for closing shortfalls. In closing, let me reiterate that we have made a series of recommendations to the department aimed at addressing both the department’s institutional IT management challenges and its IT program-specific weaknesses. To the department’s credit, it has largely agreed with these recommendations. Although some of these have been implemented, most are still works in progress. In my view, these recommendations provide a comprehensive framework for strengthening DHS’s management of IT and increasing the chances of delivering promised system capabilities and benefits on time and within budget. We look forward to working constructively with the department in implementing these recommendations and thereby maximizing the role that IT can play in DHS’s transformation efforts. Mr. Chairmen, this concludes my statement. I would be happy to answer any questions at this time.

Contacts and Acknowledgments

For further information regarding this testimony, please contact Randy Hite, Director, Information Technology Architecture and Systems Issues, at (202) 512-3439, or hiter@gao.gov. Other individuals who made key contributions to this testimony were Mathew Bader, Mark Bird, Justin Booth, Barbara Collier, Deborah Davis, Michael Holland, Ash Huda, Gary Mountjoy, and Scott Pettis.
Information technology (IT) is a critical tool for the Department of Homeland Security (DHS), not only in performing its mission today, but also in transforming how it will do so in the future. In light of the importance of this transformation and the magnitude of the associated challenges, GAO has designated the implementation and transformation of the department as high risk. GAO has reported that in order to effectively leverage IT as a transformation tool, DHS needs to establish certain institutional management controls and capabilities, such as having an enterprise architecture and making informed portfolio-based decisions across competing IT investments. GAO has also reported that it is critical for the department to implement these controls and associated best practices on its many IT investments. In its past work, GAO has made numerous recommendations on DHS institutional controls and on individual IT investment projects. This testimony is based on GAO's body of work in these areas, covering the state of DHS IT management at both the institutional level and the individual program level. DHS continues to work to institutionalize IT management controls and capabilities (disciplines) across the department.
Among these are (1) having and using an enterprise architecture, or corporate blueprint, as an authoritative frame of reference to guide and constrain IT investments; (2) defining and following a corporate process for informed decision making by senior leadership about competing IT investment options; (3) applying system and software development and acquisition discipline and rigor when defining, designing, developing, testing, deploying, and maintaining systems; (4) establishing a comprehensive information security program to protect its information and systems; (5) having sufficient people with the right knowledge, skills, and abilities to execute each of these areas now and in the future; and (6) centralizing leadership for extending these disciplines throughout the organization with an empowered Chief Information Officer. Over the last 3 years, the department has made efforts to establish and implement these IT management disciplines, but it has more to do. Despite progress, for instance, in developing its enterprise architecture and its investment management processes, much work remains before these and the other disciplines are fully mature and institutionalized. For example, although the department recently completed a comprehensive inventory of its major information systems—a prerequisite for effective security management—it has not fully implemented a comprehensive information security program, and its other institutional IT disciplines are still evolving. The department also has more to do in deploying and operating IT systems and infrastructure in support of core mission operations, such as border and aviation security. For example, a system to identify and screen visitors entering the country has been deployed and is operating, but a related exit capability largely is not. Also, a government-run system to prescreen domestic airline passengers is not yet in place.
Similarly, some infrastructure has been delivered, but goals related to consolidating networks and e-mail systems, for example, remain to be fully accomplished. Moreover, GAO's reviews of key nonfinancial systems show that DHS has more to do before the IT disciplines discussed above are consistently employed. For example, these programs have not consistently employed reliable cost estimating practices, effective requirements development and test management, meaningful performance measurement, strategic workforce management, and proactive risk management, among other recognized program management best practices. Until the department fully establishes and consistently implements the full range of IT management disciplines embodied in best practices and federal guidance, it will be challenged in its ability to manage and deliver programs.
Background
SBA Disaster Loan Program
According to SBA officials, when a disaster is declared in an area, a staff member from an SBA field operations center, located in Atlanta, Georgia, or Sacramento, California, contacts the area’s SBDC network to identify a site for setting up a business recovery center, which may be the local SBDC office. Officials added that SBDC staff members co-locate in a business recovery center, when possible, so business loan applicants can access SBDC services at the center. Additionally, SBA officials said that SBDCs help SBA by doing the following:
conducting local outreach to disaster victims;
assisting declined business loan applicants, or applicants who have withdrawn their loan applications, with applications for reconsideration or re-acceptance;
assisting declined applicants in remedying issues that initially precluded loan approvals; and
providing business loan applicants with technical assistance, including helping businesses reconstruct business records, helping applicants better understand what is required to complete a loan application, compiling financial statements, and collecting required documents.
SBA offers two types of disaster loans for businesses: (1) Physical Disaster Loans, which help replace damaged property or restore property to pre-disaster condition, and (2) Economic Injury Disaster Loans, which provide working capital to help small businesses survive until normal operations resume after a disaster. See table 1 for additional details of both types of disaster business loans. SBA has divided the disaster loan process into three steps: application, verification and loan processing, and closing, as shown in figure 1. This report focuses on step 1, the loan application process. Physical Disaster Loan applicants have 60 days and Economic Injury Disaster Loan applicants have 9 months from the date of disaster declaration to apply for a loan.
Disaster victims may apply for a disaster business loan through the Electronic Loan Application online portal or by paper submission. The information from online and paper applications is loaded into SBA’s Disaster Credit Management System, which is the system SBA uses to process loan applications and make determinations for its disaster loan program.
Paperwork Reduction Act
The Paperwork Reduction Act seeks to “ensure the greatest possible public benefit from and maximize the utility of information created, collected, maintained, used, shared, and disseminated by or for the Federal Government.” A collection of information, such as forms, includes a request for information from 10 or more persons to be submitted to the federal government. The act requires agencies to establish a process for evaluating and approving collections of information. The act created the Office of Information and Regulatory Affairs (OIRA) within the Office of Management and Budget (OMB) to perform all Paperwork Reduction Act functions. As part of the review process for a collection of information, OMB’s director must determine whether or not an agency’s proposed collection of information should be approved for public use. The director may approve a collection of information for a maximum of 3 years. Agencies are required to renew information collection forms before expiration to maintain a valid OMB control number. In addition to the process requirement, the act includes broader requirements, including that agencies reduce information collection burdens on the public, ensure that the public has timely and equitable access to public information, and improve information technology practices to reduce burden.
SBA Disaster Loan Application Forms
SBA’s disaster business loan application forms are examples of a collection of information, so each form must be approved by OMB. SBA requires all disaster victims in need of a disaster business loan to submit the applicable loan application forms.
To apply for a loan, a disaster victim must complete the required disaster business loan application forms (see app. II) and relevant additional forms:
SBA Form 5 – Disaster Business Loan Application; OMB Control Number 3245-0017; expiring on January 31, 2018 (see app. II, fig. 5).
SBA Form 413D – Personal Financial Statement; OMB Control Number 3245-0188; expiring on January 31, 2018 (see app. II, fig. 6).
SBA Form 1368 – Additional Filing Requirements Economic Injury Disaster Loan and Military Reservist Economic Injury Disaster Loan; OMB Control Number 3245-0017; expiring on January 31, 2018 (see app. II, fig. 7).
SBA Form 159D – Fee Disclosure Form and Compensation Agreement; OMB Control Number 3245-0201; expiring on October 31, 2017 (see app. II, fig. 8).
Internal Revenue Service (IRS) Form 4506-T – Request for Transcript of Tax Return; OMB Control Number 1545-1872; expiring on December 31, 2016 (see app. II, fig. 9).
Figure 2 shows that, according to SBA, in fiscal year 2014 it took a disaster business loan applicant approximately 4.5 hours to complete all the relevant forms, including gathering required documentation such as the most recent tax return.
SBA Generally Meets Paperwork Reduction Act Form Renewal Requirements through Its Clearance Process and Internal Controls
SBA’s Form Renewal Process Generally Meets Paperwork Reduction Act Requirements
Consistent with the Paperwork Reduction Act’s requirement that agencies establish a review process, SBA’s Records Management Division oversees SBA’s Paperwork Reduction Act Clearance Process, which is documented in a standard operating procedure (SOP). The Paperwork Reduction Act requires an agency to establish a process to evaluate an information collection. SBA’s 2006 Forms Management Program, SOP 00 30 3, provides written operating procedures for the agency’s Paperwork Reduction Act clearance process. Figure 3 shows the overall process.
First, Records Management Division officials identify which SBA forms will expire in 6 months, determine the program office responsible for each form, and issue an expiration memorandum to the program office with a timeline to complete the Paperwork Reduction Act clearance process. The Records Management Division uses the Regulatory Information Service Center and Office of Information and Regulatory Affairs Consolidated Information System, OMB’s electronic system, to identify which collections of information, such as loan application forms, will expire in 6 months. Next, Records Management Division officials said they solicit public comments and coordinate with the relevant program offices, as well as the Office of General Counsel (OGC) and the Office of Inspector General (OIG), to evaluate the information collection. After the internal reviews are completed, the Records Management Division submits the Paperwork Reduction Act submission package to OMB electronically and concurrently posts a 30-day Federal Register notice seeking public comment on the proposed collection of information. Following this comment period, OMB notifies SBA whether it has approved SBA’s Paperwork Reduction Act submission package and, if approved, provides the OMB control number and expiration date for the approved forms. SBA’s requirement that program offices complete and sign routing forms aligns with Paperwork Reduction Act requirements that the head of an agency certify that the information collection is necessary for the proper performance of the agency’s functions. The Records Management Division coordinates the program office reviews. For example, ODA oversees the renewal of Forms 5 and 1368. Two forms, OMB Form 83-I and SBA’s Form 58, document that program offices involved in the collection of information have reviewed and approved the Paperwork Reduction Act submission package.
The first form is OMB Form 83-I, which is OMB’s Paperwork Reduction Act submission form that an agency uses to certify that it is in compliance with the Paperwork Reduction Act requirements by identifying the type and purpose of the collection of information being submitted to OMB. Form 83-I also includes a supporting statement, which justifies the necessity of a collection of information and use of statistical methods to reduce burden, if applicable. The second form is SBA’s Form 58, which is SBA’s Record of Clearance and Approval that tracks the completion of SBA’s Paperwork Reduction Act Clearance Process. SBA program offices responsible for the renewal of the collection of information, such as ODA, complete these forms and document approval by the necessary SBA stakeholders. SBA’s requirement that OGC review and provide feedback on each Paperwork Reduction Act submission package before it is sent to OMB conforms to the act’s requirements for independent review of the information collection. The Paperwork Reduction Act requires an agency to establish an independent process that evaluates whether or not the Paperwork Reduction Act submission package meets the act’s requirements. SBA Form 58 allows the involved program offices and OGC to review the Paperwork Reduction Act submission package and verify its compliance with the act’s requirements. OGC officials stated that they review OMB Form 83-I, supporting statements, and applicable information collection instruments to ensure compliance with Paperwork Reduction Act requirements. OGC also determines whether recent statutory, regulatory, or other changes are reflected in the collection of information and described in the supporting statement document. OIG also reviews and comments on each Paperwork Reduction Act submission package before it is sent to OMB.
According to OIG officials, their comments are intended to make the program stronger and address difficulties encountered in criminal, civil, or administrative matters that changes to the form could help avoid in the future. OIG officials said they also review for clarity and check for issues and deficiencies that OIG has identified in prior audits. OIG officials stated that they do not sign the SBA Form 58 to preserve their independence and not create a perception that the OIG endorses a program office’s document. According to SBA, the agency’s policy to have both OGC and OIG review, and have OGC sign off on, a Paperwork Reduction Act submission package helps the agency not only to achieve its goal of getting OMB to approve the package prior to the expiration of a disaster business loan form, but also to identify and address risks or areas of noncompliance.
SBA Has Internal Controls to Monitor Compliance with Form Renewal Process
Consistent with applicable federal internal controls, SBA has a monitoring system to identify and remedy deficiencies in the Paperwork Reduction Act clearance process. Federal internal control standards state that management should establish and operate activities to monitor controls and that management should also remediate identified internal control deficiencies on a timely basis. According to SBA officials, as part of its monitoring system, if a program office is nonresponsive or fails to meet the timelines outlined in its expiration memorandum, then the Records Management Division may elevate the issue to the Chief Operating Officer, who would contact the program office’s Associate or Assistant Administrator to address the matter. Records Management Division officials also said that they may obtain from OMB an extension of an expiration date so that a collection of information is not out of compliance with the Paperwork Reduction Act.
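The expiration monitoring that drives this process, flagging collections of information whose OMB approval lapses within roughly 6 months, can be sketched in a few lines. The sketch below is illustrative only: the form inventory, field names, and 180-day window are simplifications we introduce here; in practice the Records Management Division relies on OMB's electronic system rather than a local list.

```python
from datetime import date, timedelta

# Hypothetical inventory of collections of information (illustrative only);
# the real data comes from OMB's electronic system, the Regulatory Information
# Service Center and OIRA Consolidated Information System.
FORMS = [
    {"form": "SBA Form 5", "omb_control": "3245-0017",
     "expires": date(2018, 1, 31), "office": "ODA"},
    {"form": "SBA Form 159D", "omb_control": "3245-0201",
     "expires": date(2017, 10, 31), "office": "ODA"},
    {"form": "IRS Form 4506-T", "omb_control": "1545-1872",
     "expires": date(2016, 12, 31), "office": "IRS"},
]

def forms_expiring_within(forms, today, window_days=180):
    """Flag unexpired collections whose OMB approval lapses within the
    window (roughly 6 months), so an expiration memorandum can be issued
    to the responsible program office."""
    cutoff = today + timedelta(days=window_days)
    return [f for f in forms if today <= f["expires"] <= cutoff]

# As of June 1, 2017, only Form 159D (expiring October 31, 2017) falls
# inside the 180-day window; Form 5 is too far out and Form 4506-T has
# already lapsed.
for f in forms_expiring_within(FORMS, today=date(2017, 6, 1)):
    print(f["form"], f["expires"].isoformat())
```

Filtering out already-lapsed approvals (the `today <= expires` half of the test) mirrors the distinction the report draws between routine renewals and the separate remediation path for collections that are already out of compliance.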
We found that SBA’s disaster business loan application forms generally include the required elements, such as having an OMB control number, valid expiration date, an estimate of how long it will take to complete the form, and a statement notifying applicants that they are not required to respond to the request for information if the form does not display a valid OMB control number. We also observed from a demonstration that the Electronic Loan Application disaster business loan application forms include the OMB control number, expiration date, and a disclaimer that if the OMB control number is missing, an applicant is not required to complete the forms. Although SBA’s disaster business loan forms generally are in compliance with these Paperwork Reduction Act requirements, we identified three instances of noncompliance. Consistent with OMB’s finding in its 2014 Information Collection Budget Report, we found an instance of noncompliance when SBA’s Form 159D – Fee Disclosure and Compensation Agreement was not submitted for OMB’s review prior to its expiration date. SBA was aware of this violation, and to address it, Records Management Division officials said that the program office responsible for Form 159D instituted personnel and operational changes, including weekly reviews of pending expirations, progress reports for renewals in progress, and designation of individuals accountable for Paperwork Reduction Act compliance. In addition, we found that SBA Form 159D – Fee Disclosure and Compensation Agreement did not have an expiration date and SBA Form 413D – Personal Financial Statement did not include a statement informing applicants that they do not have to complete a form that does not display a valid OMB control number. SBA officials corrected these forms in October 2016. 
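The required display elements just described map naturally to a checklist. The sketch below is a hypothetical illustration, not SBA's actual data model: the dict fields and the burden figure are invented for the example.

```python
import re

def check_pra_elements(form):
    """Return a list of the Paperwork Reduction Act display elements
    missing from a form record (the dict representation is hypothetical)."""
    problems = []
    if not re.fullmatch(r"\d{4}-\d{4}", form.get("omb_control") or ""):
        problems.append("valid OMB control number")
    if not form.get("expiration_date"):
        problems.append("expiration date")
    if not form.get("burden_estimate_minutes"):
        problems.append("estimated completion time")
    if not form.get("no_response_statement"):
        problems.append("statement that a response is not required absent "
                        "a valid OMB control number")
    return problems

# A record resembling the Form 159D deficiency noted above: an otherwise
# complete form displaying no expiration date.
print(check_pra_elements({
    "omb_control": "3245-0201",
    "expiration_date": None,
    "burden_estimate_minutes": 15,   # hypothetical burden figure
    "no_response_statement": True,
}))
```

A check of this kind would also have caught the second deficiency noted above, the Form 413D record lacking the no-response statement.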
SBA’s Records Management Division officials told us that the division documents any deficiencies in a memorandum to the appropriate program office and directs that office to remedy the issue in consultation with other program offices, if necessary. Records Management Division officials also said that the program office resubmits the collection of information to the division. Records Management Division officials added that no standard time frames have been set for remediation because timing depends on when the deficiency is identified and the nature and complexity of the issue. In addition to the Paperwork Reduction Act clearance process and the related monitoring system, SBA has feedback mechanisms in place. The federal internal control standards state that management should use quality information to achieve its objectives. SBA uses two survey instruments to solicit customer feedback from a sample of business loan applicants and recipients (see app. III). According to SBA officials, SBA uses the surveys, both administered by phone, to solicit input and suggestions for improvements. First, SBA contracts with a third-party group to administer the American Customer Satisfaction Index (ACSI) survey. Second, SBA’s Customer Service Center conducts its own customer satisfaction survey to solicit feedback from selected business loan applicants and recipients, including suggestions for improving the disaster loan process. One question from the Customer Service Center questionnaire asks, “Based on your experience with the SBA, do you have any suggestions for making the process easier?” According to SBA officials, the collected suggestions undergo multiple levels of consideration within SBA’s Continuous Improvement Process Board and ODA’s Associate Administrator decides whether to implement a recommendation. In addition to receiving suggestions from loan applicants, the Continuous Improvement Process Board receives suggestions from SBA employees.
SBA Plans to Continue Streamlining the Process, but Could Do More to Integrate and Clarify Available Information Resources
SBA Has Implemented Some Actions and Has Planned Others Intended to Reduce Burdens on Loan Applicants
Recent and planned actions for the disaster loan program, described in SBA’s Fiscal Year 2015 Annual Performance Report, have focused on promoting disaster preparedness, streamlining the loan process, and enhancing online application capabilities (see table 2). According to the report, SBA’s objectives with respect to disaster assistance are to deploy its resources quickly, effectively, and efficiently in a manner that preserves jobs and helps small businesses return to operation. The actions SBA has taken or plans to implement are intended to achieve these objectives. Many of SBA’s recent and planned changes to the disaster loan program described in its 2015 performance report incorporate various leading practices intended to reduce paperwork burdens. We reviewed and identified leading practices from the Hurricane Sandy Rebuilding Task Force Report (2013), an OMB memorandum to agency heads (June 2012), and the Small Business Paperwork Relief Task Force Reports (2003, 2004). These materials note the following leading practices, among others:
Separating application tracks for business and home disaster loans: As we previously reported, SBA implemented separate tracks in October 2013.
Expediting approval of loan applications that meet minimum credit score and other requirements: SBA revised its disaster loan program regulations in April 2014 so that SBA now may consider an applicant’s credit rather than business cash flow in assessing the applicant’s repayment ability.
Using electronic communication and “fillable fileable” forms: SBA introduced the online application capability in August 2008, where loan applicants can complete and submit forms online, and SBA is currently updating the system with more features.
Using “smart” electronic forms to assure data submitted meets information system requirements: SBA’s online application portal includes system checks that ensure information entries meet formatting requirements. The portal also provides notices specifying formatting requirements. Further, SBA has reported that increased use of electronic loan applications has reduced errors and loan-processing times.
SBA has a dedicated web portal for disaster loan assistance (available at https://disasterloan.sba.gov/ela/; see app. IV, fig. 12) where disaster victims can apply for a loan online and check on the status of a loan application. According to SBA officials, recent enhancements to the web portal include a feature that allows a loan applicant to check the status of an application, including the application’s relative place in the queue for loan processing. The web portal also includes a frequently asked questions page, phone and email contacts to SBA customer service, and links to other SBA information resources. An SBA official we interviewed explained that information from electronic applications is imported directly into the Disaster Credit Management System, instead of SBA staff manually entering information from paper applications into the system. As a result, SBA officials said, the agency has reduced the likelihood of errors in loan applications, reduced follow-up contacts with loan applicants, and expedited loan processing.
SBA Could Better Integrate Consistent Information about the Disaster Business Loan Process into Its Web Portal and More Fully Define Loan Terminology
Disaster Loan-Related Information Is Not Easily Accessible
SBA has published several written and electronic resources about the disaster loan process, but much of this information is not easily accessible to loan applicants and SBA’s resource partners from the disaster loan assistance web portal.
Available resources include the following:
The disaster business loan application form (Form 5, see app. II, fig. 1) lists documents required for a loan application along with additional information that may be necessary for a decision on the application.
SBA’s Fact Sheet for Businesses of All Sizes (see app. IV, fig. 13) provides information about disaster business loans, including estimated time frames, in a question-and-answer format. For example, the fact sheet answers questions concerning collateral requirements of disaster loans, information that must be submitted for a loan, and the amount of time an applicant might expect to wait before the application is approved or denied.
The 2015 Reference Guide to the SBA Disaster Loan Program and the three-step process flier (see app. IV, figs. 14 and 15) set out the three steps of the loan process, required documents, and estimated time frames.
SBA’s online Partner Training Portal provides disaster-loan-related information and resources for SBDCs (available at https://www.sba.gov/ptp/disaster; see app. IV, fig. 16). The training portal includes two videos—one explaining the disaster loan process and the other explaining the disaster assistance program—and three documents that provide information on disaster preparedness, types of disaster loans, and loan procedures.
However, SBA has not effectively integrated these resources into its disaster loan assistance web portal, as much of this information is not easily accessible from the web portal’s launch page. The federal Guidelines for Improving Digital Services state that an agency should (i) integrate its digital presence into its overall customer experience strategies, and (ii) publish information in ways that make it easy to find, access, share, distribute, and repurpose. Additionally, the Paperwork Reduction Act has a broad requirement that an agency disseminate information in a manner that is efficient, effective, and economical.
SBA’s web portal appears to serve as a one-stop shop where disaster victims can apply for and access more information about loans, among other things. However, when a user clicks on the “General Loan Information” link in the web portal, it routes the user back to SBA’s main website, and the web page featuring loan-related information contains a menu of additional links. In particular, and as shown in figure 4, to access the fact sheet, the reference guide, and the three-step process flier, a site user may click on three successive links and then select from a menu of 15 additional links. Among the latter group of 15 links, the link for Disaster Loan Fact Sheets contains further links to five separate fact sheets for various types of loans. Similarly, to access the reference guide or the three-step flier, the user must click on the Disaster Policies and Procedures link, which is 1 of 15 available link selections. In addition, key disaster loan information resources are not integrated into SBA’s Partner Training Portal and SBDCs were unaware of key resources. As mentioned earlier, SBA’s performance reporting indicates that the agency shared the reference guide with resource partners. Most SBDCs we interviewed were aware that SBA was promoting online applications through the web portal and had assisted disaster victims in completing online applications. However, at least half of the SBDCs we interviewed were not aware of additional information resources. Among the eight SBDCs we interviewed, four SBDCs were not aware of SBA’s three-step process flier and five SBDCs were not aware of the Partner Training Portal. Additionally, the Partner Training Portal does not include the fact sheet, the reference guide, or the loan process flier. SBA officials said that SBDCs that have not experienced a declared disaster in recent years may not be aware of more recently developed information resources because those SBDCs would not have encountered the need for them. 
However, two of the four SBDCs we interviewed that were not aware of the three-step process flier experienced a disaster during or after 2014—which, according to SBA, was when the flier was created. Although SBA has created and posted key disaster loan information online, these efforts are not effectively integrated in a way that helps users efficiently find needed information following a disaster. According to SBA officials, SBA plans to incorporate updated information from the three-step process flier on the Electronic Loan Application, but it does not have a time frame for specific improvements. SBA officials also said that disaster-loan information and resources are not prominently located on SBA’s website because of the website’s layout and space constraints arising from the agency’s other programs and priorities. However, officials said that the website’s launch page includes a banner section that prominently features recent news, including information related to major disasters. But SBA officials added that any information displayed on the banner is temporary. Absent better integration of disaster business loan-related resources into SBA’s web portal and streamlined access to business loan-related resources on SBA’s website and its Partner Training Portal, loan applicants—and SBDCs assisting disaster victims—may not be aware of key information for completing disaster business loan applications.
Disaster-Related Resources Do Not Consistently Feature Key Information
The contents of SBA’s disaster-loan-related resources do not consistently feature key information about (1) the three-step loan process, (2) documentation requirements and reasons for requiring such information, and (3) estimated time frames for the loan process. Each resource includes some of the information; however, none of the resources provide all of the information and none include reasons or explanations for documentation requirements (see table 3).
We found that business loan applicants reported confusion to SBDCs about the overall loan process, required documentation, and time frames, and inconsistent information from SBA may have contributed to these issues. For example, according to SBDCs we interviewed, as well as responses to SBA’s and the ACSI surveys, some business loan applicants found the loan process and required documentation confusing for the following reasons: Inconsistent information about loan application process. According to the SBA 2015 Performance Report, SBA uses a “three-step process” communications strategy to provide a consistent message to the public in promoting the disaster loan application process. However, as previously mentioned, not all SBA disaster-related resources include information about the three-step process and the consistency of information varies among information resources (see table 3). Moreover, three SBDCs told us that business loan applicants felt SBA did not clearly communicate parts of the process involved in applying for disaster business loans. For example, according to two of the three SBDCs, there were instances when applicants who applied to SBA for a disaster business loan were told to first register with the Federal Emergency Management Agency to obtain a disaster number. The SBDCs stated that the applicants were confused by directions from SBA indicating that such registration was required. Based on our follow-up with SBA officials, SBA encourages business loan applicants to also register with the Federal Emergency Management Agency, but it is not required. However, the Partner Training Portal’s Disaster Assistance Video conveys an inconsistent message and seems to suggest that a disaster victim must first register with the Federal Emergency Management Agency before applying for a disaster loan. Unexpected requests for additional documentation.
One of the three SBDCs told us of instances where applicants thought they had provided all the required documentation, but received subsequent requests from SBA for additional documentation. In a 2014 report on disaster assistance, we found in interviews with SBDCs and local business organizations that SBA’s follow-up requests for additional documentation prolonged the application process and loan decision. Furthermore, we observed that although the paper application form includes a list of additional information that may be requested, the Electronic Loan Application does not include a list of other information an applicant may have to provide SBA in addition to the required forms. Also, as of August 2016, the Electronic Loan Application did not contain two disaster business loan forms: (1) SBA Form 159D – Fee Disclosure and Compensation, and (2) SBA Form 1368 – Additional Filing Requirements Economic Injury Disaster Loan, and Military Reservist Economic Injury Disaster Loan. According to SBA officials, SBA includes links to these two forms in follow-up letters sent to disaster business loan applicants after they submit their loan applications. Lack of information about the reasons for required documents. According to responses from SBA’s survey of loan applicants provided by SBA, one survey respondent said that the loan process required too much information from applicants when SBA could access the same information elsewhere. The applicant said that she should not have to locate copies of her tax return when SBA can use her signed tax transcript form to obtain tax information from the IRS. Further, according to one SBDC we interviewed, applicants it assisted did not understand why SBA requires applicants to submit both a current tax return and complete an IRS tax transcript authorization form. SBA officials said that tax transcripts do not provide all the information contained in a tax return.
Therefore, they need information from transcripts and returns to make loan decisions. However, the reasons why are unclear to applicants because none of the available resources provide this explanation. Consistent with comments provided by SBDCs, the loan application process received the lowest satisfaction scores on the ACSI survey. According to the ACSI survey results for 2012 through 2015, the loan application process received business loan applicants’ and recipients’ lowest satisfaction scores of any SBA disaster loan program processes. The SBA Disaster Loan Program processes surveyed in the ACSI are application process, decision process, Customer Service Center, loan closing, inspection process, SBA staff, and recovery center. In response to questions about the application process, survey respondents were least satisfied with the “amount of paperwork required to complete the loan application” and “ease of attaining the information required to fill out the application.” Moreover, based on the survey results, ACSI recommended from 2012 to 2015 that SBA focus on improving the application process as a means of increasing business loan applicants’ and recipients’ satisfaction with the process. In addition, SBDCs reported and surveys found that applicants’ expectations about the time frames associated with the entire process were unmet and available resources do not consistently inform applicants about expected time frames.

Unmet expectations about time frame to apply for and receive loans. Two SBDCs we interviewed told us that business applicants commonly complained that going through the loan process and receiving their loans took too long. Two other SBDCs suggested that SBA could provide more information about the loan process to better manage applicants’ expectations. Specifically, one SBDC suggested SBA could provide more information about estimated time frames.
Another SBDC said some of the applicants the SBDC assisted expected a faster loan process, and many business owners may start a loan application but never complete the application because they cannot spare time away from their business to collect all the required documents and to complete the loan application forms. In addition, according to responses from SBA’s survey of loan applicants provided by SBA, one survey respondent suggested that the business loan process should require fewer and shorter application forms. Another respondent said that the business loan process is too complicated and too time-consuming, and the respondent withdrew the loan application as a result.

Information about time frames not included in all resources. The three-step process flier and the resource guide for businesses provide estimates of expected time frames for the processing and closing steps of the loan process. However, the Electronic Loan Application and some other resources do not provide an applicant with any estimated time frame of when the disaster business loan application will be processed. The Electronic Loan Application does provide the applicant with application status messages, such as “processing application.” See appendix V for application status messages and descriptions. According to SBA officials, SBA has updated its disaster forecasting model and planning documents that enable SBA to better estimate loan processing time frames based on the severity of a disaster, the volume of expected loan applications, and other factors. According to the ACSI survey results for 2012 through 2015, loan applicants and recipients surveyed consistently rated the loan decision process with the second-lowest satisfaction scores.
In response to questions about the decision process, survey respondents were least satisfied with the “timeliness of the decision.” Based on the survey results, ACSI recommended from 2012 to 2015 that SBA focus on improving the loan decision process as a means of increasing applicants’ and recipients’ satisfaction with the process. The Paperwork Reduction Act and OMB state that agencies should explain the collection and use of personal information and promote transparency with the public. In particular, the Paperwork Reduction Act has a broad requirement that an agency explain to the person receiving an information collection the reasons for collecting the information and the agency’s use of the collected information. Furthermore, according to OMB’s directive on open government, transparency promotes accountability by providing the public with information about government activities. However, SBA’s paper and online resources do not provide consistent information about the three-step loan process, required documents and reasons for the requirements, and processing time frames, which could be contributing to applicants’ confusion. According to SBA officials, SBA’s customer service representatives working in disaster areas provide applicants information about the loan process, including explaining the three-step loan process and estimated time frames for completing the process. SBA officials also added that they refer business applicants to SBDCs for additional assistance in completing disaster loan applications. However, as previously mentioned, we found that SBDCs were not always well informed about information resources that explained the disaster loan process. Further, business applicants who apply without seeking assistance from SBA or SBDCs may see only SBA’s fact sheet, reference guide, loan-process flier, or application forms.
Absent more consistent disaster loan-related information in each of the agency-produced paper and online resources, loan applicants and SBDCs may not understand the disaster loan process, documentation requirements, and time frames and may continue to find the loan process confusing.

Some Business Loan Applicants Are Confused about Finance Terminology

Our SBDC interviews indicate that some business loan applicants are confused about the finance terminology and financial forms required in the application, particularly the requirement that they submit a personal financial statement. Three SBDCs we interviewed mentioned instances in which business applicants had difficulty understanding the parts of the loan application dealing with financial statements and finance terminology; for example, there were applicants who were not familiar with financial statements, did not know how to access information contained in a financial statement, and did not know how to create a financial statement. Although the loan forms include instructions, the instructions do not define the financial terminology. According to SBA officials, when applicants do not know how to create a personal financial statement, the agency’s customer service representatives direct applicants to SBDCs for help. Two of the three SBDCs said these difficulties arise among business owners who do not have formal education or training in finance or related disciplines, and who are attempting applications during high-stress periods following disasters. Federal statute requires agencies to use clear and understandable terminology. Specifically, the Plain Writing Act of 2010 requires that federal agencies use plain writing in every document that they issue. Plain writing is defined as clear, well-organized writing that follows best practices appropriate for the intended audience.
According to SBA officials, although the agency does not provide a glossary that defines finance terminology in loan application forms, the online application portal has a “contextual help” feature that incorporates information from application forms’ instructions to help applicants complete disaster loan forms. Additionally, as previously stated, SBA officials said that there are other resources, such as SBA customer service representatives and local SBDCs, available to assist business loan applicants and to explain the loan forms and key terms. As mentioned earlier, promoting disaster preparedness among businesses is one of the strategies in SBA’s 2015 performance report, and actions SBA has taken include holding disaster preparedness webinars and conducting regional outreach. However, these efforts may not offer sufficient assistance or reach all applicants. Without explanations of finance terminology, loan applicants may not fully understand the disaster business loan application requirements, which may contribute to confusion in completing the financial forms.

Conclusions

Generally, lack of integration of resources into the disaster loan assistance web portal, inconsistent information in written and online resources, and undefined finance terminology on the loan application have contributed to the burden on businesses applying for disaster loans. In particular, disaster business loan applicants and resource partners may not be aware of key information for completing disaster business loan applications because key resource materials such as SBA’s fact sheet, reference guide, and three-step process flier are not easily accessible from the web portal. In addition, without consistent information about the loan process, explanation of documentation requirements, and expected time frames in SBA’s resource materials, loan applicants and resource partners may continue to find the loan process confusing.
Further, without defined financial terminology in loan forms, some applicants may not fully understand the requirements of the application. A more integrated, consistent, and clear dissemination of information by SBA would help business disaster victims better access information about the disaster loan process, better understand the loan document requirements and expected time frames, and better understand the definition of loan terminology, thus helping to reduce victims’ burdens in recovering from disasters.

Recommendations for Executive Action

We are recommending the following three actions:

To help business disaster victims and resource partners better access information about the disaster loan process, the Administrator of the Small Business Administration should integrate information resources such as the fact sheet, reference guide, and three-step process flier into its disaster loan assistance web portal and Partner Training Portal in a way that is more accessible to users.

To help reduce confusion about the disaster loan process and the time frames applicants may experience, the Administrator of the Small Business Administration should ensure the consistency of content across its disaster loan process resources by including in these written and online resources, as appropriate, the following: (1) the three-step process; (2) the types of documentation SBA may request and the general reasons why such information may be requested; and (3) estimates of loan processing time frames applicants might experience and external factors, such as the severity of a disaster, that may affect these time frames using, for example, estimates from its forecasting and related planning tools.
To further assist disaster business loan applicants, the Administrator of the Small Business Administration should define technical terminology related to financial statements and other finance terminology on the disaster business loan application forms, in both electronic and paper format. For example, in the online application portal, SBA could incorporate a glossary in the “help” feature. Additionally, SBA could include a glossary in the paper application, so that business applicants who apply by mail can access the definitions as well.

Agency Comments

We provided a draft of this report to SBA for review and comment. The SBA liaison—Program Manager, Office of Congressional and Legislative Affairs—stated in an e-mail that SBA’s Office of Disaster Assistance agreed with our recommendations. The SBA liaison also provided technical comments in an e-mail, which we incorporated where appropriate. We are sending copies of this report to appropriate committees and the Administrator of SBA. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or shearw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.

Appendix I: Objectives, Scope, and Methodology

The Recovery Improvements for Small Entities After Disaster Act of 2015 includes a provision for us to evaluate the steps that the Small Business Administration (SBA) has taken to comply with the Paperwork Reduction Act of 1995 in administering its Disaster Loan Program.
Specifically, this report examines (1) controls in SBA’s process for complying with the form renewal requirements of the Paperwork Reduction Act in administering its Disaster Loan Program, and (2) SBA’s recent and planned actions to reduce the burden on business loan applicants for the Disaster Loan Program. Although the processes we are evaluating apply to disaster loans for homeowners and businesses, this report focuses on disaster business loans. Additionally, SBA has divided the disaster loan process into three steps: application, verification and loan processing, and closing (see fig. 1). This report focuses on step 1, the loan application process. To examine the extent to which SBA has processes to implement and monitor compliance with the Paperwork Reduction Act’s requirements in administering its Disaster Loan Program and the effectiveness of the processes and controls in ensuring the requirements are met, we reviewed the act and Office of Management and Budget (OMB) regulations to identify relevant requirements. We also reviewed SBA’s policies and procedures, including the Forms Management Standard Operating Procedures, to identify its processes for meeting Paperwork Reduction Act requirements, and we assessed the processes against the act and applicable federal internal controls. We also interviewed SBA officials to understand SBA’s compliance with the act’s requirements and the effectiveness of SBA’s controls. Additionally, we reviewed each disaster business loan form to determine if SBA’s disaster business loan application forms satisfied the Paperwork Reduction Act requirements of having an OMB control number; a valid expiration date; an estimate of how long it will take to complete the form; and a statement notifying applicants that if a form does not display a valid OMB control number, then applicants do not have to complete that form.
Specifically, we reviewed SBA Form 5 – Disaster Business Loan Application; SBA Form 159D – Fee Disclosure Form and Compensation Agreement; SBA Form 1368 – Additional Filing Requirements Economic Injury Disaster Loan, and Military Reservist Economic Injury Disaster Loan; and SBA Form 413D – Personal Financial Statement. For each form, we also reviewed the form renewal package—OMB Form 83-I; OMB Form 83-I supporting statement; and SBA Form 58—for SBA Form 5, Form 1368, and Form 413D, for years 2008, 2011, and 2014; and for SBA Form 159D, for years 2009, 2013, and 2014. For information on how an applicant would navigate the Electronic Loan Application portal to submit a disaster business loan application, we received an in-person demonstration of the Electronic Loan Application portal at SBA’s headquarters. To examine the extent to which SBA has developed plans or implemented actions to further reduce the paperwork burden of disaster business loan applicants, we reviewed SBA’s 2015 Performance Report and other SBA documentation that set out recent and planned actions intended to reduce burden or enhance loan processes for disaster business loan applicants. Moreover, we reviewed and identified leading practices intended to reduce paperwork burdens from the Hurricane Sandy Rebuilding Task Force Report (2013), an OMB memorandum to agency heads (June 2012), and the Small Business Paperwork Relief Task Force Reports (2003, 2004). We also interviewed Office of Disaster Assistance officials responsible for administering the Disaster Loan Program to discuss recent and planned actions to reduce the paperwork burden on disaster business loan applicants. In addition, we conducted semi-structured interviews with Small Business Development Centers (SBDC) to identify burdens faced by disaster loan applicants and suggestions for addressing those burdens.
We selected a nongeneralizable sample of eight SBDCs to interview, based on which counties in each of the four Census regions had the highest number of approved disaster business loans for calendar years 2012 through April 1, 2016. Specifically, we associated each state within SBA’s 10 regions with one of the four Census regions—Northeast, Midwest, South, and West—which allowed us to have geographic diversity in the SBDCs we interviewed. Within each Census region, we identified two counties with the highest number of approved disaster business loans. In cases where more than one county tied for the highest number of approved disaster business loans, the county with the highest loan amount disbursed was selected. In instances where a county had the two highest numbers of loan approvals, the county with the third highest number of approved disaster business loans was selected. To select the eight SBDCs to interview, we used the city and zip code of the counties with the highest number of approved disaster business loans to identify the SBDCs located either within or nearby these counties. If a county with the highest number of approved disaster business loans did not have an SBDC located within it, we then selected the SBDC closest to the zip code receiving the highest number of disaster business loan approvals. If a county had multiple SBDCs located within it, we then looked at the zip code affected by the disaster in the county and selected the SBDC closest to the zip code receiving the highest number of disaster business loan approvals. Our selections do not represent the views of other SBDCs that were not included. We also reviewed both SBA’s and the American Customer Satisfaction Index’s (ACSI) customer satisfaction survey results. For analysis of SBA’s customer satisfaction survey results, we obtained survey suggestions submitted by disaster business loan applicants from June 2012 to March 2015.
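The county-selection rules described above (highest approval counts per Census region, with ties broken by loan dollars disbursed) can be sketched as follows. This is a hypothetical illustration only: the county names and figures are invented, not GAO's actual loan records, and GAO's selection also involved manual zip-code matching not modeled here.

```python
# Hedged sketch of the county-selection rules described above.
# The data are invented for illustration; they are not GAO's records.
from dataclasses import dataclass


@dataclass
class County:
    name: str
    approvals: int     # number of approved disaster business loans
    disbursed: float   # total loan amount disbursed, in dollars


def top_two_counties(counties):
    """Pick two counties for a Census region: rank by approval count,
    breaking ties on total amount disbursed."""
    ranked = sorted(counties, key=lambda c: (c.approvals, c.disbursed),
                    reverse=True)
    return ranked[:2]


# Invented example data for one Census region.
region = [
    County("Alpha", 120, 5.0e6),
    County("Beta", 120, 7.5e6),   # ties Alpha on approvals; wins on dollars
    County("Gamma", 90, 9.0e6),
]
picks = top_two_counties(region)
print([c.name for c in picks])  # ['Beta', 'Alpha']
```

The tuple sort key encodes the tie-breaking order directly: approvals are compared first, and disbursed dollars decide only when approvals are equal.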
We selected this time period to be consistent with the time period used in the selection of SBDCs. The results comprised a list of 19 survey suggestions submitted by disaster business loan applicants and referred to the Continuous Improvement Process Board for review. For analysis of ACSI’s customer satisfaction survey results, we looked at ACSI’s 2012 through 2015 reports and identified the loan process areas that negatively and positively affected survey respondents’ satisfaction with SBA’s disaster business loan program, as well as ACSI’s recommendations for improvements. We determined that these data were sufficiently reliable for the purpose of describing applicants’ experiences with the disaster business loan application process. We conducted this performance audit from February 2016 to November 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Small Business Administration Disaster Business Loan Application Forms

Appendix III: SBA and ACSI Customer Satisfaction Surveys

Appendix IV: Small Business Administration Disaster Business Loan Resources

Appendix V: Small Business Administration Electronic Loan Application Notification Messages

Appendix VI: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Jill Naamane (Assistant Director), Kun-Fang Lee (Analyst-in-Charge), Bethany Benitez, Tim Bober, William R. Chatlos, Camille Henley, Lindsay Maple, Marc Molino, and Tovah Rom made key contributions to this report.
According to SBA, the agency received more than 40,000 disaster business loan applications from fiscal years 2010 through 2014, and estimates that, on average, applicants spent a total of more than 25,000 hours per year filling out disaster loan application forms. PRA requires agencies to minimize paperwork burden on individuals and small businesses. The Recovery Improvements for Small Entities After Disaster Act of 2015 includes a provision for GAO to evaluate SBA's compliance with PRA. This report examines (1) controls in SBA's process for complying with PRA form renewal requirements in administering its disaster business loans, and (2) SBA's recent and planned actions to reduce burden for business loan applicants. GAO analyzed applicable laws and guidance, including PRA and OMB and SBA guidance and policies, relevant reports, and loan applicants' responses to SBA and other surveys. GAO also interviewed SBA officials and a nongeneralizable sample of eight SBA resource partners (Small Business Development Centers) that provided disaster-related assistance to businesses, based on county-level loan approvals for 2012 through April 1, 2016. The Small Business Administration (SBA) process for complying with the Paperwork Reduction Act (PRA) includes a number of controls to help disaster business loan forms comply with the act and Office of Management and Budget (OMB) requirements (see figure). For example, SBA has a standard operating procedure that documents its clearance process; a requirement to solicit public comments; and a requirement that offices of Disaster Assistance, General Counsel, and Inspector General review submission packages for PRA clearance. SBA surveys business loan applicants to solicit suggestions for improving the loan process. The disaster business loan forms also include a valid OMB control number, as required by PRA.
SBA has implemented and planned actions to streamline the disaster business loan process, but the agency has not made loan-related information and requirements easily accessible or consistent, or defined key terms, contributing to applicants' burden. SBA's 2015 Performance Report set out the agency's recent and planned actions, including streamlining the loan process and enhancing online loan application capabilities. SBA has published written and electronic materials about the disaster loan process, but applicants cannot easily access these materials from SBA's dedicated disaster loan web portal, contrary to federal guidelines for improving digital services. Also, SBA's materials provide inconsistent information on the process, required documents, and estimated processing time frames. Business loan applicants reported that they found the documentation requirements confusing and the application time frames unclear. PRA and an OMB directive on open government generally state that agencies should explain the collection and use of information and promote transparency by providing the public with information about government activities. Similarly, some Small Business Development Centers told GAO that loan applicants have expressed confusion over undefined financial terminology in SBA's loan application, particularly terminology in the required personal financial statement. Federal law requires that agencies' forms be written using plain language that is appropriate for the intended audience. Improved integration of electronic resources and consistency of information in SBA's materials would help business disaster victims better access resources and understand the disaster loan process and expected time frames. Further, providing definitions of loan terminology can help reduce victims' confusion.
Background

To help meet its goal of replacing a portion of the conventional fuel used by light-duty vehicles in the United States with replacement fuels, EPACT established mandates, to be implemented by the Secretary of Energy, that require certain fleet operators to include alternative-fueled vehicles (AFVs) in their fleets. Specifically, EPACT required that federal fleets acquire AFVs beginning in fiscal year 1993 and that state fleets and alternative fuel providers acquire AFVs beginning in model-year 1996. The federal AFV fleet program went into effect in 1993, but the mandates for state and alternative fuel provider fleets were delayed until 1997 because, according to a DOE official, the Department did not issue the rulemaking, as required by EPACT, early enough for the mandates to take effect in 1996. Also, under EPACT, the Secretary of Energy may require municipal and private (business) fleets to purchase an increasing percentage of AFVs to help meet the fuel-replacement goal. Under EPACT, DOE published an advance notice of proposed rulemaking in April 1998 and held public hearings in May and June 1998 to determine whether the establishment of the municipal and private fleet mandate is necessary and whether such a mandate will help attain EPACT’s fuel-replacement goal. EPACT does not require that the goal be achieved and authorizes the Secretary of Energy to modify the goal or the target years if he or she determines that they are not achievable. Table 1 presents a summary of EPACT’s AFV acquisition mandates for the fleets covered by the act.

EPACT’s Replacement Goal Is Not Likely to Be Achieved

The goal of EPACT to replace 10 percent of the conventional fuel consumed by light-duty vehicles by 2000 and 30 percent by 2010 with replacement fuels is unlikely to be achieved.
On the basis of EIA’s modeling analysis, we estimate that alternative fuels will account for less than 1 percent of the total fuel to be consumed by light-duty vehicles in 2000 and about 3.4 percent in 2010, even after accounting for EPACT’s provisions mandating that fleets acquire AFVs. Previous studies by DOE have also concluded that EPACT’s goal is unlikely to be achieved after implementing the fleet acquisition mandates. Appendix I summarizes the results of several previous studies by DOE and Oak Ridge National Laboratory. Industry and DOE officials we talked with gave several reasons why the consumption of alternative fuels by light-duty vehicles will fall short of EPACT’s replacement goal. First, EPACT mandates that fleets acquire AFVs but does not explicitly require that those vehicles use alternative fuels. Consequently, according to industry officials, some fleets meet their AFV requirements by purchasing vehicles capable of using both gasoline and an alternative fuel (called dual-fueled vehicles), but these vehicles are usually run on gasoline. Moreover, both DOE and industry officials believe that achieving EPACT’s goal will require greater use of alternative fuels by vehicles beyond those in the fleets covered by the act, a development they believe is unlikely. For one thing, the high price of AFVs discourages their use. For example, according to one industry official, converting a conventional vehicle to run on propane can cost over $3,500, while a manufacturer’s AFV that runs on propane can cost about $6,000 more than a conventional gasoline-powered vehicle. In addition, the lower price of gasoline discourages increased use of the higher priced alternative fuels. Both DOE and industry officials said that the price of gasoline is simply too low for the transportation sector to purchase significant quantities of alternative fuels. (See fig. 1 for a comparison of gasoline and propane prices, adjusted for inflation.)
Finally, the infrastructure needed to keep AFVs refueled is currently inadequate to support the wide-scale use of AFVs that are not operated as centrally fueled fleet vehicles. Industry officials told us that the consumption of alternative fuels for transportation is too small to justify any large-scale investment in this infrastructure.

EPACT Will Lead to a Small Increase in the Use of Propane as a Transportation Fuel

EPACT is expected to lead to a small increase in the use of propane as a transportation fuel. After EPACT’s AFV acquisition requirements are accounted for, the resulting increase in transportation use will represent only 0.4 percent of the total consumption of propane in 2000 and 3.2 percent in 2010. As a result, consumption of propane as a transportation fuel will account for about 1.5 percent of the total propane used in the United States in 2000 and about 5.1 percent in 2010. The effects of EPACT specifically on the consumption of propane fuel by light-duty vehicles are summarized in table 2. Some industry and DOE officials told us that although propane has certain attributes that could make it the alternative fuel of choice, such as permitting a longer driving range than compressed natural gas, the barriers cited previously—the high relative cost of AFVs, the low price of gasoline, and the inadequate infrastructure for refueling—still inhibit its use. In addition, it does not appear that the propane industry will strongly promote the fuel as an alternative vehicle fuel. Propane industry officials and others told us that the industry lacks the internal cohesion necessary to promote the use of propane as a transportation fuel. Some officials pointed out that the history of the industry has been one of small-scale suppliers that primarily serve the heating and other household needs of residential customers. These suppliers do not necessarily want to see propane become a major transportation fuel for fear that it would erode their business.
By comparison, DOE and industry officials and other experts told us that the natural gas industry is aggressively promoting compressed natural gas as an alternative transportation fuel. A manufacturer of AFVs also told us that the natural gas industry has been much more aggressive in pushing for the increased manufacture of vehicles that run on its fuel than the propane industry has.

EPACT Will Have Very Little Impact on the Supply and Price of Propane

Because EPACT is not expected to cause any significant increase in propane demand, it will have very little impact on the supply and price of propane. Any effects of EPACT on the supply and price of propane are indirect; that is, they would occur only in response to any higher demand its mandates might cause. Because EPACT will not cause much change in the demand for propane, little change in the supply and price of propane can be expected to result from EPACT’s mandates. We estimate that the additional U.S. production of propane that will result from EPACT will be about 7,000 barrels per day in 2000 and 85,000 barrels per day in 2010. According to EIA’s modeling results, these levels of domestic production will satisfy most of the additional propane demand or consumption caused by EPACT, while imports will satisfy the rest. Some propane users have expressed concern that EPACT could cause imports to become a greater proportion of the U.S. propane supply. We found evidence that U.S. propane imports will increase, but not because of EPACT. For instance, Purvin and Gertz, Inc., a consulting firm specializing in analyses of the oil and gas industry, projects that U.S. imports of propane will rise from about 179,985 barrels per day in 1997 to about 264,400 barrels in 2000 and almost triple by 2010 to about 483,200 barrels (see fig. 2). As a percentage of the total U.S. propane supply, imports will rise from about 14.5 percent in 1997 to about 19 percent in 2000 and 28 percent in 2010, according to the firm’s estimates.
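The import volumes and percentage shares above imply a total U.S. propane supply, which the report does not state directly. The following sketch is a rough consistency check only, under the assumption that the barrel figures and the percentage shares refer to the same supply base:

```python
# Rough consistency check of the Purvin and Gertz figures cited above.
# Assumption (not stated in the report): the import volumes and the
# import shares refer to the same total-supply base.
imports_bpd = {1997: 179_985, 2000: 264_400, 2010: 483_200}
import_share = {1997: 0.145, 2000: 0.19, 2010: 0.28}

# Implied total U.S. propane supply, in barrels per day.
implied_supply = {yr: imports_bpd[yr] / import_share[yr] for yr in imports_bpd}

for yr, supply in sorted(implied_supply.items()):
    print(f"{yr}: implied total supply ~ {supply:,.0f} barrels per day")
```

Under that assumption, total supply works out to roughly 1.2 to 1.7 million barrels per day over the period, so imports grow faster than total supply, which is consistent with the rising shares the firm projects.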
However, Purvin and Gertz officials told us that these increases in imports are unrelated to EPACT. The firm’s projections of both supply and demand for propane do not account for EPACT because the firm believes EPACT’s effects on the propane market will be inconsequential. We estimate, based on EIA’s modeling, that the effect of EPACT on the overall price of propane will be negligible: an increase of 0.17 cent per gallon in 2000 and 3.28 cents per gallon in 2010 (fig. 3 shows the average price of propane, in 1996 dollars per million Btu, with and without EPACT’s effects, from 1997 through 2010). Table 3 presents the estimated price impacts of EPACT on three categories of U.S. consumers. The estimated price increase for residential consumers will be only 0.10 cent per gallon in 2000 and 1.50 cents per gallon in 2010; the increase for industrial consumers is estimated to be 0.10 cent per gallon in 2000 and 1.70 cents per gallon in 2010; and the increase for transportation consumers is estimated at 0.43 cent per gallon in 2000 and 2.33 cents in 2010. In addition, Purvin and Gertz officials believe that the U.S. market will become the destination for a large share of the increased propane production from natural gas fields being discovered in many parts of the world, which could lead to lower prices.

EPACT Will Have No Discernible Adverse Effect on Existing Propane Consumers

Existing consumers of propane, such as residential and petrochemical users, are not expected to be adversely affected by EPACT because the act is expected to have a negligible effect on propane prices. EPACT would cause a measurable adverse impact on existing propane consumers if it significantly increased the demand for propane as a transportation fuel and drove up its price. But, as previously discussed, demand and price are not likely to rise significantly given the limited effects of EPACT’s mandates for fleets.
DOE, EIA, and most of the industry officials and experts we talked with believe that the price effects of EPACT will be negligible and that propane consumption by nontransportation sectors will not be affected by the act. Figure 4 presents projections, based on EIA's modeling, of the consumption of propane by various U.S. end-use sectors, including the transportation sector, after factoring in the impact of EPACT. These projections are in contrast to those of a previous DOE study that investigated what might happen if alternative fuels, AFVs, and a refueling infrastructure were available on a widespread basis, a hypothetical scenario different from the questions addressed in our report. Appendix I provides additional detail on that study, which indicated a significant impact on the petrochemical industry if EPACT's replacement goal were met in 2010. According to industry officials and experts, the petrochemical sector is likely to reduce its propane consumption if the price of propane rises because a significant portion of the propane that sector uses can be replaced with other feedstocks, such as naphtha and ethane. However, these officials also told us that switching feedstocks would lead to increases in the prices of the substitutes, resulting in higher production costs for industrial consumers. These industry officials believe that if EPACT's replacement goal is met, the consequences for their industry would likely be severe.

Agency Comments and Our Evaluation

We provided a draft of this report to DOE for review and comment. DOE agreed with our findings and provided some technical clarifications where appropriate.

Scope and Methodology

To determine whether and how including propane as an alternative fuel under EPACT will affect the supply, price, and existing consumers of propane, we asked EIA to use its National Energy Modeling System to estimate the likelihood of achieving EPACT's fuel replacement goal and to estimate the potential impact. (See app.
II for more explanation of the modeling of EPACT's effects.) We asked EIA to include in the analysis all of the mandates in EPACT that require federal and state governments as well as fuel providers to procure AFVs for their fleets, including the mandates for private and municipal fleets that have not gone into effect. We also interviewed officials of the propane and oil industries, manufacturers of AFVs, companies that convert conventional vehicles to AFVs, and the alternative fuels infrastructure industry, as well as relevant DOE and EIA officials, for their perspectives on the likely effects of EPACT. We also reviewed EPACT itself as well as studies by DOE and others that deal with EPACT and alternative fuels. We conducted our review from February through September 1998 in accordance with generally accepted government auditing standards. We will send copies of this report to interested congressional committees and the Secretary of Energy. We will also make copies available to others upon request. Please call me at (202) 512-3841 if you have any questions. Major contributors to this report are listed in appendix III.

Results From Previous DOE Studies

The Department of Energy (DOE) has conducted several studies of the Energy Policy Act of 1992 (EPACT). The scope of these studies is broader than the objectives of our report in that they went beyond the mandated fleet measures we analyzed. For instance, these studies estimated the impacts of actually reaching the EPACT replacement goal, as well as the impacts of other policy initiatives. This appendix summarizes some of the information three of these studies provide on the barriers to the widespread use of alternative fuels and on implications for the future use of alternative fuels.
Replacement of Gasoline With Alternative Fuels Is Likely to Fall Short of EPACT's Goal

"Major transitional impediments" will have to be overcome to reach EPACT's goal of replacing 10 percent of the conventional fuel consumed by light-duty vehicles with alternative fuels by 2000 and replacing 30 percent by 2010, according to a 1997 DOE study. To meet the 2000 goal, 35 to 40 percent of total 1999 sales of new light-duty vehicles would have to be alternative-fueled vehicles, according to that study. To meet the 2010 goal, sales of alternative-fueled vehicles would have to stay in the range of 30 to 38 percent of all new light-duty vehicles sold. A 1998 draft report by DOE's Oak Ridge National Laboratory, however, found that with the implementation of EPACT's fleet requirements, including private fleet mandates, alternative-fueled vehicles would make up less than 1 percent of new vehicle sales in 2000 and only 4 percent by 2010. This study concluded that it was unlikely the 2000 goal would be met, or that the 2010 goal would be met without significant new policy initiatives. The study described the following transitional barriers to the greater use of alternative fuels: the lack of scale economies in the production of alternative fuels and alternative-fueled vehicles; the consumer costs of the low retail availability of alternative fuels and the limited model diversity of alternative-fueled vehicles; and the slow turnover of durable capital equipment and vehicles already on the road. Table I.1 summarizes the Oak Ridge study's estimates of what portion of petroleum consumption would be replaced by alternative fuels in 2010 under six different sets of assumptions. In the base case, DOE assumed only the existing fleet mandates; in the late rule case, DOE assumed that a local government and private fleet mandate was added. In either of those cases, alternative fuels would constitute less than 1 percent of light-duty vehicle motor fuel sales in 2010.
The assumptions made in the next three cases produced estimated replacement levels of between 14 and 22 percent. In the no barriers case, DOE assumed that vehicles and fuels would be produced at large-scale costs and that all fuels would be widely available at retail locations. In the greenhouse gas case, DOE assumed fuel tax reductions in proportion to the reductions in greenhouse gas emissions from a baseline level of emissions for gasoline. In the tax credit case, DOE assumed that an ethanol tax credit would continue through 2010. The last of the six cases was the only one in which EPACT's goal of 30-percent replacement was achieved. In this case, called the fuel sales mandate case, DOE assumed that the Congress would require that retail sales of alternative fuels meet EPACT's goal, but the study did not describe how such a result would be mandated.

The Relative Contribution of Propane in Meeting EPACT's Goal Is Unclear

Although the Oak Ridge study suggests only a minimal role for propane in meeting EPACT's goal, a report that DOE issued in January 1996 on the feasibility of producing sufficient replacement fuels to meet EPACT's 10- and 30-percent goals indicated a potentially major role for propane. In that report, DOE examined two scenarios: the low oil price scenario, which assumed that the Organization of Petroleum Exporting Countries (OPEC) was not able to exert monopoly control over crude-oil pricing in 2010, and the reference oil price scenario, which assumed that OPEC exerted partial monopoly power over the pricing of crude oil. Under the low oil price scenario, the world oil price would be $20.60 per barrel; the U.S. price, $21.60. Under the reference oil price scenario, the world oil price would be $25.82 per barrel; the U.S. price, $26.74 per barrel. For each scenario, DOE examined various possible cases, including the following four.
In the benchmark case, DOE assumed that all fuels would be taxed at the same dollar-per-Btu rate, specifically the gasoline tax rate in 1994. It assumed that no well-developed infrastructure for alternative transportation fuels would exist and that alternative-fueled vehicles would be in use by organizations covered by EPACT's fleet requirements and state mandates, while households would continue to rely on gasoline-fueled vehicles. In the unconstrained case, DOE assumed that a well-developed infrastructure for alternative transportation fuels and vehicles would exist in a long-run situation. In the limited imports case, DOE assumed that at least one-half of the alternative fuels used would be produced within the North American Free Trade Agreement countries. In the letter-of-the-law case, DOE assumed not only limited imports but also that overall petroleum replacement would equal 30 percent. Table I.2 summarizes the results from this study for light-duty vehicle fuel use under the low oil price scenario. Somewhat higher replacement percentages occurred under the unconstrained and limited imports cases in the reference oil price scenario. Propane consumption was higher for the unconstrained and limited imports cases but slightly lower for the letter-of-the-law case under the reference oil price scenario. The importance of propane as an alternative fuel differed between the 1996 study and the Oak Ridge study because the 1996 study used lower cost figures for liquified petroleum gas. According to the authors of the Oak Ridge study, had they also used these lower costs, they would have reported a 28-percent displacement of gasoline by alternative fuels in 2010, versus the 14-percent figure they reported in the no barriers case. Of this 28-percent displacement, propane would have constituted about half. The 1996 study estimated that propane would account for 47 percent of the fuel displaced by alternative fuels.
In its 1996 study, DOE found, under its low oil price scenario, potentially significant impacts on the propane market once all transitional barriers to alternative fuels were overcome. As seen in table I.3, when the benchmark and unconstrained cases are compared, liquified petroleum gas consumption by motor vehicles rises from 0.042 million to 1.450 million barrels per day in 2010. At the same time, consumption by the petrochemical industry falls from 0.333 million barrels per day to 0. The remaining liquified petroleum gas consumption, categorized as "nonvehicle end use" (such as residential heating and cooking), decreases from 1.742 million to 1.710 million barrels per day. Under the benchmark case, the amount of propane supplied by refineries and gas processing plants was somewhat lower in the reference oil price scenario than in the low oil price scenario, whereas non-Canadian imports were higher under the reference oil price scenario. In both the reference and low oil price scenarios, the direction of change in values between the benchmark and the other cases was similar. Table I.4 summarizes the effect of these changes in consumption and supply on propane prices as estimated in the 1996 report. By contrast, the Oak Ridge study reported no increase in propane prices, except in the case of a fuel sales mandate, under which the liquified petroleum gas price rose less than 2 percent. Reformulated gasoline contains additional oxygen and burns more cleanly than conventional gasoline.
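The consumption shifts from table I.3 can be netted out with simple arithmetic. The Python sketch below, using only the figures quoted above (in million barrels per day), shows that the projected rise in vehicle use far outweighs the declines in the petrochemical and nonvehicle sectors.

```python
# Liquified petroleum gas consumption in 2010 (million barrels per day)
# under DOE's 1996 study, low oil price scenario, as quoted from table I.3.
benchmark     = {"vehicle": 0.042, "petrochemical": 0.333, "nonvehicle": 1.742}
unconstrained = {"vehicle": 1.450, "petrochemical": 0.000, "nonvehicle": 1.710}

# Change in each end use when moving from the benchmark case (transitional
# barriers intact) to the unconstrained case (barriers overcome).
changes = {use: unconstrained[use] - benchmark[use] for use in benchmark}
net_change = sum(changes.values())

for use, delta in changes.items():
    print(f"{use:>13}: {delta:+.3f}")
print(f"   net change: {net_change:+.3f} million barrels per day")
```

The net figure, roughly a 1.04-million-barrel-per-day increase in total consumption, is dominated by the vehicle sector's gain.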
In comparing the overall results of these two studies, the Oak Ridge authors noted that their results were "in marked contrast to DOE's 1996 long-run analysis, which concluded that if the necessary infrastructure for a mature alternative fuel and vehicle industry were present, then alternative fuels, as a group, appear likely to sustain a 30-percent market share under equilibrium conditions." The Oak Ridge report went on to state, "However, the modeling results here suggest that the necessary infrastructure may not evolve smoothly, and fuel and vehicle prices may not benefit from economies of scale in the absence of additional policies . . . ."

Estimating the Effect of EPACT's Fleet Mandates

This appendix describes the Energy Information Administration's (EIA) methodology for estimating the effect of EPACT's purchase mandates for alternative-fueled vehicles (AFV) on the replacement of petroleum fuels used by motor vehicles in the United States and on the demand, supply, price, and existing consumers of propane. As a starting point, we asked EIA to use its National Energy Modeling System (NEMS) to estimate the likelihood of achieving EPACT's goals of displacing 10 percent of the petroleum motor fuel used in the United States by 2000 and 30 percent by 2010 by implementing all the mandates placed on "covered" fleets (i.e., those bound by the EPACT mandates) to purchase an increasing percentage of AFVs. We also asked EIA to use NEMS to model the effect of including propane as an alternative motor fuel under EPACT on the demand, supply, price, and existing users of propane. EIA developed and maintains NEMS to forecast the effects of energy policies or programs and of changing world energy market conditions on the U.S. and world energy markets.
Estimating EPACT's Effect on Fuel Displacement

To estimate what percentage of petroleum motor fuel will be replaced by alternative fuels in the United States, EIA used the Transportation Demand Module of NEMS to model the effect of AFV acquisitions by the various covered fleets on the consumption of alternative fuels and calculated the percentage of displaced conventional fuel that this represents. EIA assumed that covered fleets will essentially meet their legislative requirements for AFVs by purchasing the minimum percentage mandated by law. According to EIA officials, the agency's analysis has found that the economics of AFV purchases do not justify exceeding the minimum percentage. EIA therefore estimated the impact of the fleet mandates by including in NEMS the minimum percentage required for each fleet category in each year specified. EIA used the reference case in its Annual Energy Outlook 1998 as a projection of the most likely future trends in energy markets and the U.S. economy. To estimate the impact of EPACT's AFV mandates on the demand, supply, and price of propane, as well as on existing consumers of propane, EIA performed a special run of NEMS (referred to as the GAO case) that removed propane-fueled vehicles from consideration while leaving everything else constant, including the assumption that the minimum required percentage of AFV purchases would be made in each fleet category. For each variable of interest, the results of the GAO case were then compared with the reference case in the Annual Energy Outlook 1998, and the difference was calculated to determine the estimated impact. For example, to estimate the impact of EPACT's mandates on propane use, the difference in propane use between the GAO case and the reference case was computed.

Detailed Assumptions and Other EPACT Provisions Incorporated in the Model

In using NEMS to model the effects of EPACT, EIA made the following assumptions concerning how the provisions of EPACT are incorporated in the model.
The fleet AFV purchases necessary to meet the EPACT regulations were derived from the mandates as they currently exist and from the Commercial Fleet Vehicle Module calculations. The federal AFV program went into effect in fiscal year 1993 but, generally, the mandates for state and alternative fuel provider fleets were delayed until 1997 because, according to a DOE official, DOE did not issue the rulemaking, as required by EPACT, early enough for the mandate to take effect in 1996. Specifically, it is assumed that each fleet category will meet its AFV mandate by purchasing the minimum percentage of AFVs required by EPACT for that fleet category. Table II.1 presents the percentages used by EIA's NEMS for AFV purchases for each fleet category and model year, and the mandates for municipal and private fleets still subject to rulemaking. Table II.2 presents the total projected AFV purchases by fleets. Although the mandates for private and municipal fleets (covered by section 507(g) of EPACT) are not yet in effect and are still subject to rulemaking by the Secretary of Energy, the effects of these mandates were included in the model in order to estimate the total effect of all the AFV acquisition provisions in the law. In the model, the private and municipal fleet mandates do not become effective until model year 2002, based on the schedule specified in the law. Only fleets of 50 or more vehicles were considered (in accordance with EPACT), and AFV purchases were categorized as cars or light trucks. Because EPACT covers only fleets in the metropolitan statistical areas with 1980 populations of more than 250,000, the model excluded 10 percent of all business and utility fleets and 37 percent of all government fleets. For other than federal fleets, EPACT covers fleets of 50 or more vehicles, of which at least 20 vehicles can be centrally fueled and are used primarily in metropolitan statistical areas with 1980 populations of more than 250,000.
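EIA's approach, as described in this appendix, has two mechanical steps: assume each covered fleet buys only the minimum mandated percentage of AFVs, and then take the estimated EPACT impact on any variable as the difference between the reference case and the GAO case. The Python sketch below illustrates both steps. The fleet acquisition counts, mandate percentages, and propane-use values are hypothetical placeholders; only the 10-percent and 37-percent fleet exclusions come from the text above, and none of the numbers are values from NEMS or from tables II.1 and II.2.

```python
# Step 1: minimum mandated AFV purchases per fleet category.
# Covered purchases = new acquisitions x (1 - share excluded as outside
# covered metropolitan areas) x minimum AFV purchase mandate.
# Acquisition counts and mandate percentages below are hypothetical;
# the exclusion shares (10% business/utility, 37% government) are from the text.
fleets = {
    # category: (annual new acquisitions, excluded share, minimum AFV mandate)
    "business":   (100_000, 0.10, 0.20),
    "government": ( 50_000, 0.37, 0.25),
}
afv_purchases = {cat: n * (1 - excluded) * mandate
                 for cat, (n, excluded, mandate) in fleets.items()}

# Step 2: estimated EPACT impact = reference case minus GAO case (the
# special NEMS run with propane-fueled vehicles removed). The propane-use
# values below (arbitrary units) are hypothetical placeholders.
reference_case = {2000: 1.935, 2010: 2.140}
gao_case       = {2000: 1.905, 2010: 2.035}
epact_impact = {yr: reference_case[yr] - gao_case[yr] for yr in reference_case}

print(afv_purchases)
print(epact_impact)
```

The subtraction in step 2 is the entire attribution method: whatever differs between the two runs is, by construction, the effect of including propane as an alternative fuel.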
Major Contributors to This Report

Resources, Community, and Economic Development Division, Seattle Field Office: Araceli Contreras Hutsell, Evaluator
Pursuant to a congressional request, GAO determined whether and how including propane as an alternative fuel under the Energy Policy Act of 1992 (EPACT) will affect existing propane consumers as well as the supply and price of propane, focusing on: (1) whether EPACT's goal of replacing at least 10 percent of the conventional fuel used in light-duty vehicles by 2000 and at least 30 percent by 2010 with replacement fuels will be achieved; (2) the extent to which the use of propane as a motor fuel will increase as a result of EPACT; (3) the impact the use of propane as a motor fuel under EPACT will have on the supply and price of propane; and (4) the impact the use of propane as a motor fuel under EPACT will have on existing users of propane. GAO noted that: (1) it is unlikely that the goal of EPACT of replacing at least 10 percent of the conventional fuel used by light-duty vehicles in the United States by 2000 and at least 30 percent by 2010 with replacement fuels will be achieved; (2) GAO estimates, based on the Energy Information Administration's (EIA) modeling, that alternative fuels will account for less than 1 percent in 2000 and about 3.4 percent in 2010 of the total motor fuel projected to be consumed by light-duty vehicles; (3) the act's focus on the acquisition of alternative-fueled vehicles rather than on the use of alternative fuels, high alternative-fueled vehicle costs, low gasoline prices, and an inadequate refueling infrastructure for these vehicles are factors hindering the increased use of alternative fuels for transportation; (4) EPACT can be expected to lead to a small increase in the use of propane as an alternative fuel in the transportation sector; (5) GAO estimates that after the vehicle acquisition mandates in the act are factored in, consumption of propane as a motor fuel will account for about 1.5 percent of the total propane used in the United States in 2000 and about 5.1 percent in 2010; (6) the effects of EPACT on the supply and price of propane
will be minimal; (7) incremental domestic production of propane as a result of the act will be about 7,000 barrels per day in 2000 and 85,000 in 2010; (8) according to the EIA, these levels of domestic production will satisfy most of the estimated additional propane demand caused by the act; (9) GAO estimates that the increase in the overall price of propane attributable to the act will be a negligible 0.17 cent per gallon in 2000 and 3.28 cents per gallon in 2010; (10) similarly, EPACT will have little impact on the existing consumers of propane because the price increases will be so small; (11) GAO estimates that propane prices paid by residential and industrial consumers will increase by an average of just 0.10 cent per gallon in 2000, while the prices paid by transportation consumers will increase by about 0.43 cent per gallon; and (12) GAO projects that in 2010, price increases due to the act will be, on average, 1.50 cents per gallon for the residential sector, 1.70 cents per gallon for the industrial sector, and 2.33 cents per gallon for the transportation sector.
Background

Records are the foundation of open government, supporting the principles of transparency, participation, and collaboration. Well-managed records can be used to assess the impact of programs, improve business processes, and share knowledge across the government. Effective records management is also an important tool for efficient government operations. Without adequate and readily accessible documentation, agencies may not have access to important operational information needed to make decisions and carry out their missions.

Directive Established Federal Records Management Requirements

In response to the November 2011 presidential memorandum to begin an executive branch-wide effort to reform records management policies and develop a framework for the management of electronic government records, the Director of OMB and the Archivist of the United States jointly issued a directive to heads of federal departments and agencies. The directive was aimed at creating a robust records management framework for electronic records that complies with statutes and regulations to achieve the benefits outlined in the presidential memorandum. It required agencies, to the fullest extent possible, to eliminate paper and use electronic recordkeeping. In particular, the directive set forth two goals that federal agencies, including OMB and NARA, are to work toward: require electronic recordkeeping to ensure transparency, efficiency, and accountability; and demonstrate compliance with federal records management statutes and regulations. To meet the two goals, the directive identified 10 requirements that agencies had to address by established deadlines. As shown in table 1, seven of the requirements had deadlines that ranged from November 15, 2012, to December 31, 2014. The directive also required NARA, OMB, and OPM to take 13 actions to assist agencies with meeting goal 2 of the directive. Table 2 describes the required actions and their due dates.
Agencies Took Actions in Response to the Directive but Did Not Fully Meet All Requirements

The 24 federal agencies took actions toward implementing each of the seven requirements due in November 2012, December 2013, and December 2014. These actions included designating and reaffirming senior agency officials at the appropriate level to oversee agencies' records management programs, developing and implementing plans to manage permanent electronic records, reporting progress in managing permanent and temporary e-mail in an electronic format, identifying 30-year-old or older permanent records for transfer, identifying unscheduled records, obtaining the NARA federal records management training certificate, and developing records management training. However, certain requirements were not fully met by 5 of the agencies because these agencies were either still working on addressing a requirement or did not view it as being mandatory. Until agencies fully implement the directive's requirements, the federal government may be hindered in its efforts to improve performance and promote openness and accountability through the reform of records management.

All Agencies Designated a Senior Agency Official to Oversee Records Management Activities, but Not All Designated the Official at the Appropriate Level

According to the directive, by November 15, 2012, and every year thereafter, each agency is required to name or reaffirm the Senior Agency Official who is responsible for coordinating with the agency records officer and other appropriate officials to ensure the agency's compliance with records management statutes and regulations. The Senior Agency Official should hold a position at the assistant secretary level or its equivalent. Further, according to the directive, this official should be empowered to make adjustments to agency practices, personnel, and funding, as may be necessary, to ensure compliance and support the business needs of the department or agency.
All 24 agencies had designated a Senior Agency Official to oversee records management and had subsequently reaffirmed or named a new official. Among these agencies, 22 had designated a Senior Agency Official at the assistant secretary level or its equivalent and had given the official responsibilities for overseeing records management, including being empowered to make adjustments to agency practices, personnel, and funding, as required. Two agencies, OPM and the Department of Veterans Affairs, had not designated their officials at the appropriate level. Further, at the Department of Veterans Affairs, the official was not always reaffirmed in accordance with the directive. Additionally, the department had not assigned its Senior Agency Official the responsibilities for ensuring records management compliance in the manner called for in the directive. Specifically: Within OPM, the Senior Agency Official was not at the assistant secretary level or its equivalent. Rather, the position was delegated by the Chief Information Officer to the Chief of Records Management and Data Policy, within the Office of the Chief Information Officer. The Chief Information Officer did not view the Senior Agency Official designation at the assistant secretary level or its equivalent as mandatory and thus did not assign the official at that level. The Senior Agency Official, however, said the position had the full responsibility, as stated in the directive, for ensuring that the agency's records program complies with all records management statutes and regulations. Nevertheless, while OPM's Chief Information Officer did not consider the designation at the assistant secretary level or its equivalent to be mandatory, NARA records management officials stated that it is. At the Department of Veterans Affairs, the Senior Agency Official was named in 2012 and reaffirmed in 2013.
However, the official was not reaffirmed in 2014, contrary to the directive's requirement to reaffirm the Senior Agency Official annually. According to records management officials, the department regarded the requirement as not applicable when the Senior Agency Official did not change, but it subsequently reaffirmed the official in February 2015. These officials added that the current Senior Agency Official position is held by an Associate Deputy Assistant Secretary who is instrumental in making recommendations and follow-up justifications to ensure compliance with the directive. However, the officials acknowledged that the Senior Agency Official does not have the authority to make decisions about agency practices, personnel, and funding to ensure compliance. By not designating the Senior Agency Official at the level stated in the directive, OPM has not demonstrated its commitment to ensuring that the official it assigns to oversee compliance with records management statutes and regulations is consistent with the directive requirement. Further, by not designating a Senior Agency Official with the authority to make necessary decisions about agency practices, personnel, and funding critical to its electronic records management, the Department of Veterans Affairs has not demonstrated that ensuring compliance with the directive, in support of the department's business needs, is a priority.

Agencies Have Taken Actions to Manage and Transfer Temporary and Permanent Electronic Records, but More Action Is Needed

The managing government records directive established four requirements that agencies were to complete by December 31, 2013.
Specifically, agencies were to develop and begin to implement plans to manage all permanent records in an electronic format; report to NARA annually the status of the agency's progress in managing both permanent and temporary e-mail records in an electronic format; ensure that permanent records that have been in existence for more than 30 years are identified for transfer and reported to NARA; and coordinate with NARA to identify all unscheduled records, including records stored at NARA's and the agencies' records storage facilities that have not yet been properly scheduled. As shown in table 3, the majority of the 24 agencies took actions to implement the directive requirements. The directive required each agency to develop and begin to implement plans to manage all permanent records in an electronic format. To assist agencies in meeting this requirement, NARA developed a Senior Agency Official report template. In using the template, NARA requested that agencies report on a number of specific areas, to include details on how permanent electronic records are currently captured, retained, searched, and retrieved; plans to digitize permanent records currently in hard-copy format or plans to manage all permanent electronic records in electronic format, including how the plans will be implemented; and challenges the agency faced in achieving the requirement of managing all permanent electronic records in an electronic format. All but 1 of the 24 agencies described their efforts to address these areas in the Senior Agency Official reports that they submitted to NARA. For example, 1 agency stated that its permanent records were being captured in both electronic and paper format and that permanent records were retained in agency shared drives.
Another agency stated that electronic records capabilities were rolled out to its components to capture, retain, search, and retrieve the agency's permanent electronic records, while another stated that its components capture, retain, search, and retrieve permanent electronic records in a variety of ways depending on their unique missions, business processes, and available technologies. The National Science Foundation did not submit a Senior Agency Official report and did not provide information to NARA on how it intends to manage permanent records electronically. According to National Science Foundation records management officials, the agency is in the process of formalizing plans to manage permanent electronic records in an electronic format and intends to complete the plan in fiscal year 2015. However, the officials did not provide a date as to when the agency intends to report its plans to NARA, as required. Until the National Science Foundation completes and reports on its plans, it will not be positioned to provide NARA with required information on how it intends to manage permanent electronic records, or to receive feedback from NARA that could help ensure the effectiveness of its approach.

Most Agencies Reported Progress in Managing Permanent and Temporary E-mail Records

The directive required that each agency report to NARA, on an annual basis, regarding the status of its progress to manage both permanent and temporary e-mail records in an electronic format. Toward this end, 23 agencies met this reporting requirement. The agencies reported their progress through written responses in their Senior Agency Official reports. (Within the report template, a section was designated for the agency to describe its progress in managing both permanent and temporary e-mail records in an electronic format.)
The 23 agencies' written responses described how their e-mail records were currently captured, retained, searched, and retrieved, and how they identified temporary and permanent e-mail records. As previously discussed, the National Science Foundation did not submit a Senior Agency Official report to NARA. In this regard, the agency's records management officials stated that the management of permanent and temporary e-mail records was reviewed internally in May 2014 and that they were looking into whether a current agency system could be used to convert records into useable record types. However, no date was given by the officials as to when the required review for permanent and temporary e-mail records would be completed; nor did the agency provide a date as to when it will report to NARA, as required. By not reporting on its progress toward managing permanent and temporary e-mail records in an electronic format, the National Science Foundation has not taken an important step toward ensuring that NARA is aware of the agency's ability to retain e-mail records in an electronic system that supports records management and litigation requirements, including the capability to identify, retrieve, and retain the records for as long as they are needed. Further, the agency risks not receiving feedback from NARA that could help ensure it is prepared to retain e-mail records in an electronic system, as envisioned.

Most Agencies Identified and Reported on the Transfer of Permanent Records to NARA

The directive required agencies to ensure that permanent records that have been in existence for more than 30 years are identified for transfer and reported to NARA. In accordance with that requirement, the majority of the agencies identified for transfer and reported on their permanent records that were in existence for 30 years or more.
Specifically, 21 of 24 agencies submitted to NARA, as part of their annual records management self-assessment reports, their lists of permanent records, or reported that there were no permanent 30-year-old records in their possession. One agency, the National Science Foundation, did not report to NARA on its possession of permanent 30-year-old records. Records management officials at the National Science Foundation stated that the agency did not meet the reporting requirement because it did not complete its process of validating the accuracy of records that it had identified as potentially being 30 years old or older until the reporting deadline had passed. According to these officials, the agency completed this process in December 2014 and determined that there were no 30-year-old or older records in existence within the agency. The officials stated that, because the agency had no such records in its possession, the agency did not view reporting to NARA as a requirement. However, reporting that it had no permanent records in existence for 30 years or more would be a practice that is consistent with the majority of the agencies' efforts to inform NARA regarding the state of these records and would demonstrate the National Science Foundation's adherence to the directive. Two other agencies—the General Services Administration and the Department of Transportation—had not fully addressed this requirement because they had not identified and reported on permanent 30-year-old records stored at either NARA's federal records centers or the agency's records storage facilities. According to General Services Administration records management officials, permanent records stored at NARA's federal records centers were identified, but permanent records stored at agency records storage facilities had not been identified.
The officials stated that the agency plans to finalize, and report to NARA on, the identification of these records as part of its next agency-wide records inventory, which is supposed to occur in the summer and fall of 2017. According to Department of Transportation records management officials, the department had met the requirement for all but 3 of its 10 components. In particular, the officials stated that 1 component had identified 30-year-old permanent records during its 2012 records inventory and, as of March 2015, was working with NARA to transfer these records by May 2015. The officials also stated that another component reported to NARA in January 2014 that it did not have permanent records that were in existence for more than 30 years. However, this component subsequently identified one 30-year-old permanent record in June 2014, and the department plans to report and transfer this record to NARA by the end of fiscal year 2015. The officials stated that the third remaining component had not completed its records inventory, but as of March 2015, had not identified any permanent 30-year-old records in its possession. Department of Transportation officials stated that they plan to report this information to NARA once their inventory is completed. By not finalizing its identification of records stored at the agency's records storage facility until approximately 4 years beyond the date specified in the directive, the General Services Administration delays its ability to report the status of, and transfer to NARA, its records that have been in existence for 30 years or more. Similarly, until the Department of Transportation ensures that its component completes the identification of permanent 30-year-old records in its possession, it also limits its ability to report this information to NARA, as required.
Most Agencies Identified Unscheduled Records

The directive required each agency to coordinate with NARA to identify all unscheduled records, including all records stored at NARA and at agencies' records storage facilities that have not yet been properly scheduled. We previously found that this is an essential step since NARA considers unscheduled records an important indicator of the risk of unauthorized destruction of records. Among the 24 agencies, 20 had either identified unscheduled records and reported their progress in identifying these records to NARA, or had reported that they did not have any known unscheduled records by the reporting deadline. In particular, Senior Agency Officials and records officers for these agencies had either (1) worked in conjunction with NARA staff and identified their unscheduled records, (2) independently identified the unscheduled records, or (3) reported that there were no unscheduled records in their possession. Three agencies—the Departments of Commerce and Transportation and the General Services Administration—did not complete the identification of their unscheduled records by the reporting deadline, although they subsequently did so for all or most of their components. The Department of Commerce reported that it did not fully meet the requirement for identifying unscheduled records until the reporting deadline had passed. In particular, Commerce records management officials reported that the department completed the process of identifying the unscheduled records in September 2014. Department of Transportation records management officials stated that the department identified unscheduled records for 1 remaining component (out of 10) in December 2013 and reported to NARA on those unscheduled records in January 2014.
According to General Services Administration records management officials, the agency did not identify unscheduled records stored at agency records storage facilities until November 2014, following an agency-wide records inventory in that same month. Lastly, the National Science Foundation had not completed its identification of, or reported on, any portion of its unscheduled records. In July 2014, agency records management officials noted that they had identified the unscheduled records, but a preliminary internal inspection of the records had revealed administrative errors. Subsequently, the officials stated that a review of the unscheduled records list was under way in September 2014. However, as of March 2015, the officials stated that the review was still ongoing and they could not provide a date for when it would be completed. By not completing the identification of unscheduled records, the National Science Foundation increases the risk that its records could be destroyed without NARA's awareness and approval.

Most Agencies Obtained Certifications and Established Records Management Training Programs

The records management directive established two requirements that were to be completed by December 31, 2014. The first was that each agency's designated agency records officer must hold the NARA certificate of Federal Records Management Training and that new records officers must acquire the certification within 1 year of assuming the position of agency records officer. The second requirement was that each agency was to establish its own method to inform all employees of the agency's records management responsibilities and develop suitable records management training for appropriate staff. On December 4, 2013, NARA issued a bulletin to the heads of federal agencies providing further guidance on agency records officer training requirements as stated in the directive. The requirement applied to all formally appointed federal agency records officers.
In the bulletin, NARA stated that it recognized that some designated agency records officers had years of experience and accreditation in the records management profession. In those cases, it agreed to grant the officer an exemption from obtaining the certificate of Federal Records Management Training. If an exemption was approved, no further action would be required to meet the directive training requirement. To receive an exemption from NARA, designated agency records officers must meet one of three criteria: (1) have 3 years of experience as a designated agency records officer and an Institute of Certified Records Managers certification, (2) have 3 years of experience as a designated agency records officer and an Academy of Certified Archivists certification, or (3) have 7 years of experience as a designated agency records officer at one or more federal agencies. The majority of the federal agencies (22 of 24) either fully met the first requirement that each designated agency records officer hold the NARA certificate for Federal Records Management Training, were granted an exemption from obtaining the certificate, or were appointed in 2014 and have until a date in 2015 to complete the certification. Specifically, 17 agencies' designated agency records officers had obtained the NARA certificate for federal records management; 4 agencies had received NARA exemptions for some or all of their designated agency records officers; and 1 agency had recently appointed its designated agency records officer, who has until September 2015 to complete the Federal Records Management Training. Among the remaining two agencies—the Departments of Commerce and Defense—at least 1 designated agency records officer had not obtained the NARA training certificate or been granted a NARA exemption by the required deadline. Each of the agencies' officials stated that their records officers were in the process of completing classes or obtaining the exemption.
Specifically, at the Department of Commerce, 12 of 17 designated agency records officers had obtained the required NARA training certificate or received an exemption. According to the department's records management officials, the remaining 5 designated agency records officers plan to complete their training by the end of fiscal year 2015. For the Department of Defense, 23 of 24 designated agency records officers had obtained the required NARA training certificate or received an exemption. The department's records management officials stated that the remaining designated agency records officer expects to complete training by August 2015. With regard to the second requirement, all 24 agencies had established a method to inform employees of their records management responsibilities, as outlined in federal laws and policies. Specifically, these agencies had established a method to inform employees of their records management responsibilities either through agency-wide policy, departmental regulation, or through an agency-wide e-mail. However, two agencies had not yet completed the development of their agency records management training. Specifically, the Departments of Commerce and Energy were in the process of developing training for their staff, and officials from these agencies said they plan to complete the training by June 2015.

OPM, OMB, and NARA Took Actions to Oversee Agencies' Implementation of the Directive, but Not All Specified Actions Were Completed

OPM, OMB, and NARA had taken steps toward implementing 11 of the 13 actions specified in the directive as their responsibility, but selected requirements had not been fully addressed by the specified deadlines. For example, OPM had finalized an occupational series to elevate records management roles, responsibilities, and skill sets for agency records professionals.
In addition, according to an official in OMB's Office of Information and Regulatory Affairs, that agency was still in the process of updating its Circular A-130 to include records management requirements for agencies that are moving to cloud-based services or storage solutions, with the updated circular expected to be released by the end of calendar year 2015. Further, NARA had met with Senior Agency Officials and produced a plan to move agencies toward greater automation of records management. Moreover, NARA, in cooperation with the Federal Records Council, had worked with community of interest groups to identify tools that support electronic records management. However, it had not included metadata requirements in its guidance, as required. Until NARA completes the actions specified in the directive, agencies may not have the guidance needed to help improve the efficiency and effectiveness of records management across the federal government, as envisioned by the directive.

OPM Established the Records and Information Management Occupational Series

The managing government records directive required OPM to establish, by December 31, 2013, a formal records management occupational series. In doing so, OPM was to elevate records management roles, responsibilities, and skill sets for agency records officers and other records professionals. In response to the directive, OPM created a draft position classification document for the Records and Information Management Series, 0308, by the December 2013 deadline, but did not finalize it until March 2015. The document was created to establish the records management series and classify positions within this series. OPM disseminated the document to all federal agencies and obtained comments. Specifically, it issued a memorandum on December 27, 2013, to announce the release of the draft position classification document for the records management work. Federal agencies were asked to provide their comments by February 7, 2014.
All 24 agencies, including NARA, provided comments to OPM regarding the records management occupational series. Among the comments, agencies suggested that OPM incorporate changes to the position series and title, and update the series to include the duties of federal records managers at senior levels and the working relationship between records management staff and senior officials, such as the Senior Agency Official, to ensure that agencies have efficient and effective records management programs. Other comments suggested that personnel in this series should be considered subject matter experts, and indicated that the draft series did not acknowledge the records management position as a full-time position with full-time responsibilities. The comments also stated that OPM should revise the series and consider that some positions would be a combination of records management, knowledge and information management, and information compliance roles, including data protection and freedom of information. According to the Records and Information Management Occupational Series document, OPM summarized what it described as "major" agency comments on the occupational information, occupational title, and grading criteria, along with OPM's response to these comments, in an appendix of the final series document. According to the document, OPM revised the occupational information to include language that addressed the modernization of records management, electronic records, and training, among other things. OPM also changed the occupational title to "Records and Information Management Specialist," based on agency comments that it received on the title. Additionally, OPM revised the grading criteria language to align with the language of other recently issued series. By establishing a formal records management occupational series in March 2015, OPM took steps to elevate the roles, responsibilities, and skill sets for agency records officers and other records professionals.
OMB Has Not Yet Updated Its Policy for the Management of Federal Information Resources

According to the directive, OMB is to include in its next revision of Circular A-130 provisions for federal agencies to incorporate records management requirements when moving to cloud-based services or storage solutions. The directive did not establish a date by which this was to be accomplished. As of March 2015, OMB had not finalized its revisions to Circular A-130 to require agencies to incorporate records management requirements when moving to cloud-based services or storage solutions, as specified in the directive. Officials in OMB's Office of Information and Regulatory Affairs stated, however, that the requirement is expected to be included in the circular when it is finalized. In explaining the status of this initiative, an official stated that revisions to the circular began in 2012 and were distributed for interagency comments. Additional revisions to the circular continued in 2013 and, in 2014, the agency waited for the approval of legislation, such as the Federal Information Security Modernization Act of 2014, which requires OMB to amend or revise Circular A-130 to eliminate inefficient or wasteful reporting within 1 year. According to OMB officials, the agency expects to finalize and issue the revised OMB Circular A-130 in December 2015. If consistent with the directive, this planned revision of Circular A-130 should help agencies incorporate records management requirements when moving to cloud-based services or storage solutions.

NARA Acted on Nine Specific Requirements, but Work Remains

The directive included nine specific actions that NARA was to implement by the end of December 2012, December 2013, and December 2014. Many, but not all, of NARA's actions met the requirements of the directive.
By December 31, 2012, the Archivist of the United States was required to convene the first of periodic meetings of all Senior Agency Officials to discuss progress on (1) implementation of the directive, (2) agency federal records management responsibilities, and (3) partnerships for improving records management in the federal government. Additionally, by this date, NARA was to complete a review of its records management reporting requirements and produce a template for a single annual report that each Senior Agency Official was to send to the Chief Records Officer for the U.S. Government beginning on October 1, 2013. Toward this end, in November 2012, the Archivist held the first meeting with Senior Agency Officials, agency records officers, and NARA staff. According to documentation we reviewed, meeting topics addressed two of the required areas: (1) an overview of the implementation of the directive and (2) Senior Agency Officials’ responsibilities and duties. Subsequent meetings were also held with the Senior Agency Officials of various agencies from August 2013 to August 2014 that addressed the implementation of an e-mail management approach and strategies for meeting the goals of the directive, among other topics. Nevertheless, NARA records management officials acknowledged that the Senior Agency Official meetings did not include a discussion of partnerships for improving records management in the federal government, as required by the directive. According to the officials, the agency considered these meetings to be information-sharing discussions that would facilitate the exploration for future partnerships. The officials added that NARA planned to contact Senior Agency Officials and agencies when it dedicates more resources to address these partnerships. Also in December 2012, NARA completed a review of its records management reporting requirements. 
Among other things, this review examined instances in which agencies were required to report on their progress through the managing government records directive, agencies' records management self-assessments, and inventories that the agencies were to conduct of electronic recordkeeping systems. The review concluded that federal agencies were required to submit information to NARA for its use in measuring the state of federal records management in 11 different instances. These submissions were in addition to other information, such as plans and improvements that the agencies were making in response to the managing government records directive. NARA officials stated that, as a result of the review, NARA took steps to streamline agencies' reporting requirements and reduce the number of times that agencies submit information. For example, in 2013, it eliminated the requirement for federal agencies to report semi-annually on their electronic records inventories. Instead, in 2014, NARA began requiring agencies to report annually on their electronic records as part of their records management self-assessment submissions. Further, the officials stated that NARA had used the results of the review to create the required template that was to guide Senior Agency Officials in the development of their annual reports on the management of government records. NARA disseminated the template to federal agencies via an August 2013 memorandum. Subsequently, the agencies were to use the template to guide their descriptions of current and future plans to manage permanent electronic records, temporary and permanent e-mail records, and the use of cloud computing services. As discussed earlier, specific details of the template included questions about how agencies capture, retain, search, and digitize records. Also required were details on how the agencies intend to implement plans, as well as anticipated challenges to their management of permanent records electronically.
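The reporting areas that the template covered can be pictured with a small sketch. The class and field names below are illustrative assumptions for exposition only; the actual NARA template was a narrative document, not a data structure.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the areas a Senior Agency Official report covered.
# All names here are hypothetical, not NARA-defined fields.
@dataclass
class SeniorAgencyOfficialReport:
    agency: str
    permanent_records_handling: str = ""   # how records are captured, retained, searched, retrieved
    electronic_management_plans: str = ""  # digitization or electronic-management plans
    email_records_progress: str = ""       # permanent and temporary e-mail records
    cloud_initiatives: list = field(default_factory=list)
    challenges: list = field(default_factory=list)

report = SeniorAgencyOfficialReport(
    agency="Example Agency",
    permanent_records_handling="Captured electronically; retained in shared drives.",
    challenges=["Legacy paper records not yet digitized"],
)
print(report.agency)
```

Grouping the areas this way also suggests why responses varied so widely in practice: a free-text template leaves the expected level of detail to each agency.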
According to the directive, agencies were to submit their reports based on the template by December 31, 2013. NARA officials stated that their assessment of information provided by agencies in the Senior Agency Official report template had disclosed that responses varied in length and detail. Specifically, some agencies provided brief generalizations while others provided an abundance of information. We also found variations in the extent of Senior Agency Official template responses and supporting information. For example, in addition to providing descriptions of its progress being made toward specific directive goals and requirements in the Senior Agency Official template, the Nuclear Regulatory Commission submitted a preliminary plan that contained, among other things, the objective, scope of work, schedule, project team members, and project costs needed to modernize its information and records management program. This degree of specificity was not common in other agency submissions. NARA records management officials acknowledged that the original reporting template had lacked specificity regarding the level of detail that it required agencies to provide. The officials stated that they did not require agencies to provide items, such as project plans, and did not intend to evaluate the report submissions for the sufficiency of agency plans. Rather, the officials indicated that NARA had wanted to identify what agencies were doing well, so those methods could be shared with other agencies that were in the initial stages of planning. Further, according to the officials, NARA wanted to encourage agencies to begin planning how they would meet the final December 2019 requirement to manage all permanent electronic records in an electronic format in advance of the deadline. Moreover, it wanted agencies to consider the sequential steps as well as timing and resources needed to move toward electronic recordkeeping. 
NARA records management officials recognized the need for more information and stated that the next version of the Senior Agency Official reporting template, based on the data collected in 2013, is expected to seek information to be used as metrics to show what progress is being made across the government toward meeting the directive's goals. In particular, in September 2014 NARA revised the reporting template to collect information on, among other things, agency records officers' efforts in obtaining the federal records management certificate, and best practices applied and lessons learned on each agency's transition to electronic recordkeeping. According to the officials, NARA is committed to making the template instrumental to agencies, providing support to records management programs, and achieving the goals of the directive. By taking steps to ensure that agencies provide consistent and complete information regarding their efforts to manage permanent electronic records, NARA stands to have better awareness of agencies' readiness to meet the established deadline.

NARA Revised Transfer Guidance for Permanent Electronic Records, but Metadata Requirements Were Not Included

According to the records management directive, NARA was to complete and make available by December 31, 2013, revised guidance, including metadata requirements for agency transfer of permanent electronic records to NARA. The revised guidance was to include additional sustainable formats commonly used to meet agency business needs. Also, NARA was to update the guidance regularly, as required, to stay current with technology changes. In January 2014, NARA revised its transfer guidance for permanent electronic records and made the document available to the public on its website in the form of NARA Bulletin 2014-04, Revised Format Guidance for the Transfer of Permanent Electronic Records. The revised guidance covered categories not addressed in the previous guidance, such as digital audio and moving images.
Among its revisions, the bulletin applies to all electronic records that have been appraised and scheduled for permanent retention, specifies which file formats are acceptable when transferring permanent electronic records to NARA, identifies preferred and acceptable formats for each category of electronic file, and expands the number of suitable formats that NARA will accept for transfer, based on their sustainability. However, NARA's revised guidance did not include metadata requirements, as called for in the directive. The bulletin stated that NARA would develop metadata requirements for electronic records separately, although no date was given for when it intends to do so. Further, as an alternative, NARA included in the bulletin a list of other currently available guidance for electronic records that address metadata. For example, the bulletin refers to guidance for electronic pointers (such as metadata tags) to establish linkages and the capture and maintenance of required metadata. The bulletin also specifies that agencies must comply with existing requirements for documentation and metadata as described in existing federal regulations until new requirements for metadata for electronic records are published. NARA records management officials stated that the previous guidance for transferring permanent electronic records had identified metadata for a few record types, including digital photographs, geospatial records, and e-mail records. However, the development of new guidance on metadata requirements will be the first time that NARA has specified individual elements of metadata for all permanent electronic records. According to the officials, NARA anticipates that agencies will use the revised metadata guidance when implementing automated technologies for records management, and to address the creation, management, and eventual transfer of permanent electronic records to NARA.
Nevertheless, the officials acknowledged that NARA had not set a time frame for making the revised metadata guidance available to agencies. Until NARA establishes a time frame and, accordingly, takes steps to include metadata requirements in its revised guidance, agencies will remain unaware of all of the information they need to provide when transferring electronic records to NARA.

NARA Created New E-mail Guidance

The directive required NARA to issue new e-mail guidance by December 31, 2013. The guidance was to describe methods for managing, disposing of, and transferring e-mail. Accordingly, in August 2013, the agency released NARA Bulletin 2013-02, Guidance on a New Approach to Managing Email Records. The bulletin presented an e-mail management approach called Capstone. NARA records management officials described Capstone as an automated or manual method of categorizing and scheduling e-mail based on the work or positions of e-mail account owners. It is to be employed using various tools or systems and offers agencies a more simplified way to manage e-mail when compared to print and file systems or records management applications that require staff to file e-mail records individually. NARA records management officials anticipate that the Capstone approach will provide agencies with a feasible solution to e-mail records management challenges, especially as agencies consider cloud-based solutions. Further, according to these officials, the Capstone approach is expected to allow agencies to consider whether e-mails contain the required metadata elements at the time of transfer to NARA. For its part, NARA has supplemented Capstone with training materials and other related guidance and resources, and has made this information available to agencies on its website to assist them in evaluating or adapting Capstone features.
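As a rough illustration of the role-based scheduling idea behind Capstone, the sketch below assigns a disposition to an entire e-mail account based on the position of its owner. The account names, the designation of senior officials, the 7-year retention figure, and the return structure are all illustrative assumptions, not NARA requirements; actual retention periods come from each agency's approved records schedules.

```python
# Hypothetical sketch of Capstone-style, role-based e-mail scheduling.
# Accounts of designated senior officials are treated as permanent records;
# all other accounts are temporary with a fixed (illustrative) retention.
SENIOR_OFFICIAL_ACCOUNTS = {
    "administrator@agency.example",
    "deputy.administrator@agency.example",
}
TEMPORARY_RETENTION_YEARS = 7  # illustrative value, not a NARA figure

def schedule_email_account(address: str) -> dict:
    """Return the disposition applied to an entire e-mail account."""
    if address in SENIOR_OFFICIAL_ACCOUNTS:
        return {"disposition": "permanent", "transfer_to": "NARA"}
    return {"disposition": "temporary",
            "retain_years": TEMPORARY_RETENTION_YEARS}

print(schedule_email_account("administrator@agency.example"))
print(schedule_email_account("analyst@agency.example"))
```

The appeal of this design, as the report describes, is that disposition is decided once per account rather than message by message, which is what makes the approach amenable to automation.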
By providing agencies the Capstone bulletin and related information representing a simplified automated e-mail management methodology, NARA has taken steps to assist agencies in incorporating recordkeeping requirements into their business processes and in identifying the specific means by which they can fulfill their responsibilities under the Federal Records Act.

NARA Developed a Plan to Move Agencies toward the Increased Automation of Records Management

NARA was required to produce a comprehensive plan, in collaboration with the Federal Chief Information Officers Council, the Federal Records Council, private industry, and other stakeholders, that describes suitable approaches for the automated management of e-mail, social media, and other types of digital record content, including advanced search techniques. The plan was to detail expected outcomes and potential associated risks and be completed by December 31, 2013. Although not completed by the required deadline, NARA finalized and released a plan in September 2014 that was developed in consultation with the Federal Chief Information Officers Council, the Federal Records Council, private industry, and other stakeholders. The plan identified approaches for federal agencies to pursue when automating electronic records management, to include automated management of e-mail, social media, and other types of digital record content, as well as advanced search techniques. The plan discussed the outcomes, benefits, and risks of these approaches and described a framework that agencies may use to help meet the goals of the directive. It also listed ideas or activities intended to help NARA, agencies, and stakeholders achieve effective federal electronic records management.
NARA records management officials described the plan as a living document and stated that the community of private industry and federal councils intends to continue to revise it as more is learned about automation technologies and additional efforts are made to work toward easier and more consistent electronic information management. Moreover, the officials stated that NARA anticipates continuing to work with its stakeholders to identify milestones and tasks intended to, among other things, increase automation, reduce burden on end users, and achieve more consistent and affordable compliance with recordkeeping requirements. If effectively implemented, NARA’s plan could serve as an important tool to aid records management stakeholders’ awareness of recommended approaches for improving automated management of e-mail, social media, and other types of digital record content. NARA Created a Template for Reporting Cloud Initiatives The directive required NARA, by December 31, 2013, to incorporate into existing reporting requirements an annual agency update on new “cloud” initiatives, including a description of how each initiative meets Federal Records Act obligations and the goals outlined in the directive. For the initial report, agencies were to identify any existing uses of cloud services or storage, and the dates of implementation. The Senior Agency Official annual reporting template created by NARA for 2013 included reporting requirements for cloud initiatives. As discussed earlier, NARA disseminated the template to federal agencies via an August 2013 memorandum. NARA’s Feasibility Study Concluded That It Should Not Provide Government-Wide Storage and Management Services for Electronic Records The directive required NARA, by December 31, 2013, to evaluate the feasibility of establishing a secure cloud-based service to store and manage unclassified electronic records on behalf of agencies.
Further, the directive stated that this basic, shared service should adhere to NARA’s records management regulations and provide standards and tools to preserve records and make them accessible within their originating agency until NARA performs disposition. In response to this requirement, NARA conducted a study examining the technical feasibility and cost for it to establish a repository and system to store, manage, and dispose of electronic records on behalf of federal agencies. The study included an assessment of secure cloud-based services and the cloud-based data-at-rest model. For the data-at-rest model, the study presupposed an environment where records are managed within the same clouds as agencies’ active business and administrative records. Also, these same clouds would be used for access and preservation of records. According to the study, disposition rules would then be applied where the records are stored because the data sets would be expected to continue to grow in size, thus becoming impractical to physically move them from repository to repository. Additionally, the feasibility study determined that “data at rest” would require procedures, tools, and a processing environment that allows for archival records to be accessioned, preserved, and made publicly available without being physically transferred from their initial host environments. The report anticipated that these types of issues could be effectively managed with the assistance of external service providers. NARA also considered cost factors in conducting the feasibility study and concluded that it should not serve as a direct service provider in assisting agencies with the storage and management of electronic records. 
Specifically, the study determined that costs were not practical for a secure cloud-based service, given the requirements of a large user base, the investment required to establish a cloud-based repository, and an infrastructure capable of managing large volumes of agency-owned records. Consequently, the study concluded that a more sustainable approach to improving electronic recordkeeping may be to pursue alternative service models in which NARA does not store and manage electronic records on behalf of agencies. According to NARA, these services and the agency’s role in providing them could be developed and tested as part of the managing government records directive’s work with automation, open source technology development, and cloud computing. NARA Created Community of Interest Groups to Help Support Electronic Records Management Initiatives By December 31, 2013, NARA, in cooperation with the Federal Chief Information Officers Council, the Federal Records Council, and other government-wide councils that expressed interest, was to establish a community of interest to bring together leaders from the information technology, legal counsel, and records management communities to solve specific records management challenges. In particular, the community of interest group was to develop and propose guidance, share information, create training, and identify tools that support electronic records management. Toward this end, NARA reported that two communities of interest were established: (1) the Electronic Records Management Automation Working Group, established in March 2013; and (2) the Federal Records Officer Network, established in May 2013. According to NARA records management officials, the Electronic Records Management Automation Working Group is made up of 133 members from the information technology, legal counsel, and records management communities in the federal government.
The officials stated that, through this working group, records managers, information managers, and IT staff share information with other group members on increasing the automation of electronic records management tasks. Further, the officials stated that the Electronic Records Management Automation Working Group has suggested topics for guidance that NARA could produce, including the disposal of paper records after digitization, required metadata, and auto-categorization. The Federal Records Officer Network has 172 members from various federal agencies and collaborates on projects, shares information, and develops training on records management. According to NARA’s records management officials, the Network, in consultation with NARA, has consolidated records management training materials from multiple agencies into a single e-learning product that agencies can download and use to meet training requirements. NARA records management officials also stated that the Federal Records Officer Network has made suggestions on records management best practices and training projects. Additionally, NARA records management officials stated that the agency has, in cooperation with the Federal Records Council, worked with community of interest groups, including the Electronic Records Management Automation Working Group and the Federal Records Officer Network, to identify tools for records management. By creating communities of interest that proposed guidance, shared information, developed training, and helped to identify tools to support electronic records management, NARA has taken steps toward assisting agencies with records management challenges. NARA Identified and Enhanced an Analytical Tool to Evaluate the Effectiveness of Federal Records Management Programs By December 31, 2013, NARA was required to identify a government- wide analytical tool to evaluate the effectiveness of records management programs. 
The tool was intended to supplement NARA’s assessments, inspections, and studies of agencies’ records management programs. The tool was also to help NARA and agencies measure program compliance more effectively, assess risks, and aid in agency decision making. In accordance with the directive, by the second quarter of fiscal year 2013, NARA had identified the Records Management Maturity Model Integrated tool, developed by the Department of Homeland Security, as the most feasible foundation for a records management solution to evaluate agency records management programs. NARA then created a working group of agency officials from the Federal Records Council to modify the Records Management Maturity Model Integrated tool. The working group members represented six federal agencies: NARA; the Securities and Exchange Commission; and the Departments of Homeland Security, the Interior, Justice, and Transportation. The working group’s efforts resulted in the development of the Federal Records and Information Management Program Maturity Model, a government-wide analytical tool. According to the Federal Records and Information Management Program Maturity Model’s user guide, the purpose of the tool is to help agencies or components assess areas of their records management programs to determine where improvements are most needed. The tool is also intended to measure the maturity of an agency records management program, regardless of the program’s size and records management maturity level. The working group developed organizing principles, assessment criteria, and performance measures for the tool and, as of January 2015, had completed the tool and finalized a guide for its intended users. NARA records management officials stated that the agency presented the Federal Records and Information Management Program Maturity Model tool at its bi-monthly records management meeting in March 2015 and posted the final product on its records management website in April 2015.
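The user guide's stated purpose (scoring areas of a records management program to determine where improvements are most needed) could be sketched along these lines. The domain names, five-level scale, and `program_maturity` scoring rule below are hypothetical illustrations, not the actual criteria of the Federal Records and Information Management Program Maturity Model.

```python
# Hypothetical self-assessment sketch in the spirit of a maturity model;
# the domains, five-level scale, and scoring rule are illustrative, not the
# actual Federal Records and Information Management Program Maturity Model.
LEVELS = {1: "absent", 2: "developing", 3: "defined", 4: "managed", 5: "optimized"}

def program_maturity(scores):
    """Return an overall maturity label plus the weakest domains,
    i.e., where improvements are most needed."""
    lowest = min(scores.values())
    weakest = sorted(domain for domain, s in scores.items() if s == lowest)
    overall = LEVELS[round(sum(scores.values()) / len(scores))]
    return overall, weakest

overall, weakest = program_maturity(
    {"policy": 4, "training": 2, "e-mail capture": 2, "scheduling": 3}
)
print(overall, weakest)  # defined ['e-mail capture', 'training']
```

The design point the user guide emphasizes is reflected here: the assessment works the same way regardless of program size, because only relative domain scores matter.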
If the tool and the actions planned by NARA work as intended, they could assist NARA and the agencies in evaluating the effectiveness of agencies’ records management programs and measuring agency compliance. NARA Obtained External Involvement for Development of Open Source Records Management Solutions The directive required NARA to collaborate with the Federal Chief Information Officers Council and the Federal Records Council, and obtain external involvement, to develop open source records management solutions by December 31, 2014. To address this requirement, in 2013 and 2014, NARA engaged the Federal Chief Information Officers Council, the Federal Records Council, and the private sector to develop open source records management solutions. For example, as discussed earlier, NARA generated a plan with stakeholder participation that included activities pertaining to the development of open source opportunities. Specifically, the plan, among other things, (1) identified activities that would aid in developing open source records management tools and (2) encouraged external involvement to develop open source records management tools. The plan described a NARA activity to identify open source records management tools by compiling a list of available open source tools that could be used for various records management functions. This list and related information would be maintained online as a resource for the federal records management community. In addition, the plan specified that NARA intends to identify gaps in open source records management tools and identify opportunities for external involvement in the development of new records management solutions. Further, to encourage external involvement in the development of open source records management tools, NARA requested information from selected private sector vendors pertaining to cloud-based and open source records management solutions for the federal government.
According to NARA officials, their outreach to vendors discovered that many viable automated records management solutions are already on the market, including some open source solutions. Consequently, in collaboration with the Federal Chief Information Officers Council and the Federal Records Council, NARA worked with private industry to help familiarize agencies with existing solutions, with the goal of identifying any remaining unmet requirements. For example, NARA invited presentations to the federal records management community on particular automated solutions and provided a list of questions for vendors to answer about their products during those presentations; hosted an industry day event on September 10, 2013, at which records officers, IT staff, and chief information officers from several agencies, including NARA, discussed with vendors automated electronic records management and the kinds of solutions the agencies were seeking; and published a request for information in FedBizOpps in 2013, requesting vendor capability statements describing their solutions and services to support automated electronic records management. By April 15, 2014, NARA had received 52 capability statements in response, all of which were shared with the federal records management community through the Electronic Records Management Automation Working Group. Additionally, NARA provided evidence that it worked with other stakeholders; for example, in 2013, it invited volunteers from other federal agencies to share ideas, good practices, and lessons learned with each other in the Electronic Records Management Automation Working Group. Further, according to its records management officials, NARA issued the open source records management tools report in March 2015.
According to NARA records management officials, the report compiled a list of available open source tools that could be used for various records management functions, and NARA maintains the information online as a resource for the federal records management community. This action, coupled with the work involving the private sector and other stakeholders, should assist NARA in identifying and developing open source records management solutions. Conclusions The majority of the 24 federal agencies had taken steps toward addressing the seven directive requirements for managing government records that had completion dates from November 2012 through December 2014. However, certain requirements were not fully met by 5 of the agencies. Specifically, not all agencies had designated Senior Agency Officials at the assistant secretary level; reported to NARA on how they planned to manage permanent electronic records, including e-mails; identified and reported on permanent records that have been in existence for 30 years or more; or identified unscheduled records. Further, the Departments of Commerce, Defense, and Energy had not fully implemented the requirement to develop records management training for all employees, or had not ensured that all agency records officers held the NARA certificate for Federal Records Management Training. However, these 3 agencies indicated that they expect to complete their requirements by the end of fiscal year 2015. Until agencies fully implement the directive requirements, they may not be well-positioned to implement the records management reforms envisioned by the directive. In addition, OPM had finalized the records management occupational series, and OMB had established a deadline for updating key guidance to direct agencies to incorporate records management requirements when moving to a cloud-based service.
However, while NARA had taken action to oversee agencies’ directive compliance and identified tools for addressing electronic records management challenges, it had not developed metadata requirements, which are needed to assess progress and streamline agency efforts to process records. Completing this effort could provide agencies with resources for more efficiently managing their records. Recommendations for Executive Action To help ensure that directive requirements are met, we are making 10 recommendations to specific agencies and NARA. We recommend that the Director of the Office of Personnel Management take the following action: Ensure that the Senior Agency Official designated to oversee the agency’s compliance with records management statutes and regulations is at or equivalent to the level of an assistant secretary, as required by the directive. We recommend that the Secretary of Veterans Affairs take the following action: Designate a Senior Agency Official at or equivalent to the level of assistant secretary who has direct responsibility for ensuring that the agency complies with applicable records management statutes, regulations, and NARA policy, including being able to make adjustments to agency practices, personnel, and funding. We recommend that the Secretary of Transportation take the following action: Identify permanent records that were in existence for 30 years or more for one remaining component and report this information to NARA. We recommend that the Administrator of the General Services Administration take the following action: Expedite efforts to ensure that permanent records that were in existence for 30 years or more, including records stored at agency records storage facilities, are identified and reported to NARA. We recommend that the Director of the National Science Foundation take the following four actions: Establish a date by which the agency will complete, and then report to NARA, its plans for managing permanent records electronically.
The plan should describe, among other things, how permanent electronic records are currently captured, retained, searched, and retrieved; plans to digitize permanent records currently in hard-copy format or other analog formats; plans to manage all permanent electronic records in electronic format, including how the plans will be implemented; and challenges the agency faced in achieving the requirement of managing all permanent electronic records in an electronic format. Establish a date by which the agency will complete, and then report to NARA on, its progress toward managing permanent and temporary e-mail records in an electronic format, to include the agency’s ability to retain e-mail records in an electronic system that supports records management and litigation requirements, including the capability to identify, retrieve, and retain the records for as long as they are needed. Report to NARA on the identification of its permanent records in existence for 30 years or more, to include when no such records exist. Complete the identification of unscheduled records stored at agency records storage facilities. We recommend that the Archivist of the United States take the following two actions: Establish a time frame and revise NARA transfer guidance for permanent electronic records to include all aspects of metadata requirements. Identify tools to assist agencies with addressing records management challenges in cooperation with the Federal Chief Information Officers Council, the Federal Records Council, and other government-wide councils that express interest. Agency Comments and Our Evaluation We requested comments on a draft of this report from the 24 major agencies included in our study and from OMB and NARA. We received comments from the six agencies to which we made recommendations, which included NARA. Among these, OPM, the Department of Veterans Affairs, the General Services Administration, the National Science Foundation, and NARA provided written comments.
Further, on May 5, 2015, the Deputy Director of Audit Relations for the Department of Transportation provided comments via e-mail. Four of the agencies and NARA either agreed or generally agreed with our recommendations, while one agency had no comments, as summarized below: The Chief Operating Officer for OPM stated that the agency concurred with our recommendation and plans to designate its Chief Information Officer as the Senior Agency Official. According to the Chief Operating Officer, the agency’s Chief Information Officer is the equivalent of an assistant secretary, and is appropriately located within OPM to make adjustments to the agency’s practices, personnel, and funding to ensure compliance and support the business needs of OPM. The official added that the Chief Information Officer has direct responsibility for ensuring that OPM efficiently and appropriately complies with all applicable records management statutes, regulations, and NARA policy. OPM’s comments are reprinted in appendix II. The Department of Veterans Affairs’ Chief of Staff stated that the department concurred with our recommendation. The Chief of Staff added that the department plans to designate its Chief Information Officer as the Senior Agency Official, with delegation of daily responsibility for complying with applicable records management statutes, regulations, and NARA policy to the Associate Deputy Assistant Secretary for Policy, Privacy, and Incident Response. The department’s comments are reprinted in appendix III. The Acting Administrator of the General Services Administration stated that the agency concurred with and is developing a plan to address our recommendation. The Acting Administrator further stated that the agency would accelerate efforts to identify the location of its records by the end of fiscal year 2015. The agency’s comments are reprinted in appendix IV. 
The National Science Foundation’s Chief Information Officer stated that the agency had no comments on the draft report but is committed to the continual improvement of information technology management, including its efforts related to records management. The agency’s comments are reprinted in appendix V. In its comments, the Archivist of the United States said that NARA concurred with the recommendation to establish a time frame and revise transfer guidance for permanent electronic records to include all aspects of metadata requirements. The Archivist added that NARA believed it had met the second recommendation related to identifying tools to assist agencies with addressing records management challenges. In this regard, NARA provided us with evidence supporting its identification of tools, and in response we updated our report to reflect the actions taken. As an additional comment, the Archivist expressed concern that the report did not include a recommendation for NARA to revisit its guidance for the Senior Agency Official’s roles, responsibilities, and overall designation, especially as it pertains to independent agencies. The Archivist believed such a recommendation would further empower Senior Agency Officials within their component agencies. With regard to this comment, we believe clearly designated roles and responsibilities are important to ensuring the effectiveness of all agencies’ Senior Agency Officials and that NARA has taken an important step in recognizing its need to revisit guidance for independent agencies. As for the study results and recommendations included in this report, our work focused on the actions of NARA and the 24 major federal agencies to implement the specific requirements outlined in the Managing Government Records directive. NARA’s comments are reprinted in appendix VI. 
In comments provided via email, the Deputy Director of Audit Relations stated that the Department of Transportation concurred with our recommendation and that the agency would provide a detailed response to the recommendation within 60 days of our report’s issuance. We also received written comments from the Department of Defense (reprinted in appendix VII) and the Social Security Administration (reprinted in appendix VIII). In the comments, the Principal Deputy for the Department of Defense stated that the department concurred with the report as written. The Executive Counselor to the Commissioner of the Social Security Administration stated that the agency had no comments on the draft report. Further, we received technical comments via e-mail from the Department of Justice, NARA, and the National Science Foundation, which we have incorporated, as appropriate. In addition to the aforementioned comments, liaisons for 15 other agencies sent e-mails stating that their agencies had no comments on the draft report. These agencies were the Departments of Agriculture, Commerce, Education, Energy, Health and Human Services, Housing and Urban Development, the Interior, Labor, State, and Treasury; the Environmental Protection Agency; National Aeronautics and Space Administration; Small Business Administration; U.S. Agency for International Development; and Nuclear Regulatory Commission. Two agencies—the Department of Homeland Security, and the Office of Management and Budget—did not provide any responses to our request for comments. 
We are sending copies of this report to the Secretaries of the Departments of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Homeland Security, Housing and Urban Development, the Interior, Labor, State, Transportation, the Treasury, and Veterans Affairs; the Attorney General; the Administrators of the Environmental Protection Agency, General Services Administration, National Aeronautics and Space Administration, Small Business Administration, and the U.S. Agency for International Development; the Archivist of the United States; the Directors of the National Science Foundation, Office of Management and Budget, and Office of Personnel Management; the Chairman of the Nuclear Regulatory Commission; the Commissioner of Social Security; and other interested parties. This report also is available at no charge on the GAO website at http://www.gao.gov. Should you or your staff have any questions on information discussed in this report, please contact me at (202) 512-6304 or melvinv@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IX. Appendix I: Objectives, Scope, and Methodology Our objectives were to (1) assess the extent to which federal agencies have taken the actions called for in the Office of Management and Budget (OMB) and National Archives and Records Administration (NARA) Managing Government Records Directive, and (2) determine the extent to which the Office of Personnel Management (OPM), OMB, and NARA have taken actions called for in the directive, including overseeing agencies’ compliance. The scope of our review included the 24 major agencies covered by the Chief Financial Officers Act of 1990, as well as OMB and NARA. 
To address the first objective, we took the following steps for each of the 24 agencies: Compared agency documentation, such as records management policy and departmental regulations, to the requirements specified in the directive that were required to be completed by the December 31, 2014, deadline. These requirements pertained to: (1) designating a senior agency official, (2) managing permanent electronic records, (3) managing permanent and temporary e-mail records, (4) identifying permanent records and reporting on that information to NARA, (5) identifying unscheduled records, (6) obtaining the NARA certificate of Federal Records Management Training, and (7) establishing records management training. Obtained and reviewed records management policies, procedures, and guidance. Collected and analyzed documentation that described actions each agency had taken to meet requirements of the directive, such as the annual records management self-assessment and Senior Agency Official report. Conducted structured interviews with records management officials from each agency to discuss steps taken to address directive areas and obtain additional supporting documentation to determine the agencies’ status in implementing the directive requirements. Followed up with those agencies that did not fully meet the directive requirements to determine reasons for their noncompliance. For the second objective, regarding NARA’s, OPM’s, and OMB’s implementation of their responsibilities under the directive, we took the following steps: Collected and analyzed documentation on senior agency official meetings held by NARA and records management communities of interest and the Federal Records and Information Management Program Maturity Model tool to evaluate agencies’ records management programs.
Obtained and analyzed NARA documentation, to include transfer guidance for permanent electronic records; guidance for managing, disposing of, and transferring e-mail; the Senior Agency Official template; and the results of a feasibility study on establishing a cloud-based service. Obtained and reviewed NARA’s records management policies, plans, and other documentation related to electronic recordkeeping. Conducted structured interviews with NARA’s Chief Records Officer and other agency officials regarding their interactions with the 24 agencies on the use of electronic recordkeeping and implementation of federal records management policies and practices. Interviewed OPM’s Chief of Records Management and other agency officials to discuss the development of the records management occupational series, and obtained and evaluated related documentation. Interviewed officials within OMB’s Offices of Information and Regulatory Affairs and E-Government & Information Technology to discuss OMB’s efforts to update Circular A-130 and actions taken to assist agencies with meeting the goals of the records management directive. To assess the reliability of what agency officials told us about how they met the requirements specified in the directive, we collected and analyzed documentation from the 24 agencies to determine the steps that each agency had taken to meet the requirements of the directive. We also collected and reviewed documentation that NARA provided regarding the status of agencies’ implementation of the directive areas. Our study was conducted to determine whether the agencies in our review had complied with requirements of the directive agency-wide, and did not include a comprehensive assessment of all actions that agencies may have taken to carry out responsibilities at the branch or sub-agency levels. We conducted this performance audit from March 2014 to May 2015 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Office of Personnel Management Appendix III: Comments from the Department of Veterans Affairs Appendix IV: Comments from the General Services Administration Appendix V: Comments from the National Science Foundation Appendix VI: Comments from the National Archives and Records Administration Appendix VII: Comments from the Department of Defense Appendix VIII: Comments from the Social Security Administration Appendix IX: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, the following staff made significant contributions to this report: Anjalique Lawrence, Assistant Director; Sharhonda Deloach; Elena Epps; Angel Ip; Lee McCracken; and Robert Williams.
The federal government collects large amounts of information, increasingly in electronic form, to accomplish its missions. This greater reliance on electronic communication and information technology systems has, as a result, radically increased the information that agencies must manage. In 2012, NARA and OMB issued a directive to reform federal records management in response to a 2011 presidential memorandum on managing government records. The directive requires federal agencies, NARA, OMB, and OPM to take actions toward reforming records management policies and practices. GAO was requested to evaluate federal agencies' implementation of the directive. GAO's objectives were to (1) assess the extent to which federal agencies have taken the actions called for in the directive and (2) determine the extent to which OPM, OMB, and NARA have taken actions called for in the directive. To do this, GAO reviewed policies, guidance, and other documentation of actions taken through December 31, 2014, by 24 selected federal agencies, NARA, and OMB, and interviewed the agencies' records management officials. The 24 federal agencies took actions toward implementing each of the seven requirements set forth in the National Archives and Records Administration (NARA) and Office of Management and Budget (OMB) directive on managing government records (see table). However, certain requirements were not fully met by 5 of the agencies because these agencies were either still working on addressing the requirement, or did not view the requirement as mandatory. For example, while all 24 agencies designated a senior official to oversee records management, 2 did not designate the official at the assistant secretary level, and 1 did not reaffirm the official by the specified deadline. Further, at 2 agencies, records management officers did not obtain the NARA training certificate or had not been granted an exemption. 
These agencies expect to complete their training by the end of fiscal year 2015. The Office of Personnel Management (OPM), OMB, and NARA took steps to implement 11 required oversight actions, although not all actions had been completed. For example, OPM finalized an occupational series to elevate records management roles, responsibilities, and skill sets for agency records professionals. In addition, OMB was in the process of updating its Circular A-130 to include records management requirements for agencies when moving to cloud-based services or storage solutions. The agency expects to release the updated circular by December 2015. Lastly, NARA, in consultation with other stakeholders, produced a plan to move agencies toward greater automation of records management. However, it did not include metadata requirements in its guidance, as required. Until agencies, OMB, and NARA fully implement the directive's requirements, the federal government may be hindered in its efforts to improve performance and promote openness and accountability through the reform of records management.
DOT’s Leadership Role in Surface Transportation Research ISTEA expressed the need for a new direction in surface transportation research, finding that despite an annual federal expenditure of more than $10 billion on surface transportation and its infrastructure, the federal government lacked a clear vision of the role of federally funded surface transportation research and an integrated framework for the fragmented surface transportation research programs dispersed throughout the government. The act recognized the federal government as a critical sponsor and coordinator of new technologies that would provide safer, more convenient, and more affordable future transportation systems. Our September 1996 report on surface transportation research confirmed what ISTEA stressed—DOT must play a critical role in surface transportation research. DOT’s role as the leader in surface transportation research stems from the Department’s national perspective, which transcends the interests and limitations of nonfederal stakeholders. For example, the states generally focus on applied research to solve specific problems; industry funds research to develop new or expanded markets; and universities train future transportation specialists and conduct research that reflects the interests of their funders. While the Department has established councils and committees to coordinate its research, the lack of a departmental focal point and an inadequate strategic plan may limit its leadership role. First, surface transportation research within the Department is focused on improving individual modes of transportation rather than on creating an integrated framework for surface transportation research. This modal structure makes it difficult for DOT to develop a surface transportation system mission; accommodate the need for types of research—such as intermodal and systems assessment research—that do not have a modal focus; and identify and coordinate research that cuts across modes. 
Second, DOT does not have a Department-level focal point to oversee its research, such as an Assistant Secretary for Research and Development. Instead, an Associate Administrator of the Research and Special Projects Administration (RSPA) coordinates the Department’s surface research programs. Although RSPA was established to foster cross-cutting research, it does not have the funding resources or the internal clout to function effectively as a strategic planner for surface transportation research. RSPA acts in an advisory capacity and has no control over the modal agencies’ budgets or policies. Finally, the Department does not have an integrated framework for surface transportation research. The three research plans that the Department has submitted to the Congress since 1993 are useful inventories of the five modal agencies’ research activities. However, the plans cannot be used, as ISTEA directed, to make surface transportation research more strategic, integrated, and focused. Until all these issues are addressed, the Department may not be able to respond to ISTEA’s call for an integrated framework for surface transportation research and assume a leadership role in surface research. ITS Program Holds Potential for Innovation If Deployment Obstacles Can Be Resolved ISTEA also reflected congressional concerns about the adequacy of the funding for advanced transportation systems, suggesting that too little funding would increase the nation’s dependence on foreign technologies and equipment. The act therefore increased the funding for many existing and new research programs, especially for the ITS program. Since 1992, the ITS program has received through contract authority and the annual appropriations process about $1.3 billion. This amount represents about 36 percent of the $3.5 billion the federal government provided for surface research programs from 1992 to 1997.
Our February 1997 report examined the progress made in deploying ITS technologies and ways in which the federal government could facilitate further deployment. On the first issue, a 1995 DOT-funded study found that 7 of 10 larger urban areas were using some ITS technologies to help solve their transportation problems. An example of an area that has widely deployed ITS technologies is Minneapolis. The Minneapolis ITS program, part of the state’s “Guidestar” program, first began operational tests in 1991. Since that time, about $64 million in public and private funds have been invested in Guidestar projects. With these funds, Minneapolis upgraded its traffic management center so that it could better monitor traffic flow and roadway conditions and installed ramp meters to control the flow of traffic entering the expressways. These improvements have helped increase average highway speeds during rush hour by 35 percent. Although urban areas are deploying individual ITS components, we found that states and localities are not integrating the various ITS components so that they work together and thereby maximize the overall efficiency of the entire transportation system. For example, transportation officials in the Washington, D.C., area said that local jurisdictions have installed electronic toll collection, traveler information, and highway surveillance systems without integrating the components into a multimodal system. This lack of systems integration is due in part to the fact that ITS is a relatively new program that is still evolving and has yet to fully implement some fundamental program components such as the national architecture and technical standards. The national architecture, which identifies the components and functions of an ITS system, was completed in July 1996. In addition, a five-year effort to develop technical standards—which specify how system components will communicate—is planned for completion in 2001.
We also found that the lack of widespread deployment of integrated ITS systems results from insufficient knowledge of ITS systems among state and local transportation agencies; limited data on the costs and benefits of ITS; and inadequate funding in light of other transportation investment priorities. The funding issue is particularly important since DOT has changed the program’s short-term focus to include a greater emphasis on deploying ITS technologies rather than simply conducting research and operational tests. The federal government’s future commitment to a deployment program would have to balance the need to continue progress made under the program with federal budgetary constraints. Urban transportation officials we interviewed in the nation’s 10 largest cities had mixed views on an appropriate federal role for funding ITS deployment. Officials in 6 of 10 urban areas supported a large federal commitment of $1 billion each year. Typically, these officials contended that future ITS deployments would be limited without specific funding for this approach. For example, a New York transportation planner said that without large-scale funding, ITS investment would have to compete for scarce dollars with higher-priority road and bridge rehabilitation projects. Under such a scenario, plans for deploying ITS would be delayed. These officials also favored new federal funding rather than a set-aside of existing federal-aid highway dollars. In contrast, officials from four other urban areas opposed a large-scale federal aid program because they do not want additional federal funding categories. Some of these officials also said that such a program could drive unnecessary ITS investments, as decisionmakers chased ITS capital money, even though another solution might have been more cost-effective. One official noted that a large federal program would be very premature since the benefits of many ITS applications have yet to be proven despite the claims of ITS proponents.
In the absence of a large federal program, officials from 5 of the 10 urban areas supported a smaller-scale federal seed program. They said that such a program could be used to fund experimental ITS applications, promote better working relationships among key agencies, or support information systems for travelers. Deliberations on the future funding for the ITS program should include an assessment of the current obstacles facing the program. First, the system architecture is relatively new, and state and local officials have limited knowledge of its importance. Second, it will take time for state and local transportation officials to understand the architecture and supplement their traditional approach to solving transportation problems through civil engineering strategies with the information management and telecommunications focus envisioned by an integrated ITS approach. In addition, widespread integrated deployment cannot occur without the technical standards that DOT proposes to complete over the next 5 years. Innovative Financing Through State Infrastructure Banks Until recently, states have generally not been able to tailor federal highway funding to a form other than a grant. The National Highway System Designation Act of 1995 established a number of innovative financing mechanisms, including the authorization of a SIB Pilot Program for up to 10 states or multistate applicants—8 states were selected in April 1996 and 2 were selected in June 1996. Under this program, states can use up to 10 percent of most of their fiscal years 1996 and 1997 federal highway funds to establish their SIBs. This program was expanded by DOT’s fiscal year 1997 appropriations act, which removed the 10-state limit and provided $150 million in new funds. A SIB serves essentially as an umbrella under which a variety of innovative finance techniques can be implemented.
Much like a bank, a SIB would need equity capital to get started, and equity capital could be provided at least in part through federal highway funds. Once capitalized, the SIB could offer a range of loans and credit options, such as loan guarantees and lines of credit. For example, through a revolving fund, states could lend money to public or private sponsors of transportation projects. Project-based revenues, such as tolls, or general revenues, such as dedicated taxes, could be used to repay loans with interest, and the repayments would replenish the fund so that new loans could be supported. Thus projects with potential revenue streams will be needed to make a SIB viable. Expected assistance for some of the projects in the initial 10 states selected for the pilot program includes loans ranging from $60,000 to $30 million, credit enhancement to support bonds, and a line of credit. In some cases, large projects that are already underway may be helped through SIB financial assistance. Examples of projects states are considering for financial assistance include the following: A $713 million project in Orange County, California, includes construction of a 24-mile tollway. SIB assistance in the form of a $25 million line of credit may be used for this project to replace an existing contingency fund. If accessed, the line of credit would be repaid through excess toll revenues. A $240 million project in Orlando, Florida, will involve construction of a 6-mile segment to complete a 56-mile beltway. A SIB project loan in the amount of $20 million is being considered, and loan repayment would come from a mix of project and systemwide toll receipts and state transportation funds. In Myrtle Beach, South Carolina, a SIB loan is being considered to help with the construction of a $15 million new bridge to Fantasy Harbor. The source for repayment of the loan would be proceeds from an admission tax at the Fantasy Harbor entertainment complex.
These examples represent but a few of the projects being considered for SIB assistance by the initial 10 SIB pilot states. SIB financial assistance is intended to complement, not replace, traditional transportation grant programs and provide states increased flexibility to offer many types of financial assistance. As a result, projects could be completed more quickly, some projects could be built that would otherwise be delayed or infeasible if conventional federal grants were used, and private investment in transportation could be increased. Furthermore, a longer-term anticipated benefit is that repaid SIB loans can be “recycled” as a source of funds for future transportation projects. If states choose to leverage SIB funds, DOT has estimated that $2 billion in federal capital provided through SIBs could be expected to attract an additional $4 billion for transportation investments. For some states, barriers to establishing and effectively using a SIB still remain. One example is the low number of projects that could generate revenue and thus repay loans made by SIBs. Six of the states that we surveyed told us that an insufficient number of projects with a potential revenue stream would diminish the prospects that their state would participate in the SIB pilot program. Ten of 11 states that we talked with about this issue said they were considering tolls as a revenue source. However, state officials also told us that they expected tolls would generate considerable negative reaction from political officials and the general public. Some states expressed uncertainty regarding their legal or constitutional authority to establish a SIB in their state or use some financing options that would involve the private sector. Michigan, for instance, said that it does not currently have the constitutional authority to lend money to the private sector. 
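The revolving-fund mechanism described above can be illustrated with a simple model: seed capital supports an initial round of loans, and repayments with interest replenish the fund so that later rounds of lending can be supported. The sketch below uses entirely hypothetical figures (seed amount, loan size, interest rate, and number of lending cycles are illustrative assumptions, not DOT estimates), and it simplifies by assuming each round of loans is fully repaid before the next round begins.

```python
# Hypothetical sketch of a SIB revolving loan fund.
# All figures are illustrative; they are not drawn from DOT's estimates.

def revolving_fund(seed, loan_size, rate, cycles):
    """Total lending a seed can support over several lending cycles,
    assuming (simplistically) that each cycle's loans are repaid with
    interest before the next cycle of lending begins."""
    balance = seed
    total_lent = 0.0
    for _ in range(cycles):
        loans = int(balance // loan_size)   # whole loans the fund can support now
        lent = loans * loan_size
        total_lent += lent
        # Repayments with interest replenish the fund for the next cycle.
        balance = balance - lent + lent * (1 + rate)
    return total_lent

# A $100 million seed making $20 million loans at 4% over 3 cycles
# supports $300 million in total lending under these assumptions.
total = revolving_fund(100e6, 20e6, 0.04, 3)
```

The point of the sketch is simply that a fixed sum of federal capital, recycled through repayments, can support a multiple of itself in cumulative lending over time, which is the "recycling" benefit the testimony describes.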
Since $150 million was appropriated for fiscal year 1997 and the 10-state restriction was lifted, DOT has received applications from 28 additional states. DOT has not yet selected additional states for the program. In addition, DOT has not yet developed criteria or a mechanism for determining how the funds will be distributed to selected states. The SIB program has been slow to start up. Only one state—Ohio—has actually begun a toll road project under its SIB since April 1996, when the first states were selected for the program. The program will need time to develop and mature. Innovative Practices Using Design-Build Contracting Innovation can also occur through different methods to design and construct transportation projects. Of particular note is FHWA’s special project to test and evaluate the use of design-build contracting methods under the agency’s authority to conduct research. The project is an outgrowth of a 1987 Transportation Research Board task force report that identified innovative contracting practices such as design-build. The design-build method differs from the traditional design-bid-build method in that it combines, rather than separates, responsibility for the design and construction phases of a highway project. Proponents of design-build have identified several benefits. First, the highway agency can hold one contractor, rather than two or more, accountable for the quality and costs of the project. Under the traditional approach, by contrast, problems with a project can result in disputes between the design and construction firms. Second, by working together from the beginning, the designer and builder would have a firmer understanding of the project costs and could thereby reduce costs by incorporating value engineering savings into the design. Finally, design-build proponents state that the approach will reduce administrative burden and expenses because fewer contracts would be needed.
State interest in the design-build contracting approach is rising. According to FHWA, as of January 1997, 13 states have initiated at least 50 design-build projects under the agency’s special program. The size of state projects varies considerably, from bridge projects costing a few million dollars to the $1.4 billion reconstruction of I-15 in Utah. While states are becoming more receptive to design-build contracting, FHWA still considers the approach experimental, and an overall assessment of the broad benefits, costs, and applicability of design-build remains limited by the small number of completed projects. One difficulty in implementing design-build lies in state laws limiting its use. A 1996 Design-Build Institute of America survey of state procurement laws documents this problem. The survey identified 17 states that did not permit the use of combined design and construction contracts. In addition, a 1995 study by the Building Futures Council noted that some states indirectly preclude design-build by requiring separation of design and construction services—construction services being awarded to the lowest bidder only after the design is complete. In addition, similar requirements applicable to state highway construction contracts under the federal-aid highway program limit FHWA’s authority to allow design-build contracts outside those that are part of its special project. However, an official within FHWA’s Office of Engineering suggested that continuing the current special project may be appropriate because no consensus exists within the highway construction industry on the desirability of the design-build approach. A final consideration that may limit the use of design-build contracting is project financing. When design-build is applied to expensive, large infrastructure projects, financing can be more complex because the projects are constructed faster than under conventional contracting practices.
Faster construction means that funds will be required faster, which may pose difficulties if the project’s revenue stream does not keep pace. For example, in our review of a large design-build transit project, the extension of the Bay Area Rapid Transit (BART) system to the San Francisco International Airport, we found that BART required a borrowing program to cover cash shortfalls during construction. With design-build, BART may save construction costs but will incur additional financing costs. Design-build contracting, while becoming increasingly common in the private sector for facilities such as industrial plants and refineries, does not yet have an established track record in transportation in the United States. However, the experiences now being gained through the 50 projects under FHWA’s special project, along with four Federal Transit Administration funded demonstration projects, may provide sufficient evidence of the efficacy of design-build. Early experience suggests that in instances when time is at a premium, and project revenue sources quickly cover construction costs, design-build may provide a good fit with project requirements. One area where these opportunities may exist is FHWA’s Emergency Relief Program, which places emphasis on the quick reconstruction of damaged facilities. Mr. Chairman, this concludes our prepared statement on the potential benefits and challenges of four examples of innovation in surface transportation research, finance, and contracting. We will be happy to respond to any questions you might have.
GAO discussed how innovation in federal research, financing, and contracting methods has the potential for improving the performance of the nation's surface transportation system, focusing on three reports it completed for the Senate Committee on Environment and Public Works' deliberations on the reauthorization of the Intermodal Surface Transportation Efficiency Act (ISTEA). GAO noted that: (1) investments in surface transportation research have provided benefits to users and the economy; (2) the Department of Transportation (DOT) has a critical role to play by funding research, establishing an overall research mission with objectives for accomplishment and priorities for allocating funds, and acting as a focal point for technology transfer; (3) DOT's organizational structure and lack of both a strategic plan and a departmental focal point may limit its impact on research; (4) until these issues are addressed, DOT may not be able to respond to ISTEA's call for an integrated framework for surface transportation research; (5) DOT's Intelligent Transportation System (ITS) Program has received $1.3 billion to advance the use of computer and telecommunications technology that will enhance the safety and efficiency of surface transportation; (6) although the program envisioned widespread deployment of integrated multimodal ITS systems, this vision has not been realized for several reasons: (a) the ITS national architecture was not completed until July 1996 and ITS technical standards will not be completed until 2001; and (b) the lack of knowledge of ITS technologies and systems integration among state and local officials, insufficient data documenting the cost-effectiveness of ITS in solving transportation problems, and competing priorities for limited transportation dollars will further constrain widespread ITS deployment; (7) before DOT can aggressively pursue widespread deployment of integrated ITS, it must help state and local officials overcome these obstacles; (8) 
State Infrastructure Banks (SIBs) offer the promise of helping to close the gap between transportation needs and available resources by sustaining and potentially expanding a fixed sum of federal capital, often by attracting private investment; (9) specifically, these banks provide states increased flexibility to offer many types of financial assistance; (10) some state officials and industry experts that GAO talked with remain skeptical that SIBs will produce the expected benefits; (11) the Federal Highway Administration (FHWA) is testing and evaluating the use of an innovative design-build contracting method for highway construction; (12) proponents of design-build see several advantages to the approach; however, FHWA's authority to implement design-build is limited and 17 states have laws which, in effect, prevent the use of design-build; and (13) while design-build may result in the faster completion of projects, it may also require an accelerated revenue stream to pay for construction.
Background VA operates the largest integrated health care system in the United States, providing care to nearly 5 million veterans per year. The VA health care system consists of hospitals, ambulatory clinics, nursing homes, residential rehabilitation treatment programs, and readjustment counseling centers. In addition to providing medical care, VA is the largest educator of health care professionals, training more than 28,000 medical residents annually, as well as other types of trainees. State Licenses and National Certificates VA requires its health care practitioners to have professional credentials in their specific professions through either state licenses or national certificates. VA policy requires officials at its medical facilities to screen each applicant for positions at VA to determine whether the applicant possesses at least one current and unrestricted state license or an appropriate national certificate, whichever is applicable for the position sought by the applicant. VA also requires officials at its medical facilities to periodically verify licenses or national certificates held by health care practitioners already employed at VA. In general, for both applicants and employed health care practitioners, VA’s employment screening process proceeds in two stages. First, applicants and employed health care practitioners are required to disclose to VA, if applicable, their state licenses and national certificates. Applicants disclose their credentials to VA during the application process, and employed health care practitioners disclose credentials to VA as they expire and are renewed with the state licensing board or certifying organization. Second, VA facility officials are required to check whether the disclosed credentials are valid. State licenses are issued by state licensing boards, whereas national certificates are issued by national certifying organizations, which are separate and independent from state licensing boards. 
Both state licensing boards and national certifying organizations establish requirements that practitioners must meet to be licensed or certified. Licensed practitioners may be licensed in more than one state. “Current and unrestricted licenses” are licenses that are valid and in good standing in the state where issued. To keep a license current, practitioners must renew their licenses before they expire. When licensing boards discover a licensee is in violation of licensing requirements or established law, for example, abusing prescription drugs or intentionally or negligently providing poor quality care that results in adverse health effects, they may place restrictions on or revoke a license. Restrictions from a state licensing board can limit or prohibit a practitioner from practicing in that particular state. Some, but not all, state licenses are marked to indicate whether the licenses have had restrictions placed on them. Practitioners, such as respiratory and occupational therapists, who are required to have national certificates to work at VA, must have current and unrestricted certificates. National certifying organizations can restrict or revoke certificates for violations of the organizations’ professional standards. Generally, each state licensing board and national certifying organization maintains a database of information on restrictions, which employers can often obtain at no cost either by accessing the information on a board’s Web site or by contacting the board directly. Background Investigations In addition to holding valid professional credentials, when hired, health care practitioners are required to undergo background investigations that verify their personal and professional histories. Depending on the position, the extent of the background investigations for health care practitioners varies. 
For example, the minimum background investigation is a fingerprint-only investigation, which compares a practitioner’s fingerprints to those stored in criminal history databases. A traditional background investigation, which covers a health care practitioner’s personal and professional background for up to 10 years, is the most common type of background investigation conducted by VA on its health care practitioners. The traditional background investigation verifies an individual’s history of employment, education, and residence, and includes a fingerprint check against a criminal-history database. The Office of Personnel Management conducts background investigations for VA. To determine the level of background investigation required for employment, VA facility officials are required to complete VA Form 2280, which documents the level of risk posed by a particular position. Physician Credentialing and Privileging For physicians, VA has specific requirements that facility officials must follow to credential and privilege physicians. Officials must follow these requirements when physicians initially apply to work in VA—which is known as initial appointment—and then again at least every 2 years when physicians must apply for reappointment in order to renew their clinical privileges. Prior to working at VA, physicians enter into VetPro, a Web-based credentialing system VA implemented in March 2001, information that VA medical facility officials use in the credentialing process. For example, physicians enter information on their involvement in VA and non-VA medical malpractice claims and their medical education and training. For their reappointments, physicians must update this credentialing information in VetPro. A facility’s medical staff specialist then performs a data check to be sure that all required information has been entered into VetPro. In general, the medical staff specialist at each VA medical facility manages the accuracy of VetPro’s credentialing data.
The medical staff specialist verifies, with the original source of the information, the accuracy of the credentialing information entered by the physicians. Once a physician’s credentialing information has been verified, the medical staff specialist sends the information to the physician’s supervisor, known as a clinical service chief. In addition to entering credentialing information into VetPro, physicians complete written requests for clinical privileges. The facility medical staff specialist provides a physician’s clinical service chief with the requested clinical privileges and information needed to complete the privileging process, including information that indicates that the credentialing information entered by the physician into VetPro has been verified with the appropriate sources. The requested clinical privileges are reviewed by the clinical service chief, who recommends whether a physician should be appointed or reappointed to the facility’s medical staff and which clinical privileges should be granted. For reappointment only, VA’s policy requires that information on a physician’s performance, such as a physician’s surgical complication rate, be used when deciding whether to renew a physician’s clinical privileges. Based on the physician’s performance information, the clinical service chief recommends that clinical privileges previously granted by the facility remain the same, be reduced, or be revoked, and whether newly requested privileges should be added. The 2-year period for renewal of clinical privileges and reappointment to the medical staff begins on the date that the privileges are approved by the medical facility’s director. 
VA Has Taken Steps to Improve Employment Screening Requirements, but Gaps Remain VA has taken steps to improve employment screening of its health care practitioners by partially implementing each of the four recommendations made in our March 2004 report; however, gaps still remain in VA’s health care practitioner screening requirements. To address our recommendation that VA facility officials contact state licensing boards and national certifying organizations to verify all licenses and certificates held by all VA health care practitioners, VA expanded its verification requirement to include licenses and certificates for all prospective hires but did not extend this requirement to include all practitioners currently employed by VA. For those currently employed, such as nurses and pharmacists, VA only required facility officials to physically inspect one license of a practitioner’s choosing. Physical inspection of a license cannot ensure that it is valid and without restriction, nor can it ensure that there are not other licenses from other states that may have restrictions. Checking all licenses against state records is the only way to identify practitioners with restricted licenses. We reviewed a draft of a VA policy that, if issued in its current form, would fully address our recommendation to require medical facility officials to verify all state licenses and national certificates of currently employed health care practitioners. According to a VA official, this policy is expected to be issued in June 2006. To address our second recommendation that VA query the Department of Health and Human Services’ (HHS) Healthcare Integrity and Protection Data Bank (HIPDB) for all licensed health care practitioners that VA intends to hire and periodically query it for those already employed, VA in July 2004 directed facility officials to query HIPDB for all applicants for VA employment.
However, officials were not directed to periodically query HIPDB for health care practitioners currently employed by VA. Officials told us that VA is working with HHS to develop a process whereby VA can electronically query HIPDB for current VA employees. Once this process is in place, and VA is using it to periodically query HIPDB for those currently employed at VA, the department will have fully implemented our recommendation. However, VA did not provide a time frame for implementing this electronic query of HIPDB. To address our third recommendation that VA expand the use of fingerprint-only background investigations for all practitioners with direct access to patients, VA issued a policy that required all VA medical facilities to begin using electronic fingerprint machines by September 1, 2005. By February 1, 2006, all but two facilities had obtained the equipment necessary to implement this requirement. To address our fourth recommendation concerning oversight of the screening requirements, VA formalized an oversight program within its Office of Human Resource Management to include a review of some aspects of the screening process for applicants and current employees. However, the oversight program does not ensure that facilities are complying with all of VA’s key screening requirements, as we recommended. For example, officials from the oversight program are not required to check personnel files to ensure that facility officials query HIPDB and verify all health care practitioners’ licenses and certifications with the relevant issuing organizations. VA Facilities Did Not Comply with Employment Screening Requirements for Practitioners For the seven VA facilities we visited to determine compliance with employment screening requirements for practitioners, we found poor compliance with four of the five requirements we selected for review. 
VA implemented two of these five requirements after our March 2004 report: for individuals VA intends to hire, querying HIPDB and using an employment checklist to document the completion of employment screening requirements. Three other employment screening requirements were long-standing—verify health care practitioners’ state licenses and national certificates; complete VA Form 2280, which is used to determine the appropriate type of background investigation needed for each health care practitioner job category; and conduct background investigations. In order to show the variability in the level of compliance among the facilities, we measured their performance against a compliance rate of at least 90 percent for each of the screening requirements, even though VA policy requires 100 percent compliance with these requirements. None of the facilities had a compliance rate of 90 percent or more for all screening requirements we reviewed. Table 1 summarizes the rate of compliance among the seven facilities. As shown in table 1, while two facilities performed HIPDB queries on individuals they intended to hire, one of these facilities completed the queries immediately prior to our visit and not at the time the individuals were hired. We also found that two facilities had created their own employment checklists, but had not included all of the screening requirements contained in the original checklist issued by VA. As a result, these facilities were not in compliance with VA’s requirement. Physician Files at Facilities Demonstrated Compliance with Almost All Selected Credentialing and Privileging Requirements; Not All Facilities Submitted Paid Malpractice Claim Information in a Timely Manner We found that the physician files at the facilities we visited demonstrated compliance with four VA credentialing and four privileging requirements we reviewed. 
However, we found that there were problems complying with a fifth privileging requirement—to use information on a physician’s performance in making privileging decisions. In addition, we found that three of the seven medical facilities we visited did not submit to VA’s Office of Medical-Legal Affairs information on paid VA medical malpractice claims within 60 days after being notified that a claim was paid, as required by VA policy. Selected Physician Files at Facilities Demonstrated Compliance with Four VA Credentialing and Four Privileging Requirements; a Fifth Privileging Requirement Was Problematic We found that the physician files at the facilities we visited demonstrated compliance with four VA credentialing and four privileging requirements we reviewed. For the physician files we reviewed, the VA facilities’ medical staff specialists contacted state licensing boards to ascertain the status of the state medical licenses held and disclosed by their physicians. They also queried the Federation of State Medical Boards (FSMB) database, as required, to obtain additional information on the status of physicians’ medical licenses, including those that may not have been disclosed by physicians. Medical staff specialists complied with the requirement to contact sources, such as courts of jurisdiction, to verify information on physicians’ involvement in medical malpractice claims, including ongoing claims, disclosed by physicians. Additionally, in all cases medical staff specialists queried the National Practitioner Data Bank (NPDB) to identify those physicians who have been involved in paid medical malpractice claims, including any physicians who failed to disclose involvement in such claims. The physician files also demonstrated compliance with four of VA’s privileging requirements. 
Medical staff specialists verified physicians’ state licenses and the information disclosed by physicians about their involvement in medical malpractice allegations or paid claims, which are both credentialing and privileging requirements. We also found that medical staff specialists verified that physicians had the necessary training and experience to deliver health care and perform the clinical privileges physicians requested. Additionally, after medical staff specialists performed their verification, clinical service chiefs reviewed this information, as required, along with information on physicians’ health status. While we found evidence demonstrating compliance with four of VA’s privileging requirements, the files we reviewed showed that there were problems complying with a fifth privileging requirement that is used only in the renewal of privileges—to use information on a physician’s performance in making privileging decisions. VA requires that during the renewal of a physician’s clinical privileges, VA clinical service chiefs use information on a physician’s performance to support, reduce, or revoke the clinical privileges the physician has requested. However, as stated in VA policy, physician performance information that is collected as part of a facility’s quality assurance program cannot be used in a facility’s privileging process. According to VA, the confidentiality of individual performance information helps ensure practitioner participation, including that of physicians, in a medical facility’s quality assurance program by encouraging practitioners to openly discuss opportunities for improvement in practice without fear of punitive action. VA officials stated that quality assurance information, if used outside of a facility’s quality assurance program, could be available for other purposes, including litigation. 
However, VA has not provided guidance on how facility officials can obtain such information in accordance with VA policy—that is, outside of a quality assurance program. Officials at six medical facilities told us that they used performance information to support the granting of clinical privileges requested by their physicians, but collected all or most of this information through facility quality assurance programs. At the seventh medical facility, officials did not use individual physician performance information to renew physicians’ clinical privileges, as required by VA. Not All Facilities Submitted Paid Malpractice Claim Information in a Timely Manner We also included in our review a requirement that is related to the privileging process—medical facilities must submit to VA’s Office of Medical-Legal Affairs information on paid VA medical malpractice claims within 60 days after being notified that a claim was paid. VA’s Office of Medical-Legal Affairs is responsible for forming panels of practitioners to determine whether practitioners involved in any of these claims delivered substandard care to veterans and provides these determinations to facility officials. We found that three of the seven VA medical facilities we reviewed did not submit claim information to VA’s Office of Medical-Legal Affairs within the 60-day time frame. For example, for one facility we visited, we found that from 2001 through 2005, information on 21 of the facility’s 26 paid medical malpractice claims had not been submitted within the 60-day time frame to VA’s Office of Medical-Legal Affairs. Moreover, on average this medical facility took 30 months to submit information to VA’s Office of Medical-Legal Affairs, whereas the other two facilities averaged about 5 months to submit information. 
When VA medical facilities do not submit all relevant claim information to the Office of Medical-Legal Affairs, determinations on substandard care are not available to facility officials when they make privileging decisions. In addition, substandard care determinations are required to be reported by facility officials to NPDB. When VA medical facilities do not send claim information to the Office of Medical-Legal Affairs in a timely manner and substandard care is found, reporting to NPDB is delayed or does not occur at all. This prevents other VA and non-VA facilities where the physician may also practice from having complete information on the physician’s malpractice history. VA Has Not Established Internal Controls to Help Ensure the Accuracy of Facilities’ Privileging Information VA has not required its medical facilities to establish internal controls to help ensure that privileging information managed by medical staff specialists is accurate. One facility we visited did not identify 106 physicians whose privileging processes had not been completed by facility officials for at least 2 years because of inaccurate information provided by the facility’s medical staff specialist. According to facility officials, the medical staff specialist changed reappointment dates for some physicians and removed other physicians’ names from VetPro, the facility’s credentialing database. As a result, these physicians were practicing at the facility without current clinical privileges. Once medical facility officials became aware of the problem, they reviewed the files of all physicians and identified 106 physicians for whom the privileging process had not been completed. Facility officials told us they did not find any problems that would have warranted the 106 physicians’ removal from the facility’s medical staff or that placed veterans at risk. This facility has since implemented internal controls to reduce the risk of a similar situation occurring in the future. 
During our site visits to other VA medical facilities for the physicians’ credentialing and privileging report, we did not identify any facilities that had established internal controls to help ensure the accuracy of the information they use to renew physicians’ clinical privileges. Without accurate information, VA medical facility officials will not know if they have failed to renew clinical privileges for any of their physicians. Concluding Observations VA’s employment screening requirements are intended to ensure the safety of veterans receiving care by identifying practitioners who are incompetent or may intentionally harm veterans. In our practitioner screening report that we are releasing today, we continue to raise concerns about gaps in VA’s employment screening requirements. Although VA concurred with our March 2004 recommendations and took steps to implement them, none were fully implemented as of March 2006. These recommendations should be fully implemented. We are also concerned that compliance with employment screening requirements for practitioners, including physicians, nurses, and pharmacists, among others, continues to be poor at the facilities we visited. Continuing gaps in VA’s employment screening requirements and mixed compliance with these requirements continue to place veterans at risk. The other report that we are releasing today demonstrates that medical facilities we reviewed largely complied with VA’s physician credentialing and privileging requirements. However, we identified problems with the appropriate use of physician performance information in the privileging process and the timely submission of medical malpractice information to VA’s Office of Medical-Legal Affairs. Additionally, VA’s lack of internal controls for its facilities to ensure the accuracy of physician privileging information raises concerns that VA is at risk for allowing physicians to practice with expired clinical privileges. 
Our reports include the following four recommendations that VA should implement to help ensure patient safety: expand the human resource management oversight program to include a review of VA facilities’ compliance with employment screening requirements for all types of practitioners, provide guidance to medical facilities on how to collect individual physician performance information in accordance with VA’s credentialing and privileging requirements to use in medical facilities’ privileging processes, enforce the requirement that medical facilities submit information on paid VA medical malpractice claims to VA’s Office of Medical-Legal Affairs within 60 days after being notified that the claim is paid, and instruct medical facilities to establish internal controls to ensure the accuracy of their physician privileging information. Mr. Chairman, this concludes my prepared remarks. I will be pleased to answer any questions you or other members of the subcommittee may have. Contacts and Acknowledgments For further information regarding this testimony, please contact Laurie E. Ekstrand at (202) 512-7101 or ekstrandl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Mary Ann Curran, Martha Fisher, Krister Friday, and Marcia Mann also contributed to this statement. Appendix I: March 2004 Report Recommendations and VA Screening, Credentialing, and Privileging Requirements In our March 2004 report, VA Health Care: Improved Screening of Practitioners Would Reduce Risk to Veterans, we made four recommendations to address the gaps we identified in VA’s employment screening requirements and the noncompliance we found at the four medical facilities we visited. 
March 2004 Report Recommendations Expand verification of all state licenses and national certificates by contacting the appropriate licensing boards and national certifying organizations for all Department of Veterans Affairs’ (VA) health care practitioners. Expand query of the Healthcare Integrity and Protection Data Bank (HIPDB)—a national data bank that contains information on health care practitioners involved in health care-related civil judgments and criminal convictions or who have had disciplinary actions taken against their licenses or national certificates—to include all licensed health care practitioners at VA facilities. Conduct fingerprint-only background investigations for all VA health care practitioners with direct patient care access. Conduct oversight of medical facilities to ensure compliance with all of VA’s key screening requirements. VA Employment Screening Requirements for Practitioners Selected for Review To measure facility compliance with VA’s employment screening requirements, we selected five requirements for our review. We selected two of the five requirements because in our March 2004 report we found that VA facilities had problems complying with these two long-standing requirements. We selected two other requirements because VA implemented these since March 2004 to improve its employment screening of practitioners. The remaining requirement is long-standing and relates to the performance of background investigations, a requirement with which we found problematic compliance in 2004. Complete VA Form 2280, which medical facility officials must do in order to determine the appropriate type of background investigation needed for each health care practitioner job category. Perform a background investigation. Query HIPDB. Complete an employment checklist, which VA officials are to use to document the completion of VA screening requirements for those practitioners VA intends to hire. 
Verify the status of state licenses and national certificates. VA Physician Credentialing Requirements Selected for Review We selected four of VA’s credentialing requirements for review because they are requirements that—unlike other credentialing requirements—address information about physicians that can change or be updated with new information periodically. Verify that all state medical licenses held by physicians are valid. Query the Federation of State Medical Boards database to determine whether physicians had disciplinary action taken against any of their licenses, including expired licenses. Verify information provided by physicians on their involvement in medical malpractice claims at VA or non-VA facilities. Query the National Practitioner Data Bank to determine whether a physician was reported to this data bank because of involvement in VA or non-VA paid medical malpractice claims, display of professional incompetence, or engagement in professional misconduct. VA Physician Privileging Requirements Selected for Review We selected four privileging requirements that VA identifies as general privileging requirements. In addition to the four general privileging requirements, we selected a fifth privileging requirement because of its importance in the renewal of clinical privileges: it provides clinical service chiefs with information on the quality of care delivered by individual physicians. Verify that all state medical licenses held by physicians are valid. Verify physicians’ training and experience. Assess physicians’ clinical competence and health status. Consider any information provided by physicians related to medical malpractice allegations or paid claims, loss of medical staff membership, loss or reduction of clinical privileges at VA or non-VA facilities, or any challenges to physicians’ state medical licenses. Use information on physicians’ performances when making decisions about whether to renew physicians’ clinical privileges. 
In its March 2004 report, "VA Health Care: Improved Screening of Practitioners Would Reduce Risk to Veterans," GAO-04-566, GAO made recommendations to improve VA's employment screening of practitioners. GAO was asked to testify today on steps VA has taken to improve its employment screening requirements and VA's physician credentialing and privileging processes because of their importance to patient safety. This testimony is based on two GAO reports released today that determined the extent to which (1) VA has taken steps to improve employment screening for practitioners by implementing GAO's 2004 recommendations, (2) VA facilities are in compliance with selected credentialing and privileging requirements for physicians, and (3) VA has internal controls to help ensure the accuracy of privileging information. In its report released today, "VA Health Care: Steps Taken to Improve Practitioner Screening, but Facility Compliance with Screening Requirements Is Poor," GAO-06-544, GAO found that VA has taken steps to improve employment screening for practitioners, such as physicians, nurses, and pharmacists, by partially implementing each of four recommendations GAO made in March 2004. However, gaps still remain in VA's requirements. For example, for the recommendation that VA check all state licenses and national certificates held by all practitioners, such as nurses and pharmacists, VA implemented the recommendation for practitioners it intends to hire, but has not expanded this screening requirement to include those currently employed by VA. In addition, VA's implementation of another recommendation--to conduct oversight to help facilities comply with employment screening requirements--did not include all screening requirements, as recommended by GAO. 
In another report released today, "VA Health Care: Selected Credentialing Requirements at Seven Medical Facilities Met, but an Aspect of Privileging Process Needs Improvement," GAO-06-648, GAO found that the seven VA facilities it visited complied with almost all selected credentialing and privileging requirements for physicians. Credentialing is verifying that a physician's credentials are valid. Privileging is determining which health care services--clinical privileges--a physician is allowed to provide. Clinical privileges must be renewed at least every 2 years. One privileging requirement--to use information on a physician's performance in making privileging decisions--was problematic because officials used performance information when renewing clinical privileges, but collected all or most of this information through their facility's quality assurance program. This is prohibited under VA policy. Further, three of the seven facilities did not submit medical malpractice claim information to VA's Office of Medical-Legal Affairs within 60 days after being notified that a claim was paid, as required by VA. This office uses such information to determine whether VA practitioners have delivered substandard care and provides these determinations to facility officials. When VA medical facilities do not submit all relevant information in a timely manner, facility officials make privileging decisions without the advantage of such determinations. VA has not required its facilities to establish internal controls to help ensure that physician privileging information managed by medical staff specialists--employees who are responsible for obtaining and verifying information used in credentialing and privileging--is accurate. One facility GAO visited did not identify 106 physicians whose privileging processes had not been completed by facility officials for at least 2 years because of inaccurate information provided by the facility's medical staff specialist. 
As a result, these physicians were practicing at the facility without current clinical privileges.
Background NASA’s Commercial Crew Program is a multi-phased effort that began in 2010. Across the five phases, NASA has engaged several companies using both agreements and contract vehicles to develop and demonstrate crew transportation capabilities. As the program has passed through these phases, NASA has generally narrowed down the number of participants. The early phases of the program were conducted under Space Act agreements, which NASA enters into under its other transaction authority. These types of agreements are generally not subject to the Federal Acquisition Regulation (FAR) and allow the government and its contractors greater flexibility in many areas. Under these Space Act agreements, NASA relied on the commercial companies to propose specifics related to their crew transportation systems, including their design, the capabilities they would provide, and the level of private investment. In these phases, NASA provided technical support and determined if the contractors met certain technical milestones. In most cases, NASA also provided funding. For the final two phases of the program, NASA awarded FAR-based contracts. By using FAR-based contracts, NASA gained the ability to levy specific requirements on the contractors and procure missions to the ISS, while continuing to provide technical expertise and funding to the contractors. Under the contracts, NASA will also evaluate whether contractors have met its requirements and certify their final systems for use. Appendix II contains a description of each of the Commercial Crew Program’s phases, a list of the participants, and the level of funding provided to each participant by NASA. Program Requirements Before a company’s crew transportation system can be certified by NASA, it must meet two sets of requirements. The ISS program levies a set of 332 requirements that must be met by all visiting spacecraft, whether they are carrying cargo or crew to the station. 
There are three major areas outlined in the ISS requirements document: 1) interface requirements for both the ISS and the spacecraft; 2) performance requirements for ground systems supporting the spacecraft; and 3) design requirements for spacecraft to ensure safe integration with the ISS. The second set of requirements is levied by the Commercial Crew Program. These include 280 high-level requirements related to the design, manufacturing, testing, qualification, production, and operation of crew transportation systems that deliver NASA astronauts to the ISS. Contractors are responsible for developing the lower-level system specifications and system and subsystem designs that implement those requirements. For example, there is a Commercial Crew Program requirement for the contractor’s crew transportation system to provide a continuous autonomous launch abort capability, in which the spacecraft self-initiates the abort procedure independent from the supporting ground systems, from lift-off through orbital insertion with a 95 percent probability of success. The requirement does not specify how elements of the crew transportation system, such as its sensors, software, propulsion, or capsule, should be designed to fulfill it. Current Program Contracts In September 2014, NASA awarded firm-fixed-price contracts to Boeing and SpaceX, valued at up to $4.2 billion and $2.6 billion, respectively, for the Commercial Crew Transportation Capability phase. Under a firm-fixed-price contract, the contractor must perform a specified amount of work for the price negotiated by the contractor and government. This is in contrast to a cost-reimbursement contract, in which the government agrees to pay the contractor’s reasonable costs regardless of whether work is completed. 
During this phase, the contractors will complete development of crew transportation systems that meet NASA requirements, provide NASA with the evidence it needs to certify that those systems meet its requirements, and fly initial crewed missions to the ISS. Under the contracts, NASA and the companies originally planned to complete the certification review for each system by 2017. Figure 1 shows the spacecraft and launch vehicles for Boeing and SpaceX’s crew transportation systems. The Commercial Crew Transportation Capability phase contracts include three types of services: Contract Line Item 001 encompasses the firm-fixed-price design, development, test, and evaluation work needed to support NASA’s final certification of the contractor’s spacecraft, launch vehicle, and ground support systems. Contract Line Item 002 covers any service missions that NASA orders to transport astronauts to and from the ISS. Under this indefinite-delivery, indefinite-quantity line item, NASA must order at least two missions from each contractor, which it has done, and can order up to six missions. Each service mission is its own firm-fixed-price task order. NASA must certify the contractors’ systems before they can fly these missions. Contract Line Item 003 is an indefinite-delivery, indefinite-quantity line item for any special studies, tests, and analyses that NASA may request. These tasks do not include any work necessary to accomplish the requirements under contract line item 001 and 002. As of September 2016, NASA had issued one order under this contract line item to Boeing—an approximately $180,000 study of the spacecraft’s seat incline. The maximum value of this contract line item is $150 million. NASA divided the certification work under contract line item 001 into two acceptance events: the ISS design certification review and the certification review. 
An acceptance event occurs when NASA approves a contractor’s designs and acknowledges that the contractor’s work is complete and meets the requirements of the contract. The ISS design certification review verifies the contractor’s crew transportation system’s capability to safely approach, dock, mate, and depart from the ISS, among other requirements. After the contractor has successfully completed all its flight tests, as well as various other activities, the certification review determines whether the crew transportation system meets the Commercial Crew Program’s requirements. The contractors must complete both acceptance events to receive NASA certification. NASA and the contractors also identified discrete performance-based events, called interim milestones, which occur as the contractors progress towards the two acceptance events. Each interim milestone has predetermined entrance and exit criteria that establish the work that must be completed in order for the contractor to receive payment. The interim milestones serve several functions, allowing the government to finance work from development to completion, review the contractors’ progress, and provide approval to proceed with key demonstrations and tests. These milestones are also used by the program to inform its annual budget request. Since the contracts were awarded, the Commercial Crew Program and the contractors have agreed to split several of the interim milestones. The contractors have also added new milestones, in part to capture changes in their development plans. NASA has also made changes to the contracts that have increased their value. While the contracts are fixed-price, their values can increase if NASA adds to the scope of the work or otherwise changes requirements. 
As of October 2016, NASA had increased the value of contract line item 001 for Boeing by $47 million for hardware and software requirement changes, and contract line item 001 for SpaceX by $91 million for a hardware requirement change and the addition of cargo during an ISS test flight. Program Oversight NASA has tailored its management approach for the Commercial Crew Program because the contractors, not the government, are responsible for the design, development, test, and evaluation of their crew transportation systems and make their own decisions about when to build, integrate, and test hardware. For example, NASA policy outlines key decision point reviews, which allow NASA management to determine if the program is ready to progress to the next phase. NASA management relies on annual program reviews, instead of key decision point reviews, to provide oversight of the Commercial Crew Program. According to NASA management officials, they chose to do so because the contractors’ development approaches and schedules did not align with NASA’s typical key decision points. Other elements of the reviews are similar though. For example, NASA’s first annual review of the program included updates on overall program risk, an update on the program’s cost and schedule risks, and perspectives from the independent review board, much like a key decision point review. The firm-fixed-price nature of the current contracts also led NASA to alter its typical management approach. For example, the NASA Associate Administrator told us that he decided not to require an agency baseline commitment—cost or schedule baseline—for the Commercial Crew Program, in part, because the firm-fixed-price contracts with each contractor essentially serve as the cost baselines for the program. NASA also provides regular updates to Congress on the Commercial Crew Program through a variety of reporting mechanisms. 
NASA includes information on all of its major projects, including Commercial Crew, in its annual budget submission, although the type and specificity of the information can differ. For example, table 1 shows the key differences in the types of information NASA provided on two human space flight projects—the Commercial Crew Program and the Orion Crew Vehicle development—in its fiscal year 2017 budget request. In addition to its budget, NASA submits quarterly reports to the appropriations committees on the Commercial Crew Program’s progress that include cost and schedule updates from each contractor and information on contractor-specific risks, including procurement-sensitive material that is not publicly releasable. Neither Contractor Expects to Achieve Certification before 2018 and NASA Has Not Yet Determined How It Will Ensure ISS Access in Case of Further Delays Since September 2014, both Boeing and SpaceX have made progress developing their crew transportation systems, but neither contractor will be able to meet their original 2017 certification dates and both expect certification to be delayed until 2018. As the contractors have fallen behind, the time between key test events has decreased, which reduces the time the contractors have to learn and implement changes, increasing the likelihood of additional delays. The schedule pressures are amplified by NASA’s need to provide a viable crew transportation option to the ISS before its current contract with Russia’s space agency runs out in 2019. NASA has not yet developed a contingency plan to ensure an uninterrupted presence on the ISS should the Commercial Crew Program experience further delays, although the ISS program has begun to discuss potential options. Both Contractors Have Begun Manufacturing Spacecraft and Modifying Launch Facilities Both Boeing and SpaceX have made progress finalizing their designs and building hardware as they work towards final certification of their crew transportation systems. 
Each contractor’s system includes a spacecraft and a launch vehicle with supporting ground systems. Examples of the contractors’ development progress since September 2014 include the following: Boeing: In 2015, Boeing completed its critical design review, which determines whether a system’s design is stable enough to proceed with final design and fabrication, and began fabricating the first of its three planned spacecraft. This spacecraft will be tested to validate the design, including in a planned January 2018 pad abort test, in which the capsule’s abort system is tested from a static position on the launch pad. Further, in May 2016, Boeing shipped the structural test article for its service module to its testing facility and shipped the crew module in January 2017. These modules will be used to conduct tests that simulate different operating environments, such as the vibrations during a launch ascent, on key spacecraft components. Boeing and its launch vehicle provider, United Launch Alliance (ULA), have also made significant modifications to their launch facilities at the Cape Canaveral Air Force Station, including installing a crew access tower and crew access arm, which the crew will use to board the crew capsule prior to launch. SpaceX: SpaceX completed its critical design review in 2016 and has begun assembling and integrating components of the two spacecraft it will use for its uncrewed and crewed flight tests, which are scheduled for November 2017 and May 2018, respectively. SpaceX officials told us they completed integrated testing and qualification of several major spacecraft components, including its environmental control and life support system—which controls air quality and temperature, among other functions, to ensure crew survivability—and its spacesuit in November 2016. 
SpaceX also made significant modifications to its launch facilities at the Kennedy Space Center, including constructing a new hangar and an upgraded flame trench, which is critical to safely contain the plume exhaust from rocket launches, and substantially upgrading the existing crew access tower, which was last used during the Space Shuttle era. Both Contractors Have Delayed Certification and Reduced Time between Key Test Events Neither Boeing nor SpaceX will be able to meet their original 2017 certification dates and both now expect certification to be delayed until 2018. Since the award of the current Commercial Crew contracts, the program, Boeing, and SpaceX have all identified the contractors’ delivery schedules as aggressive. Program officials told us that, from the outset, they knew delays were likely due to the development nature of the program. Multiple independent review bodies—including the program’s standing review board, the Aerospace Safety Advisory Panel, and the NASA Advisory Council-Human Exploration and Operations committee— also noted the aggressiveness of the contractors’ schedules as they move toward certification. Both contractors have notified NASA that they would not be able to meet the 2017 certification dates originally established in their contracts. These notices required the parties to renegotiate their contracts to reflect the contractors’ delays. As of October 2016, both contractors have submitted, and NASA has agreed to, updated schedules that reflect delays to their final certification review dates—Boeing by 5 months, from August 2017 until January 2018, and SpaceX by 6 months, from April 2017 until October 2017. Boeing and SpaceX’s internal schedules both show additional certification delays that, as of November 2016, were not yet reflected in their contracts. 
As the contractors have made changes to their development schedules, they have also reduced the number of months between critical test events leading up to final certification, which reduces the time the contractors have to learn, make any needed design changes, and implement those changes, increasing the likelihood of additional delays. Figures 2 and 3 below show the total proposed certification delay and current proposed schedule for each contractor. Since Boeing and the Commercial Crew Program agreed to move Boeing’s certification review from August 2017 to January 2018, Boeing has proposed moving the review out by at least 9 additional months to the fourth quarter of 2018. The current proposed schedule is the fourth significant schedule change by Boeing since the contract was awarded. The most recent schedule update includes delays related to a manufacturing error. In September 2016, a subcontractor damaged a major component of Boeing’s second spacecraft during machining to trim mass from the spacecraft. This is expected to delay delivery of the component by 2-3 months, which is reflected in the current proposed schedule. The second spacecraft is important because, according to Boeing’s program schedule, it will be used to support a significant portion of Boeing’s planned testing as well as its crewed flight test. Boeing also reduced the time between some key testing events that will be used to provide data necessary for Boeing to demonstrate its ability to meet NASA’s requirements. In the original schedule, Boeing allocated 3 months between its uncrewed flight test and its crewed flight test, but under its current proposed schedule, there are only 2 months between these critical test events. 
Although these events are not formal milestones in Boeing’s contract with NASA, they are critical learning events, and we have previously found that reducing the time between key test events limits time for the contractor to learn and adapt to any changes that may be required. Boeing is also tracking a risk that the schedule could be further delayed because there is little time between test events. Since SpaceX and the Commercial Crew Program agreed to move its certification review from April 2017 to October 2017, SpaceX has proposed moving the review out by at least 9 additional months to the third quarter of 2018. SpaceX officials stated that these delays are largely driven by development challenges, changes in NASA requirements, and implementation of corrective actions stemming from a launch vehicle mishap in 2016. On September 1, 2016, SpaceX experienced an anomaly during a standard, pre-launch static fire test of its Falcon 9 launch vehicle that resulted in an explosion and the loss of the vehicle. The mishap investigation is complete and SpaceX returned to flight in January 2017 with a commercial launch. However, additional schedule changes are possible. SpaceX officials told us that they continue to assess the effect of the mishap on their Commercial Crew schedule. Further, as SpaceX has made schedule changes, it has also reduced the time between its uncrewed flight test and crewed flight test from 7 months to 6 months. The Commercial Crew Program is tracking risks that both contractors could experience additional schedule delays and its own analysis indicates that certification is likely to slip into 2019. One of the program’s top six programmatic risks for each contractor is the likelihood of additional schedule delays. Each month, the program updates its schedule risk analysis, based on the contractors’ internal schedules as well as the program’s perspectives and insight into specific technical risks. 
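Schedule risk analyses of this kind are commonly implemented as Monte Carlo simulations over the durations of the remaining tasks. The following is a minimal, illustrative sketch, not the program’s actual model; the task names and duration estimates are hypothetical:

```python
import random

# Illustrative Monte Carlo schedule risk analysis -- a sketch, not the
# Commercial Crew Program's actual model. Each remaining task is given a
# triangular duration distribution (optimistic, most likely, pessimistic),
# in months; the task names and estimates below are hypothetical.
TASKS = [
    ("qualification testing", 3.0, 5.0, 9.0),
    ("uncrewed flight test", 1.0, 2.0, 4.0),
    ("crewed flight test", 2.0, 3.0, 6.0),
    ("certification review", 1.0, 2.0, 3.0),
]

def simulate_completion(tasks, trials=10_000, seed=1):
    """Return sorted samples of total duration (months) for a serial schedule."""
    rng = random.Random(seed)
    return sorted(
        sum(rng.triangular(lo, hi, mode) for _, lo, mode, hi in tasks)
        for _ in range(trials)
    )

def percentile(sorted_samples, p):
    """p-th percentile of pre-sorted samples (nearest-rank method)."""
    idx = min(len(sorted_samples) - 1, int(p / 100 * len(sorted_samples)))
    return sorted_samples[idx]

samples = simulate_completion(TASKS)
# An oversight body might compare, say, the 70th-percentile completion
# estimate against the contractually agreed certification date.
print(f"50th percentile: {percentile(samples, 50):.1f} months")
print(f"70th percentile: {percentile(samples, 70):.1f} months")
```

A percentile of the simulated completion distribution beyond the contracted date is one way such an analysis can indicate, as the program’s did, that certification is likely to slip.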
As of October 2016, the program’s schedule risk analysis indicated that both contractors’ certification dates would likely slip into early 2019. The mounting schedule pressure on the Commercial Crew Program from the contractors’ delays is amplified by NASA’s need to provide a viable crew transportation option to the ISS before its current contract with Roscosmos, the Russian Federal Space Agency, runs out. The United States has spent tens of billions of dollars to develop, assemble, and operate the ISS over the past two decades, and NASA relies on uninterrupted crew access to help maintain and operate the station itself and conduct the research required to enable human exploration in deep space and eventually Mars, among other science and research goals. In 2015, the United States modified its contract with Roscosmos to provide crew transportation to the ISS for six astronauts through 2018 with rescue and return through late spring 2019. The contract extension was valued at $491 million or approximately $82 million per seat. NASA’s contract with Roscosmos permits it to delay the use of the final seat by up to 6 months to late spring 2019, with a return flight approximately 6 months later. NASA has not yet developed a contingency plan to ensure an uninterrupted presence on the ISS should the Commercial Crew Program experience further delays. The ISS program office stated there are other options it could consider to ensure uninterrupted access to the space station if neither contractor can provide crew transportation capabilities by the end of 2018; however, it may already be too late to pursue one of these options. If NASA determined it needed to purchase additional Soyuz seats, the process for contracting for those seats typically takes 3 years. In order to avoid a potential crew transportation gap in 2019, the contracting process would have needed to start in early 2016. 
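The per-seat cost and the contracting lead time cited above follow from simple arithmetic; as a check (all values are taken from the report text):

```python
# Check of the figures cited above; the dollar values and the 3-year
# lead time are taken from the report text.
contract_value_m = 491      # 2015 Soyuz contract extension, $ millions
seats = 6                   # astronauts covered through 2018
per_seat_m = contract_value_m / seats
print(f"~${per_seat_m:.0f} million per seat")  # approximately $82 million

# With a typical 3-year contracting lead time, seats needed to cover a
# potential gap in 2019 would have had to be under negotiation by 2016.
gap_year = 2019
lead_time_years = 3
start_year = gap_year - lead_time_years
print(f"contracting would have needed to start in {start_year}")
```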
In September 2016, senior NASA officials told us that they do not currently plan to purchase additional seats from Russia. The ISS program office stated that NASA could construct another “year in space” mission, similar to the mission undertaken by NASA astronaut Scott Kelly. According to the ISS program office, NASA and Roscosmos are discussing this option; however, there is no agreement in place. Under this scenario, an astronaut could begin a mission near the end of calendar year 2018 using one of the final Soyuz seats and would return to Earth via one of the Commercial Crew Program contractors’ crew transportation systems approximately one year later in 2019. Finally, the NASA officials reported that NASA could consider negotiating the acquisition of “stand by” Soyuz seats, but the availability of those seats is dependent on Russia’s plans for staffing the station. If NASA does not develop a viable contingency plan for ensuring access to the ISS in the event of further Commercial Crew delays, it risks not being able to maximize the return on its multibillion dollar investment in the space station. Programmatic and Safety Risks Also Pose Challenges In addition to Boeing and SpaceX’s schedule challenges, both contractors face other risks that will need to be addressed to support their certification. This includes the contractors’ ability to meet the agency’s requirements related to the safety of their systems. These risks are not unusual; there are inherent technical, design, and integration risks in all NASA’s major acquisitions, as these projects are highly complex and specialized and often push the state of the art in space technology. The Commercial Crew Program monitors risks through two lenses: programmatic risks potentially affect the program’s cost and schedule or the performance of the crew transportation system, and safety risks could elevate the potential for the loss of crew. 
The contractors maintain their own risk management systems and do not always view their risks in the same way as the program. Program’s Top Risks for Boeing The Commercial Crew Program’s top programmatic and safety risks for Boeing are, in part, related to having adequate information on certain systems to support certification. For example, the Commercial Crew Program is tracking a risk about having the data it needs to certify Boeing’s launch vehicle, ULA’s Atlas V, for manned spaceflight. The Atlas V’s first stage is powered by the Russian-built, ULA-procured RD-180 engine, which has previously been certified to launch national security and science spacecraft but not humans. ULA and Commercial Crew Program officials have been working to get access to data about the engine design, so that they can verify and validate that it meets the program’s human certification requirements. The program and Boeing report that access to the data is highly restricted by agreements between the U.S. and Russian governments. As an alternative, the program has stated that it is considering whether to certify the engine based on available data, but program officials believe doing so would be a high risk for the program. Boeing officials told us that they do not view this as a safety risk because NASA will not certify the engines without reviewing the data it needs. The program is also tracking a risk about having adequate information on the parachute system. In March 2016, Boeing modified its previously approved parachute test plan by replacing six drop tests, which simulate select forces—for example, mass—on the parachute system, with one full-scale test event, which simulates all aspects of a parachute system. Through discussions with the program, Boeing has increased the number of full-scale test events to five, with an option for two additional tests if deemed necessary. 
The program is in the process of reviewing the new test plan to determine if it will generate enough data for the program to evaluate the system. Regardless of whether the program approves Boeing’s new parachute test plan, program officials told us that they plan to gather additional data on the performance and reliability of both contractors’ parachute systems. NASA has several contractual options available to mitigate this risk, if needed. For example, NASA could choose to add additional analyses or parachute tests to the contract. Program’s Top Risks for SpaceX The Commercial Crew Program’s top programmatic and safety risks for SpaceX are, in part, related to ongoing launch vehicle design and development efforts. Prior to SpaceX’s September 2016 loss of a Falcon 9 during pre-launch operations, the program was tracking several risks related to SpaceX’s launch vehicle. SpaceX has identified five major block upgrades to its Falcon 9 launch vehicle. SpaceX officials told us that they have flown the first three block upgrades and are on track to implement the fourth and fifth block upgrades in 2017. Among other things, the updated design includes upgrades to the engines and avionics. The program is tracking a risk that there may not be enough time for SpaceX to implement these changes and get them approved prior to the first uncrewed flight test in November 2017. This test flight is a key activity to demonstrate how SpaceX’s system meets the program’s requirements. SpaceX needs to have a stable design to support certification. In addition to planned design changes, there could be unplanned design changes for the Falcon 9. During qualification testing in 2015, SpaceX identified cracks in the turbines of its engine. Additional cracks were later identified. Program officials told us that they have informed SpaceX that the cracks are an unacceptable risk for human spaceflight. 
SpaceX officials told us that they are working closely with NASA to eliminate these cracks in order to meet NASA’s stringent targets for human rating. Specifically, SpaceX has made design changes that, according to its officials, did not result in any cracking during initial life testing. Finally, both the program and a NASA advisory group consider SpaceX’s plan to fuel the launch vehicle after the astronauts are on board the spacecraft to be a potential safety risk. SpaceX’s perspective is that this operation may be a lower risk to the crew; NASA and SpaceX’s risk evaluation is ongoing. NASA and SpaceX may also need to re-examine SpaceX’s safety controls related to the fueling process if the investigation of the September 2016 Falcon 9 mishap identifies issues with the fueling of the vehicle. At the time of our review, SpaceX also had other elements in its design that had not yet been completed and reviewed. SpaceX requested, and the program approved, proposals to split its critical design review into three reviews because portions of its design had not been ready at previous reviews. The critical design review is the time in a project’s life cycle when the integrity of the product’s design and its ability to meet mission requirements are assessed, and it is important that a project’s design is stable enough to warrant continuation with design and fabrication. A stable design can minimize changes prior to fabrication, which can help avoid costly re-engineering and rework effort due to design changes. SpaceX’s final planned design review was held in August 2016; however, the program reported that a number of outstanding areas, primarily related to ground systems, still needed to be reviewed. SpaceX officials told us these areas were reviewed in November 2016. 
Further, according to SpaceX, the separate reviews allowed it to review designs that were completed earlier than anticipated, allowed SpaceX and NASA teams to focus in greater detail on certain systems, and accommodated design updates driven in part by changes to NASA requirements. Program and Its Contractors Could Have Difficulty Meeting Safety Requirements The Commercial Crew Program has identified its own and its contractors’ ability to meet crew safety requirements as one of its top risks. NASA established the “loss of crew” metric as a way to measure the safety of a crew transportation system. The metric captures the probability of death or permanent disability to one or more crew members. Under each contract, the current loss of crew requirement is 1 in 270, meaning that the contractors’ systems must carry no more than a 1 in 270 probability of incurring loss of crew. Near the end of the Space Shuttle program, the probability of loss of crew was approximately 1 in 90. Program officials told us that Commercial Crew is the first NASA program that the agency will evaluate against a probabilistic loss of crew requirement. They said that if the contractors cannot meet the agency loss of crew requirement at 1 in 270, NASA could still certify their systems by employing operational alternatives. Doing so, however, would entail accepting a potentially increased level of risk, or greater uncertainty about the level of risk, for the crew. Program officials told us their main focus is to work with the contractors to ensure that the spacecraft designs are robust from a safety perspective. The loss of crew metric and the associated models used to measure it are tools that help achieve that goal. For example, Boeing told us that in early 2016, it needed to identify ways to reduce the mass of its spacecraft. 
As Boeing found opportunities to reduce the spacecraft mass, the program stated that it also had to consider how implementing those design changes would affect its loss of crew analysis. According to the program, it is working with both contractors to address the factors that drive loss of crew risk through design changes or additional testing to gain more information on the performance and reliability of systems. The Commercial Crew Program is tracking three main crew safety risks. First, the contractors’ computer models may not accurately predict the loss of crew. These models are a weighted treatment of scenarios, likelihoods, and consequences throughout the flight, and are continually being updated by the contractors. According to program officials, they have been working closely with the contractors to improve their loss of crew models. For example, the program identified risk factors, such as bird strikes, that were not included in the contractors’ models and worked to update them. Both contractors told us they have confidence in their models to accurately predict the loss of crew risk associated with their spacecraft. Second, the contractors’ spacecraft may not be able to tolerate the micrometeoroid and orbital debris environment, which is the most significant driver of the loss of crew metric, according to the program’s analysis. Both contractors have lowered this risk through testing, which provides insight into how well their systems perform in these environments, and by making design changes. If the contractors have to make future design changes to improve their spacecraft’s performance in the debris environment, certification could be delayed significantly. Finally, if the contractors cannot meet the loss of crew requirement, there are several actions the Commercial Crew Program could take to help meet it; but, according to the program, these actions may not be enough to completely close the gap. 
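One property of a per-mission probabilistic requirement such as 1 in 270 is that risk compounds across a flight campaign. A minimal sketch of that arithmetic (the per-mission figures are those cited above; the mission counts are hypothetical):

```python
# Illustration of the probabilistic "loss of crew" metric described above.
# The per-mission requirement (1 in 270) and the Shuttle-era figure
# (about 1 in 90) are from the report; the mission counts are hypothetical.
def cumulative_loss_probability(per_mission_risk, missions):
    """Probability of at least one loss-of-crew event over N independent missions."""
    return 1 - (1 - per_mission_risk) ** missions

requirement = 1 / 270
shuttle_era = 1 / 90
for n in (1, 10, 20):
    print(f"{n:>2} missions: at 1-in-270, {cumulative_loss_probability(requirement, n):.3f}; "
          f"at 1-in-90, {cumulative_loss_probability(shuttle_era, n):.3f}")
```

For small per-mission risks, the campaign-level probability grows nearly linearly with the number of missions, which is one reason small changes in the per-mission figure matter to the program.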
The program has reported it is exploring options, such as the use of ISS cameras to conduct on-orbit inspections of the spacecraft. Crew Program Has Visibility into Contractor Progress, but Maintaining Its Current Level of Visibility through Certification Could Add to Schedule Pressures The Commercial Crew Program is using contractually-defined mechanisms to gain a high level of visibility into the contractors’ crew transportation systems to achieve its goal of obtaining safe, reliable, and cost-effective access to and from the ISS. The program has developed productive working relationships with both contractors, but the level of visibility that the program has required thus far to assess the contractors’ systems and ensure their safety has taken more time than the program or contractors anticipated. The early upfront investment in time may ultimately make the certification process go smoother, but the program office could face difficult choices as the program progresses about how to maintain the level of visibility into contractor efforts it feels it needs without adding to the program’s schedule pressures. Further, the program faces potential workload challenges as it works to complete upcoming oversight activities, while completing others that were already behind schedule. Crew Program Uses Contractually-Defined Mechanisms to Gain Visibility into Contractor Progress The Commercial Crew Program included mechanisms in its firm-fixed-price contracts that it believed would enable it to gain the visibility into the contractors’ technical efforts it needs to support a final certification decision. Contracting officials told us that the program included these mechanisms because it was concerned that the typical fixed-price inspection clauses may not give the program the level of visibility it felt it would need in order to certify the contractors’ systems for human spaceflight. 
Using a firm-fixed-price contract for a human spaceflight development program represents a different way of doing business for NASA, which has typically used cost-reimbursement contracts for these efforts due to their complexity and the risks involved. Cost-reimbursement contracts require the government to maintain extensive visibility into a contractor’s technical progress and financial performance. The visibility that NASA receives under its firm-fixed-price contracts with Boeing and SpaceX is similar to that of cost-reimbursement type contracts, with the exception that NASA does not receive any of the contractors’ business-related cost and performance data such as earned value management. In order to gain the needed visibility into the contractors’ efforts, the Commercial Crew Program used a mix of standard and tailored contract clauses. The program began by leveraging NASA’s Federal Acquisition Regulation Supplement, which contemplates NASA conducting inspections and other quality assurance requirements through “insight” and “oversight.” In the supplement, insight is defined as the monitoring of contractor quality data and government-identified metrics and contract milestones, and any review of contractor work procedures and records; and oversight is defined as the government’s right to concur or non-concur with the contractor’s decisions affecting product conformity, and non-concurrence must be resolved before the contractor can proceed. The Commercial Crew contracting officer explained that the program built upon what is in the NASA Federal Acquisition Regulation Supplement by including an additional “insight clause” in the contracts to set clear expectations that the program intended to obtain extensive information about how the contractors were meeting contract requirements. 
For example, the Commercial Crew contracting officer told us that one such expectation is that the contractors will provide the program access to virtually all data produced under or relevant to the contract, including subcontractor data. The insight clause includes three mechanisms for the program to gain visibility into what progress the contractors are making and how the contractors are performing work. The clause also requires each contractor to develop an insight implementation plan for how it will provide access and data to NASA and facilitate the program’s visibility into its crew transportation systems. Table 2 describes the three mechanisms and provides examples of how NASA has used them to gain visibility into the contractors’ progress. The visibility that the Commercial Crew Program gains through the use of these contract mechanisms is designed to assist in its oversight and final certification of the contractors’ crew transportation systems, as shown in figure 4. The Commercial Crew Program and its contractors have made progress working together to ensure the program obtains the level of visibility that it feels it needs to achieve certification. All three organizations said that their relationships and communication have improved over the course of the contract, even as they have addressed difficult issues. For example, program officials said that Boeing is doing a much better job of communicating NASA’s safety requirements to ULA than it did during previous contract phases, and that ULA has made great strides embracing a crew safety culture as demonstrated through its efforts to understand the program’s crewed flight requirements. We also heard a strong consensus across several independent bodies and the contractors that the Commercial Crew program manager is providing critical leadership to the program to help the NASA workforce operate effectively in the firm-fixed-price contract environment. 
For example, Aerospace Safety Advisory Panel officials told us that, while they originally had concerns about how the firm-fixed-price environment would work for a human spaceflight development, they are confident in the progress of the program and its future success because of the program manager’s leadership and transparency. Sustaining Program’s Level of Visibility Might Be Difficult as Schedule Pressure Builds The Commercial Crew Program has developed productive working relationships with both contractors, but the level of visibility that the program has required thus far has also taken more time than the program or contractors anticipated. The program’s standing review board has stated that the contract is structured to allow NASA unprecedented levels of visibility, but that it was intended to be used primarily for high-risk areas. However, the standing review board found, and both contractors told us, that the program has requested high levels of visibility on most items and there are signs that the contractors’ patience is waning. Both contractors expressed concerns that the program requests more interaction and data than they originally anticipated at the time of the contract award. For example, Boeing and SpaceX officials told us that the program often requests additional in-person engagement with their engineers, such as repeat presentations to multiple boards on the same technical issue, and has also asked for the same data in multiple formats or from multiple stakeholders. Program officials told us that they are constantly working to find a balance between obtaining the visibility they need to be able to eventually certify the crew transportation systems for human spaceflight while giving the contractors room to independently work through issues for their systems. 
As the Commercial Crew Program progresses, the program office could also face difficult choices about how to maintain the level of visibility into contractor efforts it feels it needs without adding to the program’s schedule pressures. Independent review bodies, including the standing review board, the Aerospace Safety Advisory Panel, and the NASA Advisory Council-Human Exploration and Operations committee, expressed concern that the program may not have the capacity to sustain the level of visibility it has had to date and still meet the current certification schedule. The early upfront investment in time may ultimately make the certification process go smoother, but finding the right balance of visibility, and recognizing that a high level of visibility takes time and may impact the schedule, will be especially important as the contractors approach final certification when the government will need to determine if the systems are safe enough for human spaceflight. Program Office Workload Is an Emerging Schedule Risk Program officials told us that one of their greatest upcoming challenges will be to keep pace with the contractors’ schedules so that the program does not delay certification. Specifically, they told us they are concerned about an upcoming “bow wave” of work because the program must complete two oversight activities—phased safety reviews and verification closure notices—concurrently in order to support the contractors’ ISS design certification reviews, uncrewed and crewed flight test missions, and final certification. The Commercial Crew Program is working to complete its three-phased safety review, which will ensure that the contractors have identified all safety-critical hazards and implemented associated controls, but it is behind schedule. Both the contractors and the program have contributed to these delays. 
In phase one, Boeing and SpaceX identified risks in their designs and developed reports on potential hazards, the controls they put in place to mitigate them, and explanations for how the controls will mitigate the hazards. In phase two, which is ongoing, the program reviews and approves the contractors’ hazard reports, and develops strategies to verify and validate that the controls are effective. In phase three, the contractors will conduct the verification activities and incrementally close the reports. The Commercial Crew Program’s review and approval of the contractors’ hazard reports have taken longer than planned. The program originally planned to complete phase two in early 2016 but currently does not expect to complete this phase until June 2017. The Commercial Crew Program has a goal of reviewing hazard reports within 8 weeks of receiving them, but a recent report by the NASA Office of Inspector General found that the reviews are taking longer than anticipated and a backlog has developed. In response to the Inspector General’s report, NASA officials noted that, while the timeliness of the hazard review process is important, what is more important is having thorough, detailed hazard reports in order to understand safety risks and ensure the safety of each system. As of October 2016, the Commercial Crew Program had approved 117 of the anticipated 195 hazard reports and planned to approve approximately half of the remaining reports for both contractors by the end of 2016. Program officials told us that the hazard reports that are still open are related to items that they would not expect to be closed because they involve some of the more complicated design work that the contractors have not yet finalized. Program officials also pointed out other ways that the contractors have contributed to phase two delays, including submitting incomplete hazard reports that required several iterations before the program could begin its formal review. 
The Commercial Crew Program’s verification closure notice process, which is used to verify that the contractors have met all requirements, is one of the other key oversight activities and potential workload challenges for the program. The program is completing that process concurrently with the phased safety reviews. The verification closure process is initiated by the contractor when it provides the program with data and evidence to substantiate that it has met each requirement, and is completed when the program has reviewed and approved the contractor’s evidence to verify that each requirement has been met. A verification closure notice is required for each of the 332 ISS requirements, which are applicable to anyone who docks with the station, and the 280 Commercial Crew Program requirements. The Commercial Crew Program must also approve a subset of verification closure notices before key tests or milestones can occur. For example, the ISS requirements must be met before SpaceX and Boeing’s uncrewed flights to the ISS, which are currently planned for November 2017 and June 2018, respectively. The program’s ability to smooth its workload is limited because the contractors generally control their development schedules. Proposed changes to the Boeing and SpaceX schedules could help alleviate some of the concurrency between the program’s phased safety reviews and verification closure process.

Conclusions

NASA’s Commercial Crew Program is a multibillion dollar effort intended to end the United States’ reliance on Russia to maintain an uninterrupted presence on the ISS. To do so, NASA embarked on a different way of doing business. It awarded firm-fixed-price contracts to Boeing and SpaceX that transferred the financial risks to the contractors while giving them freedom to develop unique transportation systems that meet NASA’s standards. An independent review group initially raised concerns about how well this model would work for a human spaceflight program.
Parts of the new business model have worked relatively well and NASA has had the visibility it needs into the technical details and risks of each system. Gaining this level of visibility has taken more time than the program or contractors anticipated, but the early upfront investment in time should ultimately make the certification process go more smoothly. In addition, while the Commercial Crew Program should be mindful about placing undue burdens on its contractors, it ultimately has the responsibility for ensuring the safety of U.S. astronauts, and its contracts with Boeing and SpaceX give it deference to determine the level of insight required to do so. When the current phase of the Commercial Crew Program began, there was widespread acknowledgment that the contractors’ development and certification schedules were aggressive, and the anticipated schedule risks have now materialized. Both contractors’ certification dates have been delayed into 2018, and the program’s analysis indicates that neither contractor is likely to be ready until 2019. NASA has purchased seats for U.S. astronauts on the Russian Soyuz vehicle so that it can maintain an uninterrupted presence on the ISS. But those seats will run out in spring 2019, at the latest, and it generally takes 3 years for NASA and Russia to negotiate the purchase of additional seats. If delays on the Commercial Crew Program persist, having contingency plans, including options to expedite the purchase of additional Soyuz seats, will become increasingly important to ensure an uninterrupted U.S. presence on the ISS. Without a viable contingency plan that has been communicated to Congress, NASA puts at risk its ability to maximize the utility of the ISS and the return on the multibillion dollar investment it has made in the space station.
Recommendation for Executive Action

In order to ensure that the United States has continued access to the ISS if the Commercial Crew Program’s contractors experience additional schedule delays, we recommend the NASA Administrator develop a contingency plan for maintaining a presence on the ISS beyond 2018, including options to purchase additional Russian Soyuz seats, and report to Congress on the results.

Agency Comments and Our Evaluation

We provided a draft of this report to NASA for comment. In written comments, NASA agreed with our recommendation and intends to develop a contingency plan by mid-March 2017. These comments are reproduced in appendix III. NASA also provided technical comments, which have been addressed in the report, as appropriate. We are sending copies of the report to the NASA Administrator and interested congressional committees. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

The objectives of our review were to assess (1) the extent to which the contractors have made progress towards certification and the potential effects of any certification delays on the National Aeronautics and Space Administration’s (NASA) access to the International Space Station (ISS); (2) the major programmatic and safety risks facing the program and the contractors; and (3) the extent to which the program has visibility into the contractors’ efforts.
To assess the contractors’ progress towards certification, we obtained and reviewed program and contractor documents, including monthly and quarterly updates as well as monthly schedule summaries, from March 2016 through December 2016. We interviewed program and contractor officials to discuss the contractors’ recent progress as well as their upcoming events and any expected delays. To identify total delays to date, we compared original contract schedules to Boeing’s October 2016 working schedule and SpaceX’s November 2016 working schedule, which identify their most recent proposed delays to some milestones. To identify whether the contractors have introduced compression into their schedules, we judgmentally selected four key events and analyzed the movement of these key events relative to each other. Based on our review of program and contractor documents, we selected the contract’s acceptance events—the ISS design certification review and final certification review—as well as the uncrewed and crewed flight tests to conduct this analysis. We selected the acceptance events, as these occur when NASA approves a contractor’s designs and acknowledges that the contractor’s work is complete and meets the requirements of the contract. We selected the two flight tests for each contractor, as they are intended to test key system capabilities including the ability to launch, dock with the ISS, and return safely to Earth. Finally, to assess the potential effects of any certification delays on NASA’s access to ISS, we reviewed information from NASA’s budget, which includes the planned ISS launch manifest, and its contract with the Russian Federal Space Agency for transportation on the Soyuz vehicle. We also obtained information from the ISS program and the NASA Associate Administrator to determine if the agency had developed contingency plans to mitigate the effects of any certification delays on its access to the ISS. 
To assess the programmatic and safety risks for the Commercial Crew Program and its contractors, we obtained and reviewed monthly and quarterly reports, as well as the risks tracked in both the program’s and the contractors’ risk management systems, from March 2016 through December 2016. We interviewed program and contractor officials with knowledge of the technical risks to understand the risks and potential impacts and how they are planning to mitigate those risks. To assess how the program obtains visibility into its contractors’ systems and efforts, we reviewed the program’s contracts with its two contractors. In particular, we reviewed the contract “insight” clause, which outlines three mechanisms available to the program for obtaining visibility. We interviewed program officials in order to understand the three mechanisms and how they are used to gain visibility. We also interviewed officials from both contractors to gain their perspectives on the insight mechanisms. We also analyzed how NASA obtained visibility into contractor efforts at key events. We defined key events as those that are necessary for final certification. We selected three key events to examine for each contractor. We chose one historical event, so we could assess how the program had gained visibility over an entire event, and one upcoming event in both 2016 and 2017, in order to understand how the program obtains visibility in advance of an event. For each event, we obtained detailed information on how NASA obtained visibility and analyzed supporting documentation for these events to corroborate the information we obtained from NASA officials. In this report, we used these events as examples when describing how the program uses its three insight mechanisms and when describing the program’s relationships with its contractors. 
We also interviewed the program and contracting officials to obtain their perspectives on the level of visibility that the program has received into the contractors’ systems thus far. Finally, we analyzed program documentation and interviewed program officials to understand the level of work that NASA needs to complete leading up to certification. We assessed the timing of NASA’s reviews and spoke with program officials to determine if the program’s workload could be affected by the contractors’ schedules. For all three objectives, we met with officials from all three organizations that provide NASA with independent assessments of the program: the program’s standing review board, the NASA Advisory Council-Human Exploration and Operations subcommittee, and the Aerospace Safety Advisory Panel. We spoke with these three groups to gain their perspectives on the program’s oversight of each contractor’s technical risks and schedules as well as the program’s level of visibility into the contractors’ systems. We conducted this performance audit from March 2016 to February 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Commercial Crew Program Phases and Participants

The Commercial Crew Program is a multi-phased effort that began in 2010. Table 3 describes each phase, its purpose, the participants, and the level of funding provided by the National Aeronautics and Space Administration.
Appendix III: Comments from the National Aeronautics and Space Administration

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Molly Traci, Assistant Director; Ron Schwenn, Assistant Director; Carly Gerbig; Laura Greifner; Kurt Gurka; Michael Kaeser; Katherine Lenane; Roxanna T. Sun; and Kristin Van Wychen made significant contributions to this report.
Since the Space Shuttle was retired in 2011, the United States has been relying on Russia to transport astronauts to and from the ISS. The purpose of NASA's Commercial Crew Program is to facilitate the development of a domestic transport capability. In 2014, NASA awarded two firm-fixed-price contracts to Boeing and SpaceX with a combined total value up to $6.8 billion for the development of crew transportation systems that meet NASA requirements and initial missions to the ISS. The contractors were originally required to provide NASA all the evidence it needed to certify that their systems met its requirements by 2017. A House report accompanying H.R. 2578 included a provision for GAO to review the progress of NASA's human exploration programs. This report examines the Commercial Crew Program including (1) the extent to which the contractors have made progress towards certification, (2) the risks facing the program, and (3) the extent to which the program has visibility into the contractors' efforts. To do this work, GAO analyzed contracts, schedules, and other documentation; and spoke with officials from NASA, the Commercial Crew Program, Boeing, SpaceX, and independent review bodies. Both of the Commercial Crew Program's contractors have made progress developing their crew transportation systems, but both also have aggressive development schedules that are increasingly under pressure. The two contractors—Boeing and Space Exploration Technologies Corp. (SpaceX)—are developing transportation systems that must meet the National Aeronautics and Space Administration's (NASA) standards for human spaceflight—a process called certification. Both Boeing and SpaceX have determined that they will not be able to meet their original 2017 certification dates and both expect certification to be delayed until 2018, as shown in the figure below.
The schedule pressures are amplified by NASA's need to provide a viable crew transportation option to the International Space Station (ISS) before its current contract with Russia's space agency runs out in 2019. If NASA needs to purchase additional seats from Russia, the contracting process typically takes 3 years. Without a viable contingency option for ensuring uninterrupted access to the ISS in the event of further Commercial Crew delays, NASA risks not being able to maximize the return on its multibillion dollar investment in the space station. Both contractors are also dealing with a variety of risks that could further delay certification, including program concerns about the adequacy of information on certain key systems to support certification. Another top program risk is the ability of NASA and its contractors to meet crew safety requirements. The Commercial Crew Program is using mechanisms laid out in its contracts to gain a high level of visibility into the contractors' crew transportation systems. The program is using a different oversight model than NASA has used for any other spacecraft it has built for humans. For example, NASA personnel are less involved in the testing, launching, and operation of the crew transportation system. The program has developed productive working relationships with both contractors, but the level of visibility that the program has required thus far has also taken more time than the program or contractors anticipated. Ultimately, the program has the responsibility for ensuring the safety of U.S. astronauts and its contracts give it deference to determine the level of visibility required to do so. Moving forward, though, the program office could face difficult choices about how to maintain the level of visibility it feels it needs without adding to the program's schedule pressures.
Background

Since the mid-1980s, we have reported that DOD employs a systemic bias toward overly optimistic planning assumptions in its budget formulation. We have said that this results in too many programs for the available dollars—a plans/funding mismatch. This, in turn, leads to program instability, costly program stretch-outs, and program terminations. In essence, DOD’s budgets have been unrealistic. In 1994, we reported that the 1995 FYDP for fiscal years 1995-99, the FYDP that reflected the budgetary decisions from the bottom-up review, revealed a substantial amount of risk resulting in overprogramming that could be in excess of $150 billion. In 1995, we reported that DOD’s total program for fiscal years 1996-99 had increased by about $12.6 billion; approximately $27 billion in planned weapon system modernization programs for these 4 years had been eliminated, reduced, or deferred to the year 2000 and beyond; and military personnel, operation and maintenance (O&M), and family housing accounts had increased by over $21 billion. The net effect was a more costly program, despite substantial reductions in DOD’s weapons modernization program between 1996 and 1999. In October 1997, we reported that DOD’s 1998 FYDP for fiscal years 1998-2001 had substantial risk that programs would not be executed as planned. Specifically, we said that although DOD projected billions of dollars in savings due to management initiatives, it did not have details on how all the savings would be achieved. Also, although DOD projected no real growth in the cost of the Defense Health Program during 1998-2001, O&M funds in DOD’s health program had increased 73 percent in real terms during 1985-96. DOD acknowledged in its May 1997 QDR report that the 1998 FYDP included substantial financial risk. It stated that, compared to the 1998 FYDP, the QDR proposed a more balanced, modern, and capable defense program that can be achieved within currently proposed budgets.
DOD’s Modernization Goals Have Not Been Achieved

Since 1989 and the fall of the Berlin Wall, DOD has reduced the number of men and women in uniform by 33 percent and removed a significant number of Army divisions, Air Force wings, and Navy ships from the active forces. In addition, DOD decided to realign and close numerous major domestic military installations and smaller installations and to realign many others. During the same period, the defense budget has declined from $374 billion to $262 billion in constant 1999 dollars—a reduction of 30 percent. The procurement accounts have led the decline with a combined reduction of over 50 percent, whereas the research, development, test, and evaluation accounts have declined by some 20 percent. In the early years of the procurement decline, DOD said that it could afford to delay weapons procurement because as forces were reduced, the remaining units could be equipped with modern systems already fielded. But in the mid-1990s, DOD believed that the delay had to end. For example, the Secretary of Defense testified in February 1995 that a new phase of modernization had to begin immediately to sustain the quality of the force over the long term. According to the Secretary, funding for modernization would come from out-year real budget growth, reduced infrastructure costs, and acquisition reform savings. Primarily as a result of congressional actions, DOD received increased funding earlier than planned by the administration. The 1996 FYDP projected total funding for fiscal years 1996-98 of $738.5 billion. However, Congress actually appropriated $767.3 billion—an increase of almost $30 billion. Our analysis of DOD’s programs and infrastructure activities over the past several years showed that the infrastructure portion of DOD’s budget had not decreased as DOD planned. In 1997, infrastructure spending was 59 percent of DOD’s total budget, the same percentage that was reported in DOD’s bottom-up review report for 1994.
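The budget figures cited above are internally consistent, which a few lines of arithmetic confirm. This is an illustrative check, not part of GAO's methodology, and the variable names are ours.

```python
# Check the reported decline in the defense budget, in constant 1999 dollars.
budget_1989 = 374  # $ billions
budget_1998 = 262  # $ billions

decline = (budget_1989 - budget_1998) / budget_1989
print(f"{decline:.0%}")  # 30%, matching the reported reduction

# Congressional appropriations for FY 1996-98 versus the 1996 FYDP projection.
projected = 738.5     # $ billions
appropriated = 767.3  # $ billions
print(round(appropriated - projected, 1))  # 28.8, i.e., "almost $30 billion"
```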
We have stated in prior reports that DOD must reduce its military personnel and O&M costs if it is to reduce its infrastructure because 80 percent of DOD’s infrastructure activities are funded from these appropriation accounts. Planned funding increases for modern weapon systems have repeatedly been shifted further into the future with each succeeding FYDP. For example, since 1995, DOD has lowered the estimated funding for 1998 procurement from about $57 billion in the 1995 FYDP to about $43 billion in the 1998 FYDP. Moreover, in the 1995 FYDP, DOD planned to achieve a $60-billion annual funding level for procurement by fiscal year 1999. One year later in the 1996 FYDP, DOD projected that it would achieve that funding level in fiscal year 2000. In the 1997 FYDP, the $60-billion level had slipped again to 2001. The following table compares DOD’s procurement plans for the last five FYDPs. In the QDR, DOD acknowledges that it has a historic, serious problem—the postponement of procurement modernization plans to pay for current operating and support costs. DOD refers to this as migration of funds. According to DOD, the chronic erosion of procurement funding has three general sources: underestimated day-to-day operating costs, unrealized savings from initiatives such as outsourcing or business process reengineering, and new program demands. The QDR concluded that as much as $10 billion to $12 billion per year in future procurement funding could be redirected as a result of these three general sources. The QDR also identified other areas of significant future cost risks. To address this financial instability, the QDR recommended cuts in some force structure and personnel, the elimination of additional excess facilities through more base closures and realignments, the streamlining of the infrastructure, and reduced quantities of some weapon systems. 
By taking these actions, the Secretary of Defense intended that the 1999 budget and FYDP would be fiscally executable, modernization targets would be met, the overall defense program would be rebalanced, and the program would become more stable.

Substantial Risks Remain in DOD’s 1999-2003 Program Despite Changes to Reduce Risks Identified in the QDR

The 1999 FYDP reflects the budget blueprint outlined in the balanced budget agreement. Within the agreed to budgets, DOD made program additions and cuts to reduce risks identified in the QDR. For example, DOD increased planned funding for the Defense Health Program, which had been significantly underbudgeted in prior FYDPs; created an acquisition program stability reserve to address unforeseeable cost growth that can result from technical risk and uncertainty associated with developing advanced technology for weapon systems; and reduced planned quantities of some weapon systems such as the Joint Surveillance Target Attack Radar Systems’ aircraft, F-22 fighters, and F/A-18E/F fighters. Despite the adjustments to decrease the risk that funds would migrate from procurement to unplanned operating expenses, there are substantial risks that DOD’s program may not be executable as planned.

Some Personnel Cuts and Associated Savings May Not Be Achieved

DOD’s decision to reduce personnel as part of the QDR was driven largely by the objective of identifying dollar savings that could be used to increase modernization funding. We reported in April 1998 that considerable risk remains in some of the services’ plans to cut 175,000 personnel and save $3.7 billion annually by 2003. The 1999 FYDP does not include all the personnel cuts directed by the QDR. With the exception of the Air Force, the services had plans that should enable them to achieve the majority of their active military cuts by the end of 1999.
The Office of the Secretary of Defense (OSD) determined that some of the Air Force’s active military cuts announced in May 1997 to restructure fighter squadrons and consolidate bomber squadrons should not be included in the 1999 FYDP because the plans were not executable at that time. Plans for some cuts included in the 1999 FYDP were incomplete or based on optimistic assumptions. For example, the Army has not decided how 25,000 of the 45,000 reserve cuts will be allocated. This decision will not be made before the next force structure review. Moreover, plans to achieve savings through outsourcing and reengineering may not be implemented by 2003 as originally anticipated. For example, the Army planned to compete 48,000 positions to achieve the majority of its civilian reductions. However, according to a senior Army official, those reductions cannot be completed by 2003. Although the Army had announced studies covering about 14,000 positions, it had not identified the specific functions or locations of the remaining positions to be studied. In addition, the Army’s plan to eliminate about 5,300 civilian personnel in the Army Materiel Command through reengineering efforts involved risk because the Command did not have specific plans to achieve these reductions. Although outsourcing is only a small part of the Navy’s QDR cuts, the Navy had an aggressive outsourcing program that involved risk. Specifically, the Navy programmed savings of $2.5 billion in the 1999 FYDP based on plans to study 80,500 positions—10,000 military and 70,500 civilian—by 2003. Moreover, the Navy had not identified the majority of the specific functions to be studied to achieve the expected savings. According to a senior Navy acquisition official, the Navy’s ambitious projected outsourcing savings may not materialize, thereby jeopardizing its long-term O&M and procurement plans.
OSD recognizes that personnel cuts and the planned savings from those cuts have not always been achieved, which contributes to the migration of procurement funding. Therefore, OSD has established two principal mechanisms for monitoring the services’ progress to reduce personnel positions. First, it expects to review the services’ plans to reduce personnel positions during annual reviews of the services’ budgets. Second, the Defense Management Council is expected to monitor the services’ progress in meeting outsourcing goals.

Unprogrammed Bills Could Lead to Higher O&M Costs

The QDR reported that unprogrammed expenses arise that displace funding previously planned for procurement. According to DOD, the most predictable of these expenses are underestimated costs in day-to-day operations, especially for depot maintenance, real property maintenance, and medical care. The least predictable are unplanned deployments and smaller scale contingencies.

Depot Maintenance

The services and the defense agencies plan to obligate $73 billion for depot maintenance between 1999 and 2003. While we have noted that DOD tends to adjust maintenance requirements as it moves closer to obligating funds, the $73-billion estimate does not allow the defense agencies and the services to achieve OSD’s goal of funding 85 percent of their maintenance requirements during 1999-2003. For example, the Army is projected to meet only 68 percent of its depot maintenance requirements in 1999 and 79 percent by 2003.

Real Property Maintenance

Despite four base realignment and closure rounds, DOD still has excess, aging facilities and has not programmed sufficient funds for maintenance, repair, and upgrades. Each service has risk in its real property maintenance program to the extent that validated real property needs are not met.
For example, in the 1999 President’s budget, the Air Force plans to fund real property maintenance at the preventive maintenance level in 1999, which allows for day-to-day recurring maintenance. This results in risk because the physical plant is degraded and the backlog of maintenance and repair requirements increases. Also, while the Marine Corps added funds during 1999-2003, the Commandant of the Marine Corps determined that the planned funding would merely minimize deterioration of its facilities. Furthermore, although the Army added approximately $1 billion for real property maintenance in the 1999 FYDP, it was not projected to meet its funding goal until 2002.

DOD Health Program

According to a Defense Health Affairs official, the cumulative O&M funding increase of $1.6 billion over the 1998 FYDP adequately funds the core medical mission, which comprises two parts: direct care and managed care contracts. However, the 1999 FYDP funding is contingent on several assumptions that contain risk. First, the Defense Health Program assumes program-related personnel reductions due to outsourcing and privatization initiatives. Savings for these efforts are estimated to be $131 million by 2003. Second, the program assumes a 1-percent savings from utilization management, such as reducing the length of hospital stays from 4 days to 3 days. Third, population adjustments due to force structure reductions play a pivotal role. The projected program assumes that the active military force will be reduced by 61,700 personnel who will be a mix of retirees and nonretirees. If a higher percentage of the end-strength reduction stems from retirements than originally planned, the program will experience higher costs because retirees and their dependents will remain part of the beneficiary population.
According to a senior Defense Health Affairs official, the funded program does not include an allowance for the impact of advances in medical technology and the intensity of treatment that was identified in our previous report as a risk factor. Our recent work raises questions about whether cost savings and efficiencies in defense health care will materialize. In August 1997, we reported that a key cost-saving initiative of TRICARE, DOD’s new managed health care system, was returning substantially less savings than anticipated and the situation was not likely to improve. In our February 1998 testimony to Congress, we stated that implementation of TRICARE was proving complicated and difficult and that delays had occurred and may continue.

Contingency Operations

Notwithstanding the historical costs of several, often overlapping contingency operations, the 1999 FYDP provides funds for the projected “steady state” costs of Southwest Asia operations—$800 million in 1999. According to OSD officials, by design, the FYDP does not include funds for (1) the sustainment of increased operations in the Persian Gulf to counter Iraq’s intransigence on U.N. inspections, (2) the President’s extension of the mission in Bosnia, or (3) unknown contingency operations. DOD’s position is that costs for the Bosnia mission should be financed separately from planned DOD funding for 1999-2003. Further, the QDR concluded that contingency operations will likely occur frequently over the next 15 to 20 years and may require significant forces, given the national security strategy of engagement and the probable future international environment. Thus, it is likely that DOD will continue to have unplanned expenses to meet contingency operations.

Risk in Meeting Procurement Goals

We reported in October 1997 that, since 1965, O&M spending has increased consistently with increases in procurement spending.
However, in its 1998 FYDP, DOD deviated from this historical pattern and projected increases in procurement together with decreases in O&M. In the 1999 FYDP, DOD takes a more moderate position, projecting that O&M spending in real terms will remain relatively flat while procurement increases at a moderate rate. We reported that DOD’s plans for procurement spending also run counter to another historical trend. Specifically, DOD’s procurement spending rises and falls with its total budget. However, in the 1998 FYDP, DOD projected an increase in procurement of about 43 percent, but a relatively flat total DOD budget. The 1999 FYDP procurement projections continue to run counter to the historical trend, although DOD has moderated its position. Specifically, DOD projects that procurement funding will rise in real terms during 1998-2003 by approximately 29 percent, while the total DOD budget will remain relatively flat.

Program Demands and Cost Growth

The QDR report cited cost growth of complex, technologically advanced programs and new program demands as two areas contributing to the migration of funds from procurement. For years, we have reported on the impact of cost growth in weapon systems and other programs such as environmental restoration. Specifically, we reported in July 1994 that program cost increases of 20 to 40 percent have been common for major weapon programs and that numerous programs experienced increases much greater than that. We continue to find programs with optimistic cost projections. For example, we reported in June 1997 that it was doubtful that the Air Force could achieve planned production cost reductions of $13 billion in its F-22 fighter aircraft program. Other DOD programs have also experienced cost growth. For example, DOD estimated in December 1997 that the projected life-cycle cost of the Chemical Demilitarization Program had increased by 27 percent over the previous year’s estimate.
As stated earlier, DOD has established a reserve fund that can be used to help alleviate disruptions caused by cost growth in weapon systems and other programs due to technological problems. However, it remains to be seen whether the need will exceed available reserve funds. Policy decisions and new program demands can also cause perturbations in DOD’s funding plans, according to the QDR report. DOD has programmed $1.4 billion more for the National Missile Defense System in the 1999 FYDP than in the 1998 FYDP. Despite the increase, considerable risk remains with the system’s funding. For example, technical and schedule risks are very high, according to the QDR, our analysis, and an independent panel. The panel noted that based on its experience, high technical risk is likely to cause increased costs and program delays and could cause program failure. In addition to the technical and schedule risks, the 1999 FYDP does not include funds to procure the missile system. If a decision is made in 2000 to deploy an initial missile system by 2003, billions of dollars of procurement funds would be required to augment the currently programmed research and development funds. As another example, the 1999 FYDP was predicated on the United States shifting to a Strategic Arms Reduction Treaty (START) II nuclear force posture. START II calls for further reductions in aggregate force levels, the elimination of multiple warhead intercontinental ballistic missile launchers and heavy intercontinental ballistic missiles, and a limit on the number of submarine-launched ballistic missile warheads. START II was approved by the U.S. Senate in January 1996, but its enforcement is pending until ratification by Russia’s parliament. In the absence of START II enforcement, the United States may decide to sustain the option of continuing START I force levels.
According to the Secretary of Defense’s 1998 Annual Report to the President and the Congress, the 1999 budget request includes an additional $57 million beyond what otherwise would have been requested to sustain the START I level. However, maintaining this force beyond 1999 will result in additional unplanned costs. Implications for DOD’s Future Modernization In its QDR report, DOD recognized that current procurement trends have long-term implications. Specifically, “As successive FYDPs reduced the amount of procurement programmed in the six-year planning period, some of these reductions have accumulated into long-term projections, creating a so-called ‘bow wave’ of demand for procurement funding in the middle of the next decade.” The QDR report concludes that “this bow wave would tend to disrupt planned modernization programs unless additional investment resources are made available in future years.” The bow wave is particularly evident when considering DOD’s aircraft modernization plans. In September 1997, we reported that DOD planned to buy or significantly modify at least 8,500 aircraft in 17 aircraft programs at a total procurement cost of $334.8 billion (in 1997 dollars) through the programs’ planned completions. In all but 1 year between fiscal years 2000 and 2015, DOD’s planned funding for these aircraft programs exceeds the long-term historical average percentage of the budget devoted to aircraft purchases. Compounding these funding difficulties is the fact that these projections are conservative. The projections do not allow for real program cost growth, which historically has averaged at least 20 percent, or for the procurement of additional systems. However, as a result of the QDR, the 1999 FYDP service aircraft procurement accounts have been moderated. Compared with the 1998 FYDP, the 1999 FYDP reduces projected aircraft funding by $3.9 billion, or 4 percent.
Observations As the Chairman of the Joint Chiefs of Staff testified last week, DOD continues to face difficult funding decisions in trying to balance current readiness against modernization, infrastructure, and quality of life issues. DOD has faced, and will continue to face, difficult decisions in its effort to balance its program within projected budgets. Optimistic planning provides an unclear picture of defense priorities because tough decisions and trade-offs are avoided. In order for DOD to have an efficient and effective program and for Congress to properly exercise its oversight responsibilities, it is critical that DOD present realistic assumptions and plans in its future budgets. Messrs. Chairmen, this concludes our prepared statement. We will be glad to answer any questions you or members of the Subcommittees may have.
GAO discussed the Department of Defense's (DOD) budgetary plans to modernize its forces, focusing on: (1) DOD's experience over the last few years in trying to shift funds from nonmission or infrastructure programs to acquisition programs; (2) the risks in DOD's ability to execute its Future Years Defense Program (FYDP) for fiscal years 1999 through 2003; and (3) the implications of current procurement trends for DOD's future modernization. GAO noted that: (1) although DOD has reduced military and civilian personnel, force structure, and facilities over several years, it has been unable to follow through with planned funding increases for modern weapon systems; (2) this has occurred, in part, because DOD has not shifted funds from infrastructure to modernization; (3) in 1997, infrastructure spending was 59 percent of DOD's total budget, the same percentage that was reported in the bottom-up review report for 1994; (4) consequently, DOD has repeatedly shifted planned funding increases for modern weapon systems further into the future with each succeeding FYDP; (5) DOD acknowledged in its 1997 Quadrennial Defense Review report that it has had to postpone procurement plans because funds were redirected to pay for underestimated operating costs and new program demands, and projected savings from outsourcing and other initiatives had not materialized; (6) although DOD made adjustments in the 1999 FYDP to decrease the risk that funds would migrate from procurement to unplanned operating expenses, the 1999-2003 program, like previous programs, is based on optimistic assumptions about savings and procurement plans; (7) a further indication of risk can be found in DOD's procurement plans; (8) the rise and fall of DOD's procurement spending over the last 33 years has followed the movement in the total budget; (9) however, DOD projects that procurement funding will rise in real terms during 1998-2003 by approximately 29 percent while the total DOD budget will remain 
relatively flat; (10) DOD's current procurement trends have longer term implications; (11) as DOD reduced programmed procurement in successive FYDPs, it has reprogrammed some procurement to the years beyond the FYDP to create a bow wave of demand for procurement funds; (12) this bow wave, according to DOD, tends to disrupt planned modernization programs unless additional funds are made available; (13) GAO has reported that DOD employs overly optimistic planning assumptions in its budget formulation which leads to far too many programs for the available dollars; (14) optimistic planning provides an unclear picture of defense priorities because tough decisions and trade-offs are avoided; and (15) in order for DOD to have an efficient and effective program and for Congress to properly exercise its oversight responsibilities, it is critical that DOD present realistic assumptions and plans in its future budgets.
Background When veterans receive care from VA for non-service-connected conditions, the law allows VA to bill the veterans’ private health insurers and retain these third-party collections to supplement its appropriations for health care. Third-party insurers include individual and group insurance plans, Medicare supplemental insurance plans, and self-insured employer plans. In January 1997, VA proposed a 5-year plan to operate with a flat annual appropriation of $17 billion per year through fiscal year 2002. As part of this plan, VA anticipated that by the end of fiscal year 2002, it would obtain 10 percent of its funding from third-party collections and other alternative revenue streams, such as veteran copayments, Medicare payments, and proceeds from sharing agreements (where VA sells services to the Department of Defense, private hospitals, and other providers). In fiscal year 2000, VA acknowledged that it would not meet its 10-percent goal, in part because the Congress did not authorize Medicare payments to VA for health care provided to Medicare enrollees. For fiscal year 2002, VA estimates that revenue from alternative sources will be about 4 percent of its medical care funding, or $896 million. VA’s third-party revenue operations consist of five sequential processes. (See table 1.) With the exception of six networks that have consolidated some of these processes, revenue operations management is currently decentralized, and the processes are performed at each medical center where health care is provided to veterans. In the 1990s, VA sponsored two studies comparing the cost-effectiveness of contracting out most revenue operations. The results of these studies were inconclusive and contradictory. The first study, conducted by Birch and Davis and reported on in 1995, concluded that VA’s costs would be slightly less if operations were maintained in-house instead of using a contractor.
In contrast, the second study, conducted by Coopers and Lybrand and reported in 1998, found that, based on three contractors’ estimates, contracting out would be less expensive. Collections Have Increased in the Past Year, but Underlying Problems Continue If VA’s current pattern of third-party collections continues into the last months of fiscal year 2001, VA will significantly increase its third-party collections for the first time since fiscal year 1995. However, the increased collections are likely the result of VA’s generally charging more for each episode of care, a change that came with the implementation of billing reasonable charges for individual services. Long-standing problems in revenue operations not only appear to persist but have been heightened by the implementation of reasonable charges, and VA’s collections performance is poor when compared to private sector benchmarks. Moreover, the revenue program’s information systems and lack of departmentwide standardization create overarching weaknesses for managing and improving revenue operations and collections nationally. VA Collections Have Increased Since the Implementation of Reasonable Charges Based on monthly collections for the first 10 months of fiscal year 2001, we project that VA will receive over $500 million from third-party collections this year. This amount is a significant increase over last fiscal year and the largest amount collected since fiscal year 1995, when $523 million was collected. (See fig. 1.) VA expected its collections to increase as a result of its reasonable charges billing system, which was implemented in September 1999. Under this system, VA began using itemized billing for the services provided, rather than charging flat fees as it had done prior to 1999.
According to a VA analysis, in the first 8 months of fiscal year 2001, VA treated about the same number of patients but collected 34 percent more dollars than in the comparable period of fiscal year 1999, before reasonable charges were implemented. Long-Standing Problems in VA’s Revenue Operations Limit VA’s Collections Performance Although the implementation of the reasonable charges billing system has increased VA’s collections over the past year, VA faces a number of long-standing problems in managing its revenue operations. In addition, VA’s collections performance falls short of private sector benchmarks. Inadequate intake procedures, lack of sufficient physician documentation, a shortage of qualified coders, and insufficient automation diminish VA’s collections. Patient intake: To determine which veterans have insurance, VA must rely on veterans’ voluntary disclosure of insurance. Nationally, VA bills insurers for only 16 percent of patients treated but reports that substantial numbers of veterans have probably not disclosed their insurance. Medical documentation: About 74 percent of surveyed facilities reported that weaknesses in physicians’ documentation of care for billing purposes limit collections. One official was concerned that his facility could not meet its timeliness goal unless clinical staff provided more timely documentation for billing. He also noted that not all billable care could be charged to insurers because of incomplete or insufficient documentation. A VA contractor estimated this year that VA could collect $459 million more nationally if physicians’ oversight of resident physicians were properly documented. However, the contractor also found that some physicians were concerned that spending more time on documenting care for billing purposes would take away from time spent with patients. Coding: VA has acknowledged its difficulty maintaining sufficient staff who can correctly code medical procedures and services for billing.
A 2000 study also found these problems and attributed them to competition from other employers for coders, low VA entry-level wages, and VA’s frequent problems with retaining and promoting qualified and proficient staff. At three sites we visited, for example, revenue officials noted that they had difficulties hiring experienced coders at VA salaries. Billing: A VA-sponsored 2001 study of the possible uses of commercial software found limitations in VA’s current billing software that led to manual processes. As a result, there is an increased probability of errors, slower production, and lower collections. A contractor who provided services to both VA and private sector hospitals also told us that VA’s process for creating bills and identifying errors is less automated than the private sector’s. Accounts receivable: The majority of VA’s accounts receivable exceed 90 days, and VA is concerned that insufficient follow-up is given to collections. According to a contractor that services both VA and private sector hospitals, VA staff at one facility were not following the standard business practice of contacting insurers to resolve problems with bills but instead were just sending additional copies of bills. At another site we visited, two accounts-receivable staff were having difficulty reducing a backlog of about 3,000 to 4,000 unpaid claims because each day they were able to make a total of only 60 follow-up calls to insurers. Private sector hospitals appear to manage these processes more efficiently. VA’s average lag times from the date of discharge or care to the date of billing are 74 days for inpatient care and 143 days for outpatient care. In comparison, private sector benchmarks indicate that 5 and 6 days, respectively, are more typical in private sector hospitals. The majority of VA’s accounts receivable are over 90 days old from the date of discharge or care, compared to a private sector benchmark of 29 percent of uncollected dollars exceeding 90 days.
A VA-sponsored 1998 study estimated that VA’s full cost to collect one dollar of third-party revenue was 7 cents for inpatient bills and 48 cents for outpatient bills, compared to a benchmark of slightly over 2 cents in the private sector. Comparisons between VA and private sector hospitals, however, are not perfect because their operations and payer mix differ. For example, VA bills for both facility and provider charges, whereas the two private sector hospitals we visited billed only for facility charges; medical groups bill separately for physician fees. In addition, VA can only collect the Medicare supplemental portion of a payment, whereas a private hospital can collect both the Medicare and supplemental payments. VA reports that about 70 percent of its bills are sent to Medicare supplemental insurers, and for these, it can expect to collect only about 20 percent of the billed amount. Consequently, VA would have a higher cost-to-collect ratio even if it were as efficient as its private sector counterparts. Although these differences make comparisons with private sector average performance imperfect, the disparity in performance suggests that the average VA operation is not very efficient. Implementing Reasonable Charges Created New Process Challenges and Exacerbated Existing Ones By replacing a billing system that used a limited number of flat fees for broadly defined types of care with one based more on individual charges for the services actually given, the new reasonable charges system made accurate coding and documentation more critical for billing and increased workload because multiple claims could result from a single episode of care. Before implementing its reasonable charges billing system, VA used nine inpatient rates, based on a particular hospital unit, such as a surgical bed section, and one outpatient visit rate.
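The payer-mix effect described above can be sketched numerically. The following is a hypothetical illustration only; the function and all figures are assumptions chosen for exposition, not VA data:

```python
# Illustrative sketch (hypothetical figures): how payer mix alone can
# inflate a cost-to-collect ratio even when processing efficiency is equal.

def cost_to_collect(billed, collection_rate, cost_per_billed_dollar):
    """Operating cost spent per dollar actually collected."""
    collected = billed * collection_rate
    cost = billed * cost_per_billed_dollar
    return cost / collected

# Assume both payer types cost the same 2 cents per billed dollar to process.
PROCESSING_COST = 0.02

# A commercial insurer paying most of the billed amount.
commercial = cost_to_collect(billed=100, collection_rate=0.80,
                             cost_per_billed_dollar=PROCESSING_COST)

# A Medicare supplemental insurer paying roughly 20 percent of billed charges.
supplemental = cost_to_collect(billed=100, collection_rate=0.20,
                               cost_per_billed_dollar=PROCESSING_COST)

print(f"commercial payer:   {commercial:.3f} dollars per dollar collected")
print(f"supplemental payer: {supplemental:.3f} dollars per dollar collected")
```

With identical processing costs, the lower collection rate alone quadruples the cost-to-collect ratio, which is why a payer mix weighted toward Medicare supplemental insurers raises VA’s ratio independent of its efficiency.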
Under the reasonable charges billing system, VA assigns hundreds of diagnosis codes and thousands of procedure codes based on the documentation of the services provided to veterans. Before reasonable charges, for example, an outpatient surgery, such as one to repair a hernia, would result in one all-inclusive charge. Under reasonable charges, VA must now create separate bills for physician charges and for outpatient facility charges for the same outpatient surgery. Although estimates varied at the sites we visited, one official estimated that using reasonable charges increased the number of bills about fivefold. Officials at all sites we visited reported acquiring more staff, both in-house and through contracts, to process bills under reasonable charges. For example, since reasonable charges were implemented, one site’s total number of coders and billers more than doubled, from 7 to 19. Based on the 133 facilities responding to our survey, increases in VA staff continued into this fiscal year. Full-time equivalent positions for revenue operations increased from 2,284 in fiscal year 2000 to 2,411 by the middle of fiscal year 2001. Decentralized Responsibilities Also Present Challenges VA is also challenged to successfully direct and manage a highly decentralized program with practices and performance that vary widely by facility. VA has noted that although its national policies address key components of revenue operations, they are not followed in a standard and consistent manner. For example, VA has encouraged physicians to enter their notes electronically; however, physicians at one location we visited were dictating their notes for transcription, and the transcribed notes were then transferred to the electronic system. At another site, an official told us that while other facilities were able to create an electronic interface with an intermediary to transmit bills to insurance companies, his facility had not.
Therefore, bills were keyed in and printed, then re-keyed by data entry staff to allow electronic transmission to the intermediary. Collections performance also varies widely from facility to facility. For example, one facility takes an average of 16 days to send an inpatient bill, while another averages 342 days. Facilities’ cost-to-collect performance was similarly diverse. According to data reported to us by facilities for the first half of fiscal year 2001, VA’s average personnel cost to collect a dollar was 24 cents across both inpatient and outpatient bills, with facilities in the top 25 percent of performance reporting personnel costs to collect a dollar ranging from 5 to 15 cents and facilities in the bottom 25 percent reporting costs ranging from 31 to 64 cents. In addition, the data accumulated from the various facility systems are not adequate for managing performance nationally. VA notes that the lack of software and data standardization across facilities impairs its ability to consistently measure performance and set performance goals. Moreover, because VA’s accounting system does not break out third-party collections or operations costs, net revenues (that is, gross revenue collections minus operations costs) cannot be monitored at a national level. According to an official at VA’s national headquarters, data on facility performance are also unreliable because they are not verified. For example, some facilities reported high and unreasonable numbers, such as thousands of days to bill, which the facilities could not explain. It is not clear, then, whether variations in facility performance reflect better or worse performance or unreliable data.
Efforts to Improve Collections Provide Little Basis to Select the Best Alternative for Optimizing Net Revenue The various efforts VA has undertaken to improve its revenue operations fall short of providing a basis to select between the two major alternatives: contracting out or using VA staff. VA’s 1999 business plan to the Congress indicates that contracting could improve operations by incorporating the private sector’s best billing and collection practices and efficient automation. While some networks and facilities have contracted out portions of their revenue operations, these efforts do not provide VA the data needed to compare contracting with using in-house staff. Moreover, VA’s efforts have not sufficiently considered the importance of net revenues (collections minus operations costs), a key criterion for choosing among alternatives. VA also initiated a pilot to test the relative cost-effectiveness of contracting and of using in-house staff, but as a result of changes in the pilot’s design, it is unlikely to yield data that allow comparisons of each alternative’s net revenues. At the Secretary’s initiative, VA developed its 2001 Revenue Cycle Improvement Plan. Our preliminary assessment of the plan is that it also will not provide VA a sufficient rationale to choose wisely among alternatives for optimizing net revenues. Some Networks and Facilities Are Using Contracting VA’s 1999 business plan indicates that once networks consolidated their revenue operations, contracting might improve operations because competitive bids for the contract should reflect the cost of an efficient operation. The business plan also indicates that contract incentives and the desire to keep the contract could encourage contractors to keep costs down and profitably collect every billed dollar. The 1999 business plan further suggested consolidating some network operations with a high-performing VA unit within the network as a comparison to contracting.
Both approaches could increase standardization of processes and introduce best practices. Some networks have consolidated some revenue operations. Networks and facilities have also used contracting, but these contracting efforts have primarily been small-scale and aimed at addressing immediate problems. Few facilities have contracted out an entire process of revenue operations. (See table 2.) VA facilities and networks reported to us that most of their contracts are viewed as temporary to meet immediate needs, such as supplementing their staffs to reduce backlogs of claims. Revenue operations managers also voiced a number of concerns that indicated that they would be unlikely to pursue extensive contracting. Our survey showed that less than 1 percent of either network or facility revenue-operations managers reported that contracting out the majority of revenue operations would be the most effective alternative. In addition, they noted various implementation barriers that would have to be resolved. For example, contractors would need to be trained about VA’s rules and regulations, an interface between VA’s and the contractor’s computer systems would need to be developed, and union issues that would arise around the loss of VA jobs would need to be addressed. A recent VA-sponsored survey of eight health care revenue collection firms and a VA-hosted vendor conference indicated that contractors have an interest in performing most revenue processes—from intake through accounts receivable—and they had anticipated some of the barriers facilities and networks identified. For example, contractors identified the need to establish an interface with VA’s data systems. Without such an interface, the contractor might only be able to provide contract workers to use VA’s data systems rather than to bring the full capacity of the contractor’s own data systems to improve operations. 
The contractors also believed that if the contractor were engaged only in the latter parts of revenue operations, billing and accounts receivable, the effect on increasing collections would be limited because generating additional revenue is critically affected by front-end activities such as insurance identification, documentation, and coding. While there were similar concerns about consolidation, it appears to have more acceptance among VA revenue managers than extensive contracting does. In our survey, 36 percent of network respondents and 11 percent of facility respondents indicated that network-level consolidation would be the most effective alternative for managing third-party collections. For consolidation, respondents cited union and morale issues regarding the movement or loss of jobs, as well as sharing information between facilities, as implementation challenges. Pilot to Test Contracting Out Will Not Provide Needed Data VA has initiated a pilot that was intended to test the contracting alternative and to use the data from this test to evaluate whether consolidation using in-house staff or contractors would improve net revenues and other key outcomes. However, the pilot fails to meet this key objective. While the pilot may provide some information on whether consolidation of some processes at the network level could improve net revenues and other outcomes, it will not provide useful data for choosing between the in-house and contract options because a pilot site for gathering similar information on contracting was not established. According to a VA headquarters official, the networks were reluctant to volunteer for the pilot because of concerns that if the contract did not work out, they would have lost the expertise of the in-house employees who had worked on revenue operations. Two networks have agreed to pilot consolidation using in-house staff, and a third network will pilot consolidation with limited contracting out.
A fourth network has contracted out one task of patient intake: insurance verification. This fourth network had also planned to use another contract for coding, billing, and accounts receivable, while retaining VA employees to process backlogs in these same areas. This pilot could have yielded useful data for comparing the two alternatives. However, after a number of delays and plans to limit the contract to 3 months at a single facility, VA has not found an acceptable contractor, and even this abbreviated test of contracting out is unlikely to occur. Current Improvement Plan Focuses on Enhancing In-House Operations, Not Net Revenues VA’s Revenue Cycle Improvement Plan, finalized in September 2001, does not position the Department to choose between the two major alternatives for optimizing its third-party collections because the plan does not call for a comprehensive comparison of these two options, nor does it focus on net revenues (collections minus operations costs). Instead, the plan seems to present a decision to improve in-house operations in the short term and to consider the alternatives later. In the short term, the plan calls for 24 actions to address problems throughout VA’s revenue operations, such as training coders and tracking documentation, over a 2-year period. However, the plan does not establish a way to gather data on the alternatives because nearly all of the efforts to improve revenue processes are to be undertaken at the facility level with VA staff. The only planned consolidation would be establishing, for a 3-month period this year, a single in-house group at headquarters, or using a contractor, to handle accounts receivable older than 90 days. VA also plans to implement in the near term three national contracts for electronic services. However, these contracts will primarily supplement facilities’ billing and collection efforts rather than replace VA operations.
One nationwide contract will establish an electronic data interchange for the electronic submission of bills to insurers, helping to ensure faster turnaround of payments and to reduce errors through automated edits during the transmission process. A second contract—a Medicare Remittance Advice contract—will provide an explanation of what Medicare would have paid for VA’s medical services, thereby clarifying the remaining amount for the supplemental insurer to pay. The third contract—a lockbox contract—will replace the current manual, paper-based process of receiving and posting payments with an automated process. The plan’s vision for considering both consolidating and contracting is for the longer term. For example, the plan calls for considering the viability of contracting out billing and accounts receivable, as well as the supporting software system, after the 24 actions have been taken. Moreover, although the September 2001 plan calls for cost-benefit analyses for specific proposed actions with major investments, the plan is unclear as to how VA will decide which, if any, investments it will make prior to deciding whether it will choose the contracting alternative. For example, the plan indicates the need to acquire a new computerized billing and collections system, which, according to a 2001 VA-sponsored study of commercial software, could be a major investment, likely $75 million to $125 million. However, in a discussion of future contracting, the plan states that VA could avoid large capital expenditures and gain faster deployment if it used a contractor that provided the billing and accounts receivable software. Finally, the plan does not consider net revenue. Without such a consideration, VA will not be able to measure the extent to which funds are actually generated to supplement appropriated funds for veterans’ health care.
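The distinction between gross collections and net revenue can be made concrete with a small worked example. This is a hypothetical illustration only; the alternatives and dollar figures are assumptions, not figures from VA's plan:

```python
# Illustrative sketch (hypothetical figures, in millions of dollars): why
# net revenue, not gross collections, is the right criterion for comparing
# revenue-operations alternatives.

def net_revenue(collections, operations_cost):
    """Funds actually available to supplement medical care appropriations."""
    return collections - operations_cost

# Hypothetical alternative A: higher gross collections, costlier operations.
alt_a = net_revenue(collections=550, operations_cost=130)

# Hypothetical alternative B: lower gross collections, leaner operations.
alt_b = net_revenue(collections=520, operations_cost=60)

# B yields more funds for veterans' care despite collecting less overall.
print(f"alternative A nets {alt_a}; alternative B nets {alt_b}")
```

Judged by gross collections, alternative A looks better; judged by net revenue, alternative B does, which is why a plan that never measures operations costs cannot identify the alternative that actually generates the most funds for care.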
Concluding Observations VA’s efforts to date have not provided it the data needed to compare program expenditures and collections and to choose between the major alternatives of contracting and using in-house staff for its revenue operations. Nor does VA establish net revenue as a criterion in its recent improvement plan, which addresses weaknesses in facility-managed operations now and defers consideration of the in-house staffing and contractor alternatives until later. Without a credible business case for increasing expenditures that result in more net revenue, VA’s budget officials will be at odds over how to spend sizeable portions of VA’s resources: on revenue operations or on medical care. While VA has used competitive sourcing to a limited extent, it could realize additional savings by competing, through the use of OMB Circular A-76, the costs of providing these services in-house against the costs of buying them from the private sector. Our work at the Department of Defense shows that, with competitive sourcing under OMB Circular A-76, costs decline through increased efficiencies whether the government or the private sector wins the competition to provide services. This work indicates that savings are probable for VA, but we cannot estimate potential savings from competitive sourcing because of uncertainty regarding the availability of interested contractors, the price of contractor services, and the extent to which VA would be able to decrease its operating costs in a competitive process.
The Department of Veterans Affairs (VA) has reversed the general decline in its third-party collections for the first time since fiscal year 1995. The fiscal year 2001 increase appears to be largely the result of VA's implementation of a new system, known as the reasonable charges billing system, which allowed VA to move from a flat-rate billing system to one that itemizes charges. However, long-standing problems in VA's revenue operations persist, and VA's collections performance is poor compared with that of the private sector. VA's attempts at consolidation using either in-house or contractor staff have provided little basis for selecting the best alternative for addressing VA's collections problems. Also, VA's recent 2001 Revenue Cycle Improvement Plan does not call for a comprehensive comparison of alternatives, nor does it focus on net revenues--collections minus operations costs. To collect the most funds for veterans' medical care at the lowest cost, VA needs to develop a business plan and detailed implementation approach that will provide useful data for optimizing net revenues from third-party payments.
Background This section provides information on federal and state regulation of drinking water and wastewater infrastructure; federal funding for drinking water and wastewater infrastructure projects; and our prior work on coordination of drinking water and wastewater infrastructure funding and leading collaborative practices and key considerations for collaborative mechanisms. Federal and State Regulation of Drinking Water and Wastewater Utilities The Safe Drinking Water Act and the Clean Water Act authorize EPA’s Drinking Water SRF and Clean Water SRF programs, respectively, and authorize EPA to regulate the quality of drinking water provided by community water supply systems and the discharge of pollutants into the nation’s waters. Under the Safe Drinking Water Act, EPA, among other things, sets standards to protect the nation’s drinking water from contaminants, such as lead and arsenic. The Clean Water Act generally prohibits the discharge of pollutants from “point sources”—such as discharge pipes from industrial facilities and wastewater treatment plants—without a permit. Under the acts, EPA may authorize states to carry out their own safe drinking water and clean water programs in lieu of the federal program, as long as the state programs are at least as stringent as the federal ones. As a result, most states have primary responsibility for enforcing the applicable requirements of the Safe Drinking Water Act and administering the applicable requirements under the Clean Water Act. Specifically, for drinking water utilities, all states except Wyoming and the District of Columbia have primary permitting and enforcement authority under the Safe Drinking Water Act. For wastewater utilities, all states except Idaho, Massachusetts, New Hampshire, and New Mexico have full or partial permitting and enforcement responsibility under the Clean Water Act. 
Drinking water and wastewater systems are managed by utilities that may be organized differently depending on the city or community they serve. For example, drinking water service may be provided by one utility, and wastewater service may be provided by a separate utility, or a single utility may provide both services. Regardless of the configuration, a utility can be owned and managed by a municipality, a county, an independent district or authority, a private company, or a not-for-profit water association, among others. Utilities may serve a city and neighboring area, a county, or multiple counties. To pay for operations, maintenance, repair, replacement, and upgrades of their infrastructure, drinking water and wastewater utilities generally raise revenues by charging their customers for the services they provide. Utilities generally identify planned investments as part of their operating budgets and long-term capital improvement plans. Examples of key types of drinking water infrastructure include groundwater wells, dams, reservoirs, facilities to treat water for drinking, water storage tanks, laboratories to test the water, and drinking water distribution pipelines and the service lines that connect them to buildings. Examples of key wastewater infrastructure include sewer lines, tanks, and facilities to treat wastewater. Individuals or properties not served by utilities have private wells and septic systems (see fig. 1). Federal Funding and Assistance for Drinking Water and Wastewater Infrastructure Projects Eight federal agencies administer a number of programs that provide access to funding and assistance for drinking water and wastewater infrastructure. Some agencies’ programs allocate funds to state agencies as grants, and the state agencies in turn use the funds to make loans or award grants to local governments or to utilities for projects. 
EPA’s Drinking Water and Clean Water SRF programs, HUD’s Community Development Block Grant Program, and FEMA’s Hazard Mitigation Grant Program provide funds in this way. The other five agencies make loans, award grants, or provide assistance directly to local communities or utilities to fund water and wastewater infrastructure. The Corps, Reclamation, Indian Health Service’s Sanitation Facilities Construction Program, the Economic Development Administration, and USDA’s Rural Utilities Service provide funds in this way. Additional details about the programs are described below, in descending order by the amount of their fiscal year 2016 funding: EPA. EPA provides annual grants to states to help finance utility drinking water and wastewater projects nationwide through the Drinking Water and Clean Water SRF programs. States use this funding, and provide a required minimum 20 percent match, to capitalize their SRFs. The states use the funds to provide low-cost loans or other financial assistance to communities for, among other things, a wide range of water infrastructure projects. Loan repayments and interest payments, as appropriate, are returned to the SRFs and are available for future loans. However, the ability to sustain the SRFs depends on the loans being fully repaid. In addition, EPA provides funds from the Drinking Water and Clean Water SRF programs to tribal nations throughout the United States for drinking water and wastewater projects. In fiscal year 2016, EPA’s Drinking Water and Clean Water SRF programs were funded at $863 million and $1.39 billion, respectively. USDA. Under its Water and Waste Disposal Program, USDA’s Rural Utilities Service provides grants and loans for drinking water and wastewater projects in rural areas—defined by USDA as a city, town, or unincorporated area that has a population of no more than 10,000 inhabitants. 
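The revolving mechanics described above (a federal capitalization grant plus a 20 percent state match seeds the fund, loans go out, and repayments with interest return to the fund for future loans) can be sketched numerically. This is a simplified, hypothetical model: the loan share, 2 percent interest rate, and 20-year term are illustrative assumptions, not actual program parameters.

```python
# Hypothetical sketch of state revolving fund (SRF) mechanics.
# All rates, terms, and dollar figures are illustrative assumptions.

def srf_balance(federal_grant, years, loan_share=0.9, interest=0.02, term=20):
    """Simulate the available SRF balance over time (greatly simplified)."""
    fund = federal_grant * 1.20          # capitalization grant + 20% state match
    outstanding = []                     # [remaining principal, annual payment] per loan
    history = []
    for _ in range(years):
        # Lend out most of the available balance as new loans.
        new_loans = fund * loan_share
        annual_payment = new_loans * (interest / (1 - (1 + interest) ** -term))
        outstanding.append([new_loans, annual_payment])
        fund -= new_loans
        # Collect repayments on all outstanding loans; recycle them into the fund.
        for loan in outstanding:
            principal, payment = loan
            if principal <= 0:
                continue
            pay = min(payment, principal * (1 + interest))  # final payoff if small
            loan[0] = principal * (1 + interest) - pay
            fund += pay
        history.append(round(fund, 1))
    return history
```

The key property the sketch illustrates is the last step of each year: repayments flow back into the fund balance, so the fund can keep lending only if loans are in fact repaid, which is the sustainability point made above.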
USDA can provide assistance for various activities, such as construction of drinking water treatment and sewage collection facilities, connection of single-family homes to drinking water distribution or wastewater collection lines, and training for utility operators. The Rural Utilities Service allocates program funds to USDA offices in each state; these offices are required to obligate their funds as loans or grants within the first 10 months of the year and return any unobligated funds to USDA headquarters for reallocation to other states. Under the program, a staff of engineers and loan specialists works with local communities and their utilities to fund projects, and USDA also provides funding for technical assistance to help utilities apply for funding and operate and maintain their drinking water and wastewater infrastructure. In fiscal year 2016, USDA provided $549 million in grants and $1.21 billion in loans through its state offices. HUD. HUD disburses grants to states and local governments through its Community Development Block Grant program to fund housing, infrastructure, and other community development activities. The annual appropriation for the block grants is allocated according to formulas so that, after setting aside specified amounts for Indian tribes, insular areas, and special purposes, 70 percent is allocated among participating metropolitan cities and urban counties and 30 percent among the states to serve cities with populations of fewer than 50,000 and counties with populations of fewer than 200,000. In addition, federal law requires that not less than 70 percent of the total Community Development Block Grant funding be used for activities that benefit low- and moderate-income persons. Historically, according to HUD data, approximately 10 percent of total funding has been used for drinking water and wastewater infrastructure. 
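The Community Development Block Grant formula split just described can be illustrated with simple arithmetic. The percentages (70/30 split after set-asides; 70 percent low- and moderate-income floor) come from the text; the dollar amounts in the example are hypothetical.

```python
# Illustrative arithmetic for the CDBG allocation described above.
# Set-aside and appropriation amounts here are hypothetical examples.

def cdbg_split(appropriation, set_asides):
    """Split remaining funds 70/30 after set-asides."""
    allocable = appropriation - set_asides
    entitlement = allocable * 0.70  # metropolitan cities and urban counties
    states = allocable * 0.30       # states, serving smaller cities and counties
    return entitlement, states

def meets_low_mod_floor(low_mod_spending, total_funding):
    """Separately, not less than 70% of total funding must benefit
    low- and moderate-income persons."""
    return low_mod_spending >= 0.70 * total_funding
```

Note that the 70/30 split applies only to the amount left after set-asides, while the low- and moderate-income requirement applies to total funding; the sketch keeps the two calculations separate for that reason.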
Total Community Development Block Grant funding was $3.01 billion in fiscal year 2016, and, according to HUD officials, at least $381.5 million was used to fund drinking water and wastewater infrastructure. Indian Health Service. HHS’s Indian Health Service funds and constructs drinking water and wastewater projects through its Sanitation Facilities Construction program. This assistance is available to tribal nations within the United States and includes various projects such as distribution and collection lines, treatment facilities, and home connections. Indian Health Service’s Sanitation Facilities Construction Program was funded at $99.4 million in fiscal year 2016. Reclamation. Reclamation provides different types of assistance for drinking water and wastewater infrastructure in the 17 western states, as directed by Congress. Reclamation received authorization, under the Rural Water Supply Act of 2006, to establish a rural water supply program. Under the program, Reclamation was authorized to work with rural communities and Indian tribes to identify municipal and industrial water needs and options to address such needs through appraisal investigations and in some cases feasibility studies. The authority for the program expired at the end of fiscal year 2016. Congress must authorize construction of rural water projects before they can begin. In fiscal year 2016, Reclamation received $83.5 million in funding for six previously authorized rural water infrastructure projects. Reclamation also provides funding and assistance to states through certain WaterSMART (Sustain and Manage America’s Resources for Tomorrow) programs that promote the efficient use of water, integrate water and energy policies to support the sustainable use of all natural resources, and coordinate the water conservation activities of Interior’s agencies. 
First, Reclamation’s Title XVI program helps states and communities create supplemental water supplies by investigating and identifying opportunities for reclamation and reuse of municipal, industrial, domestic, and agricultural wastewater and naturally impaired ground and surface waters; the program also provides funding for the design and construction of facilities to reclaim and reuse wastewater. Further, Reclamation’s Basin Studies Program partners with state and local governments to identify strategies to address imbalances in water supply and demand, including the development of adaptation and mitigation strategies to meet current and future water demands. In fiscal year 2016, the Title XVI program received $32.4 million and the Basin Studies Program received $5.2 million in funding. Corps. The Corps provides various types of assistance for drinking water and wastewater projects in communities, as directed by Congress. Congress has authorized and appropriated funds for the Corps to provide assistance for projects that benefit rural communities in need of water or wastewater infrastructure, among other things, through the Corps’ Section 219 Environmental Infrastructure Program. In fiscal year 2016, this program received $55 million in funding. Under Section 14 of the U.S. Flood Control Act of 1946, the Corps’ Emergency Streambank and Shoreline Protection program can plan, design, and construct erosion control projects that protect public infrastructure. In fiscal year 2016, this program received $2 million in funding. Under Section 22 of the Water Resources Development Act of 1974, the Corps’ Planning Assistance to States program can assist states, local governments, and tribes with the preparation of comprehensive plans for development and conservation of water and related land resources. In fiscal year 2016, this program received $6 million in funding. 
Finally, the Corps manages about 140 reservoirs containing approximately 10 million acre-feet of storage for municipal and industrial water supply. Under the Water Supply Act of 1958, the Corps enters into agreements with water users for water storage within Corps reservoirs. Economic Development Administration. Through its Public Works and Economic Development Program, Commerce’s Economic Development Administration provides grants to economically distressed areas to help revitalize, expand, and upgrade their physical infrastructure, including public works investments such as drinking water and wastewater infrastructure. In fiscal year 2016, the Public Works and Economic Development program received $100 million in funding, about $14.4 million of which was used for drinking water or wastewater infrastructure projects, according to Economic Development Administration estimates. FEMA. Through its Hazard Mitigation Grant Program, FEMA may provide funding for drinking water and wastewater infrastructure projects in certain circumstances when the President has declared a major disaster. States receive Hazard Mitigation Grant Program funding if approved for them as part of a major disaster declaration, and grant funding is competitively awarded by the states for projects in local communities. Communities or their utilities can submit applications to the state for projects for their water and wastewater facilities that the state may choose to include in its Hazard Mitigation grant application to FEMA. As of May 2017, the 31 major disasters the President declared during fiscal year 2016 had resulted in over $533 million in available funds for the Hazard Mitigation Grant Program, $3.9 million of which was for drinking water and wastewater projects. Each of the eight federal agencies we reviewed has its own programs and processes for providing funding and assistance for drinking water and wastewater infrastructure projects. 
Communities or utilities have the discretion to apply to one or more federal or state programs for funding. In some cases, federal and state agencies coordinate to jointly fund the same project if the project is too large for one agency to fund or if joint funding makes the project more affordable for the utility. In other cases, programs may work together by separately funding different parts of a large project or different phases of a multi-year project. Prior GAO Work on Coordination on Drinking Water and Wastewater Infrastructure Our previous work on federal drinking water and wastewater infrastructure funding programs for rural areas has raised questions about the sufficiency of coordination among programs. In December 2009, we found that EPA, USDA, and other agencies that fund drinking water and wastewater infrastructure for rural communities along the U.S.-Mexico border did not have coordinated policies and processes. We suggested that Congress consider establishing an interagency mechanism, such as a task force on water and wastewater infrastructure, to evaluate the degree to which gaps in water and wastewater infrastructure programs exist in the U.S.-Mexico border region and the resources needed to address them. In April 2014, EPA and USDA published a report describing a joint effort to address the critical public health and environmental challenges in the U.S.-Mexico border region. This effort, created partly in response to our December 2009 report, aimed to leverage collective resources to identify needs within the border region and to implement compatible and coordinated policies and procedures. Similarly, in October 2012, we found that federal funding for drinking water and wastewater infrastructure is fragmented across multiple agencies and programs. 
We also found that potentially duplicative application requirements, such as preliminary engineering reports and environmental analyses, may make it more costly and time-consuming for communities applying to multiple federal or state programs. We recommended that EPA and USDA ensure the timely completion of an interagency effort to develop guidelines to assist states in developing their own uniform preliminary engineering reports to meet federal and state requirements. We also recommended that the agencies work together and with state and community officials to develop guidelines to assist states in developing uniform environmental analyses that could be used, to the extent appropriate, to meet state and federal requirements for water and wastewater infrastructure projects. EPA and USDA neither agreed nor disagreed with these recommendations but have taken actions, along with HUD and other agencies, to respond to them. First, in 2015, EPA, USDA, HUD, and Indian Health Service adopted a uniform preliminary engineering report template and associated guidance for federal and state officials. Second, in February 2017, EPA and USDA issued a joint memorandum identifying five practices for interagency collaboration on drinking water and wastewater infrastructure projects, including reducing the potential for duplication of effort during the environmental review process. Our September 2012 report on key considerations for implementing interagency collaborative mechanisms discusses a variety of mechanisms to implement interagency collaborative efforts and issues for federal and state agencies to consider in implementing collaborative mechanisms. 
Issues for consideration include the following, many of which are related to practices to enhance and sustain collaboration identified in our previous work: including all relevant participants, documenting written guidance and agreements, sustaining leadership, clarifying roles and responsibilities, bridging organizational cultures, identifying resources, and defining outcomes and accountability. Federal and Selected State Agencies Collect Information on Water Infrastructure Needs through Surveys, Program Administration, and Studies The eight federal agencies and the six selected states identify drinking water and wastewater infrastructure needs through surveys, the administration of programs, and studies. More specifically, EPA collects information on nationwide water infrastructure needs through surveys of communities, and seven other federal agencies collect narrower data on specific projects as part of their program administration. Four of the six selected states collect information on projects in their respective states through surveys of communities and statewide studies. EPA collects and reports information on nationwide drinking water and wastewater infrastructure needs. Specifically, the Safe Drinking Water Act and the Clean Water Act direct EPA to collect information on drinking water and wastewater projects that are eligible for the SRF programs. EPA collects the information from a sample of utilities every 4 years, with the assistance of states, through surveys of needs, and it publishes the results in its Drinking Water Infrastructure Needs Survey and Assessment and its Clean Watersheds Needs Survey. In these reports, EPA estimated infrastructure needs, including the costs of capital improvement projects to repair, replace, and upgrade existing drinking water and wastewater infrastructure over the next 20 years. 
In its most recent survey results, published in 2013 and 2016, EPA estimated approximately $655 billion in drinking water and wastewater infrastructure needs nationwide. For the six states we selected to review, EPA estimated a total of approximately $123 billion in drinking water and wastewater infrastructure needs. EPA’s estimates include, for example, the following types of projects: Drinking water projects serving a range of community sizes. EPA sends a questionnaire to all large utilities and a sample of medium utilities in each state. The utilities complete the questionnaire, provide documentation of projects, and send the questionnaire to their state coordinator for review. The coordinator then provides the information for EPA’s final review. The information provided includes projects to repair or replace drinking water sources, transmission and distribution pipelines, treatment facilities, and storage facilities. For example, a transmission and distribution project could include replacement or rehabilitation of pumping stations or distribution pipelines due to age or deterioration, and treatment projects could include the construction, expansion, and rehabilitation of infrastructure to reduce contamination through various treatment processes. To estimate the needs of small communities’ drinking water utilities (defined by EPA as those serving 3,300 or fewer persons), EPA used the results of its 2007 survey of utilities in these areas and updated the costs using a model it developed for this purpose. In addition, for selected years, EPA conducts surveys to estimate the needs of water systems for American Indians and Alaska Native villages. Wastewater projects serving a range of community sizes. EPA also surveys utilities on wastewater projects for a range of community sizes and defines small communities in this survey as those with populations of fewer than 10,000 people. 
States provide EPA with documentation on utilities’ projects, and EPA performs a final review of the information. The information provided includes projects related to wastewater treatment, wastewater conveyance, combined sewer overflows, stormwater management, recycled water distribution, and decentralized wastewater treatment systems. For example, a conveyance project could include replacement or repair of pipes, and a combined sewer overflow project could include reconstructing combined sewers to prevent overflows or repairing deteriorating sewer lines. EPA’s estimates are not required to be comprehensive estimates of all drinking water and wastewater infrastructure needs, and they do not include projects that address some existing and future drinking water and wastewater infrastructure needs. Specifically, they do not include the following types of projects: Projects that are ineligible for SRF loan funding. The Drinking Water SRF and the Clean Water SRF restrict funding for certain types of projects. For example, the Drinking Water SRF does not allow funding for (1) rehabilitation or replacement of water supply dams and reservoirs, which may be the responsibility of the Corps, Reclamation, or state or local entities; and (2) privately owned infrastructure such as drinking water wells not part of a drinking water system. The Clean Water SRF does not allow, for example, funding for (1) privately owned wastewater facilities and (2) wastewater services for federal facilities. Projects about which states choose not to collect or submit information. Under the Safe Drinking Water Act, EPA’s assessment of drinking water needs is used to allocate funding to states from the Drinking Water SRF program. However, under the Clean Water Act, EPA’s allocation of funding to states from the Clean Water SRF program is based on formulas established by the statute, not on EPA’s assessment. 
According to EPA officials, because of this statutory formula and the level of effort needed to complete the assessment survey, some states may not always participate in the Clean Watersheds Needs Survey or may limit their level of effort in providing information on infrastructure needs. For example, South Carolina did not participate in the 2012 assessment, and Alaska, North Dakota, and Rhode Island did not participate in the 2008 assessment. Projects submitted by utilities without proper documentation on project scope and cost. According to EPA documents and officials from two selected states, the survey results may underestimate the needs of utilities because some communities lack the technical and financial resources to complete the assessment for the survey. If communities did not provide the necessary documentation, EPA did not include their projects in the assessments. For example, according to the 2011 Drinking Water Infrastructure Needs Survey and Assessment, EPA rejected 15 percent of all submitted projects because they either did not meet documentation criteria or appeared to be ineligible. Projects planned for more than 5 to 10 years from the date of the assessment. According to EPA documents and officials from two selected states, communities often do not have the strategic planning capacity to anticipate their needs for the full 20 years covered by the assessments. As a result, EPA’s most recent assessment of wastewater infrastructure needs noted that nearly all projects included are those that will be completed within 5 years. Similarly, an EPA official told us that the typical planning time frame reflected in the assessment of drinking water needs is 7 to 10 years. The seven other federal agencies in our review collect more narrowly focused information on drinking water and wastewater infrastructure projects as part of administering their specific programs. 
These agencies generally do not collect information on needs through recurring surveys such as those conducted by EPA; instead, through the administration of their programs, they receive the information through specific assessments, congressional authorizations, and loan or grant project funding proposals by state and local governments and communities. Information collected by these agencies includes the following: USDA. The agency collects information on drinking water and wastewater infrastructure projects funded or partially funded through its administration of the Rural Utilities Service’s Water and Waste Disposal Loan and Grant Program. According to USDA officials, the information is gathered through an online application system through which applicants submit project information for program funding. USDA uses this system to track funded projects and to collect and track information on projects that were submitted but not funded. As of fiscal year 2016, USDA officials we interviewed said they maintained a backlog of $2.5 billion in projects that have not yet received funding through the Water and Waste Disposal Program. In addition, in July 2015, the Rural Community Assistance Partnership—a contractor hired by USDA—published a one-time assessment that described, ranked, prioritized, and identified potential improvements to drinking water and wastewater needs of 2,177 colonias in 35 border counties in Arizona, California, New Mexico, and Texas. According to the report, the assessment was the first colonias-level evaluation of drinking water and wastewater needs along the U.S.-Mexico border; the report did not estimate the cost or number of projects to address the needs identified. HUD. 
The agency collects information on projects it funds or partially funds through its administration of the Community Development Block Grant program, including drinking water and wastewater infrastructure projects to support community development, primarily in low- and moderate-income communities. HUD collects these data in its Integrated Disbursement and Information System. HUD officials we interviewed estimated they funded $66 million for drinking water and wastewater infrastructure in the selected states in fiscal year 2016. Indian Health Service. As required by the Indian Health Care Improvement Act, Indian Health Service annually collects and reports information on the drinking water and wastewater infrastructure needs of Indian nations and native communities nationwide. In consultation with tribes, the agency’s 12 area offices collect data on projects designed to meet an immediate drinking water or wastewater need. Projects are entered and tracked in the agency’s Sanitation Deficiency System database. According to Indian Health Service documents, the database is updated annually to account for inflation and changes in federal and state regulations, to add projects designed to address new needs, and to remove projects that have been completed. In 2015, the Indian Health Service database included an estimated cost of more than $2.66 billion to upgrade all tribal communities’ drinking water and wastewater infrastructure systems to comply with all drinking water supply and water quality laws. Corps. The Corps tracks congressionally authorized water projects and studies, including drinking water and wastewater infrastructure projects, in centralized databases. The agency also collects information on the potential repair and upgrade of the dams it manages, some of which impound reservoirs to be used as drinking water sources. 
The Corps manages 715 dams nationwide and, based on estimates it developed, faces $24 billion in needed upgrades and repairs to these facilities over the next 50 years. Reclamation. The bureau collects some information on tribal and nontribal drinking water infrastructure needs for congressionally authorized projects through its administration of the Rural Water Supply program and through projects and studies under the WaterSMART Title XVI and Basin Studies programs. Under the Rural Water Supply program, Reclamation collects information on water supply needs—including drinking water supply needs—for congressionally authorized rural water supply projects. The agency also collects information on needs through feasibility studies conducted for potential rural water supply projects, including studies on drinking water supply needs. In addition, Reclamation collects information on needs to modify dams for dam safety purposes. Specifically, as of May 2017, Reclamation manages 492 dams and has identified 15 as high- and significant-risk dams in need of modification to reduce risk to communities below the dams, at a cost of approximately $1.25 billion. According to Reclamation officials we interviewed, the agency estimates that an additional 6 to 10 dams will require modification for dam safety purposes within the next 3 to 4 years, but Reclamation has not developed an overall cost estimate for addressing the safety modifications for these dams. FEMA. The agency collects information on hazard mitigation projects for drinking water and wastewater infrastructure from states where the President has declared a major disaster. FEMA tracks funding through a category system that may include general types of facilities in each category, but the agency does not specifically track drinking water and wastewater infrastructure projects. Economic Development Administration. 
The Economic Development Administration collects applications for drinking water and wastewater infrastructure projects from distressed communities for revitalization, expansion, or upgrade of drinking water and wastewater infrastructure, among other projects. The agency collects information on the drinking water and wastewater infrastructure projects it funds in its Operations Planning and Control System. According to agency officials, in fiscal year 2016, the Economic Development Administration provided approximately $14.4 million in funding for 10 drinking water and wastewater infrastructure projects nationwide. Four of the six states we selected for review—New Mexico, New York, North Dakota, and Tennessee—have collected data on drinking water and wastewater infrastructure projects or needs through their own surveys of communities or statewide studies, in addition to participating in EPA’s assessments. New Mexico annually collects capital improvement plans for water and wastewater infrastructure projects, as does Tennessee. New York conducted a one-time statewide assessment of its water needs for 2008 to 2028. North Dakota biennially surveys its communities for their drinking water infrastructure projects but does not collect information on wastewater infrastructure projects. The other two selected states—Alaska and California—participated in EPA’s assessment but did not independently collect data on drinking water and wastewater infrastructure needs. Appendix II provides more details of the four states’ efforts to collect information on their drinking water and wastewater infrastructure needs. 
Federal Agencies Provide Technical Assistance and Funding to Support State and Local Planning for Future Drinking Water and Wastewater Infrastructure Needs Of the eight federal agencies we reviewed, three—the Corps, Reclamation, and FEMA—provide technical assistance and funding to support planning efforts in the selected states for future conditions that may affect drinking water and wastewater infrastructure needs. The three agencies' efforts have usually involved developing or updating documents such as state water plans, hazard mitigation plans, flood management plans, or drought plans. The remaining five federal agencies have at times been involved in long-term planning for such conditions and may provide grant funding to help support such work, but they do not have established programs that offer technical assistance or funding for such purposes. Future conditions can affect infrastructure needs in several ways. Population growth, for example, may require expanded reservoirs to provide additional drinking water supply and may also require additional wastewater treatment capacity. Drought can necessitate constructing new pipelines to connect to additional sources of water. Flooding, sea level rise, and storm surges may necessitate construction of flood walls or other protective infrastructure to avert damage to drinking water and wastewater treatment plants and to prevent sewers from overflowing and contaminating drinking water sources. Wastewater treatment plants are particularly susceptible to flooding because they are generally built in low-lying areas near bodies of water so wastewater can be gravity fed from higher elevations to lower elevations and so treatment plants can easily discharge water after treating it. Land surface changes, including coastal erosion or melting of permafrost (subsoil that is normally permanently frozen, found in about 85 percent of Alaska), may require relocating facilities or reinforcing drinking water or wastewater pipelines. Corps Support for State Planning
Examples of selected state and local planning with Corps assistance include the following: Tennessee, in response to severe drought in 2007 and 2008, created its Water Resources Technical Advisory Committee—which included officials from the Corps, the Tennessee Valley Authority, and the U.S. Geological Survey—to improve regional water supply planning for future drought conditions and population growth. The committee developed several state and regional water resource management plans, updated the statewide drought contingency plan, and developed drought contingency plan guidance for community drinking water and wastewater treatment systems in Tennessee. The City of Minnewaukan, North Dakota, received assistance in 2010 through the Corps' Section 22 Planning Assistance to States program to identify alternatives for flood risk reduction, including alternatives to improve the resiliency of the city's drinking water and wastewater infrastructure against flooding. The Corps recommended relocating a portion of the city, including key drinking water and wastewater infrastructure. According to North Dakota officials, the city was then able to use funding from FEMA's Hazard Mitigation Grant program, Commerce's Economic Development Administration, HUD's Community Development Block Grant program, and the North Dakota Drinking Water SRF to implement the relocation. In Alaska, in 2007, a state workgroup composed of state and federal officials, including officials from the Corps and the Denali Commission, was tasked with developing an action plan addressing the effects of climate change on coastal and other vulnerable communities in Alaska. The workgroup was part of a larger effort created by the Governor in 2007 to lead the preparation and implementation of an Alaska climate change strategy to respond to risks to infrastructure, including water and wastewater infrastructure, from permafrost degradation, erosion, and flooding. From 2005 to 2009, the Corps conducted a baseline erosion assessment to determine the vulnerabilities of Alaskan communities to coastal erosion; the assessment helped inform the workgroup's action plan.
The assessment identified 26 communities whose viability was threatened by erosion. As a result, in 2009, the workgroup recommended developing a methodology to prioritize state and federal funding for projects to protect existing infrastructure, including drinking water and wastewater infrastructure, from risks due to permafrost degradation, erosion, and flooding. To address the recommendation, as of March 2017, using funding provided by the Denali Commission, the Corps was collaborating with the University of Alaska-Fairbanks to collect additional data on erosion and flooding. The Corps and the University of Alaska-Fairbanks plan to use the data to develop an index for the aggregate risk of permafrost degradation, erosion, and flooding on infrastructure, including drinking water and wastewater infrastructure, in Alaskan communities by 2018. After Superstorm Sandy in 2012, New York’s legislature passed the Community Risk and Resiliency Act in 2014, directing state agencies to consider risks from sea level rise, flooding, and storm surges in their facility siting, permitting, and funding decisions, among other things. The act applies to drinking water and wastewater infrastructure projects, including those funded by the state’s Drinking Water and Clean Water SRF programs. In implementing the act, New York adopted regulations in February 2017 that established sea level rise projections, and the state will require applicants for certain state programs to demonstrate that they have taken sea level rise into account for project planning. The Corps participated in a state study that informed these projections and provided technical assistance. Reclamation Support for State Planning Reclamation has assisted selected states in planning for future conditions that affect water and wastewater infrastructure, in part by conducting studies such as an examination of future water supply and demand. 
Examples of selected state and local planning with Reclamation assistance include the following: In California, Reclamation provided assistance to the state and to several local communities to conduct basin studies examining future water supply and demand in several river basins. For example, Reclamation partnered with the state and several local authorities to examine the impact of rising sea levels, drought, and increasing population, among other conditions, on future water supply and water quality in the Sacramento River Basin, the San Joaquin River Basin, and the Tulare Lake Basin. The basins study completed in March 2016 found that the San Joaquin and Tulare Lake basins faced the risk of future deficits in water supply, and that sea level rise would pose a threat to municipal water supply and water quality. In addition, Reclamation has helped fund planning, design, and construction of local projects to reuse wastewater through its Title XVI program. For example, Reclamation has provided $20 million to plan and construct a water recycling effort by the City of Watsonville, California, and the Pajaro Valley Water Management Agency. By providing recycled water for irrigation, the project is intended to reduce overdrawing of groundwater from aquifers, which can lead to contamination of the aquifers if seawater intrudes into the groundwater. The recycled water blends discharge from the city’s wastewater treatment plant with higher-quality water and distributes it to irrigate a portion of more than 6,000 acres of farm land. In New Mexico, Reclamation also assisted with several basin studies. For example, the agency helped the City and County of Santa Fe in New Mexico complete a basin study in 2015 to assess variations in available water supply stemming from climate change and other factors. The study found that the Santa Fe area’s population is expected to increase by about 80 percent by 2055 and faces a water shortfall if actions are not taken. 
The study also identified potential adaptation strategies to help meet the projected demand for water over the next 40 years. Following up on a project identified in the basin study, Reclamation’s Title XVI program provided funding to the City and County of Santa Fe for a 2016 study of the feasibility of reusing wastewater as drinking water and for replenishing aquifers for future water supply. In addition, Reclamation and USDA provided technical assistance to the state and communities as they updated New Mexico’s regional water plans. For example, Reclamation and USDA officials served on the steering committees that helped develop the 2016 updates for the San Juan Basin and Jemez Y Sangre Regional Water Plans. In North Dakota, Reclamation conducted a study in September 2012 and provided data that helped inform the state’s biennial state water plan. The plan identifies drinking water infrastructure projects intended to address both current and future water supply challenges, including challenges posed by flooding and drought. As an example of its contribution to the state’s water plan, in 2012, Reclamation estimated the projected demand for water in 10 counties serviced by North Dakota’s Northwest Area Water Supply Project between 2010 and 2060, including demand resulting from changes in population and climate change. FEMA Support for State Planning FEMA has provided funding to reduce or eliminate long-term risks to drinking water and wastewater infrastructure from natural disasters, as well as technical assistance to communities to help them plan for disaster resilience of drinking water and wastewater infrastructure. Examples of selected state and local planning with FEMA assistance include the following: California’s 2013 hazard mitigation plan incorporates potential threats to drinking water and wastewater infrastructure. 
FEMA’s Hazard Mitigation Grant Program has provided funding for local projects in California since 2013 to replace or reinforce water storage tanks to mitigate wildfire or earthquake risks identified in the plan. FEMA, along with other agencies, provided technical assistance for a 2013 study conducted by New York State that reviewed the resiliency of wastewater infrastructure on Long Island. The study made recommendations to improve and expand critical wastewater infrastructure in Nassau and Suffolk Counties to make infrastructure more resilient to storms and flood events. Agencies Have Taken Certain Actions to Coordinate Project Funding While Facing Some Challenges Federal and state agencies in the six selected states have taken certain actions to coordinate funding for drinking water and wastewater infrastructure projects. Yet these agencies face challenges that make it difficult to use all available federal funds. Federal agencies have also taken some actions to help address some of the challenges they face in funding projects. For example, in 2017, EPA and USDA issued a joint memorandum identifying practices intended to improve state-level coordination and collaboration among federal and state agencies on drinking water and wastewater infrastructure. Federal and State Agencies Have Taken Certain Actions to Coordinate Funding for Drinking Water and Wastewater Infrastructure Projects In the selected states, federal and state agencies took some actions to coordinate funding for drinking water and wastewater infrastructure projects. We identified several types of coordination actions that some federal and state agencies had undertaken since 2011 in some selected states. These actions are consistent with key considerations for implementing interagency collaborative mechanisms and practices to enhance and sustain collaboration that we have identified in our previous work.
Examples of coordination actions taken by the six selected states and various federal agencies include the following: Including all relevant participants. Participation by all relevant participants in an interagency coordinating group is one of the key considerations we have identified for implementing such a collaborative mechanism. Five of the six selected states—all but Tennessee—had coordinating groups. Certain federal and state agencies have participated in interagency coordinating groups that met at least once annually. For example, California's group meets quarterly, according to federal and state officials, and includes officials from USDA, Reclamation, the state's SRF and Community Development Block Grant programs, the state's Department of Water Resources, and other state programs. Alaska's group focuses on funding programs for small communities (those with fewer than 1,000 people), which are primarily tribal funding programs for Alaska Native villages. The group meets monthly, according to federal and state officials, and includes USDA, the Indian Health Service, EPA, the Alaska Native Tribal Health Consortium—a statewide tribal organization that manages most of the design and construction of sanitation facilities in rural Alaska—and Alaska's Department of Environmental Conservation, which also manages the SRF program. Documenting written agreements between the agencies. Having written guidance and agreements documenting how agencies will collaborate is a key consideration for interagency collaborative mechanisms identified in our prior work. In three of the selected states—California, New York, and North Dakota—federal and state agencies developed written agreements for their coordinating groups. For example, the 1998 agreement between federal and state agencies in California sought to encourage more efficient use of funds and reduce administrative costs for the agencies and their funding recipients.
Federal and state agencies agreed to, among other things, provide staff and leadership to form the state’s coordinating group, meet regularly to foster cooperation in project funding, remove as many barriers as possible in program regulations, and jointly fund projects when feasible and efficient. Similarly, New York’s 2003 agreement between federal and state agencies sought to simplify the application process and formalize coordination of jointly funded drinking water and wastewater activities. Among other things, the participating agencies agreed to establish an interagency coordinating group to meet regularly, facilitate exchange of information among agencies, jointly fund projects when feasible and appropriate, and provide outreach on funding programs to potential recipients. Agencies in North Dakota signed a memorandum of understanding with federal agencies in 1997 with the purpose of establishing greater communication and coordination on water supply development funding in the state. The memorandum stated that the agencies would meet at least biannually but did not include other agreements about how they would coordinate. Sustaining leadership for the group. Sustaining leadership is another key consideration of interagency collaborative mechanisms we previously identified. According to agency documents and officials, federal and state agencies had established leadership for the interagency coordinating groups in three of the selected states— California, New Mexico, and New York. According to federal and state officials, a state agency provided leadership for North Dakota’s coordinating efforts. Bridging organizational cultures. Bridging organizational cultures is a key consideration of interagency collaborative mechanisms we identified. One way agencies can bridge organizational cultures is to adopt common application requirements or procedures. 
Federal and state agencies in five states—Alaska, California, New Mexico, New York, and Tennessee—had taken at least one action toward adopting common application requirements or procedures. For example, in California and New York, agencies developed a common income survey for determining funding eligibility. Identifying resources. Identifying the resources needed to initiate or sustain the collaborative effort is a key consideration of interagency collaborative mechanisms we identified. Some agencies in the selected states took actions consistent with this key consideration. For example, federal and state agencies conducted joint marketing and outreach to communities and utilities about the agencies’ funding opportunities in five of the states—Alaska, California, New Mexico, New York, and North Dakota. In addition, officials from federal and state agencies in all of the selected states said they shared some information among themselves on infrastructure project applications that were funded or being considered for funding, either through their coordinating groups or informally between individual programs. For example, agencies in Alaska shared information on projects for small Alaska villages, and in California agencies shared information on jointly funded projects. In New Mexico, USDA and the state’s Community Development Block Grant programs began sharing information when they joined the state’s coordinating group in 2014 and 2017, respectively. Furthermore, agencies in some selected states jointly funded projects with other federal or state agencies. For example, according to federal and state officials in New York, agencies often worked together to make projects more affordable to communities by combining grant and loan funds from multiple agencies. 
In Tennessee, USDA and the SRF programs have jointly funded projects with the state's Community Development Block Grant program, but state and federal officials said their agencies generally try to fully fund projects, or phases of them, themselves. Not all federal and state agencies in selected states took action to coordinate, for various reasons such as timing and resources, according to federal and state agency officials. For example, some of the federal agencies that provide funding for drinking water and wastewater infrastructure did not participate in all state coordinating groups. Reclamation, for instance, did not participate in New Mexico's coordinating group because, according to agency officials, the group was still being organized and Reclamation had not been asked to participate. In another example, the Indian Health Service did not participate in California's coordinating group because the group primarily identifies and addresses needs in nontribal communities, according to agency officials. The Economic Development Administration, state agencies managing FEMA Hazard Mitigation Grant program funds, and the Corps also did not participate in any of the groups, in part because they have limited roles or funding for drinking water or wastewater infrastructure. In addition, some selected states did not develop formal written agreements for their coordinating groups or use common procedures or surveys. For example, New Mexico was in the process of organizing its coordinating group and planned to consider a written agreement once the group was established, according to state officials. In addition, while some states had developed common procedures or surveys, not all agencies used them.
For example, state officials said that California's common income survey was not used by the state's Community Development Block Grant program because of differences in survey requirements and the Community Development Block Grant's definition of low- and moderate-income persons. Several Challenges to Funding Projects Make It Difficult for Agencies to Provide All Available Federal Funds in Selected States In the selected states, four key challenges can make it difficult for federal and state agencies to provide all federal funds available for drinking water and wastewater infrastructure projects: limited community demand for loan funding, limited technical or financial capacity of some communities, differing requirements among federal and state funding programs, and difficulty developing a set of projects ready for funding. Limited community demand for loan funding. USDA officials we interviewed in all of the selected states, as well as state program officials in five of the selected states, said that communities prefer grants, which do not need to be repaid, and are reluctant to take on loans and pay interest on them. Because the USDA Water and Waste Disposal loan and grant program and state SRF programs do not usually fund projects entirely with grants, finding applicants for state and federal programs can be difficult. In addition, USDA officials in New Mexico, New York, and California cited competition from state-funded grant programs as a challenge for federal and state agencies to use available loan and grant funding. For example, according to USDA funding data and a USDA official in New York, USDA's New York state office did not obligate $3.5 million of the grant funds available in 2015 and 2016 because a state program provided grants to four communities that had already been funded with a combination of USDA loan and grant funds.
Communities’ limited technical or financial capacity. In five of the selected states, some of the federal and state officials said that some communities have limited technical or financial expertise or capacity for loans, which is a challenge for agencies because it can prevent communities from identifying projects or applying to state and federal agencies for project funding. For example, state SRF program officials in New Mexico and state Community Development Block Grant program officials in New York noted that many small communities do not have the technical capacity to evaluate their drinking water or wastewater systems and to plan projects. State SRF program officials in Tennessee also said that many communities do not have the financial capacity to repay loans and therefore may not qualify for federal and state loan programs. Differing requirements among funding programs. In five of the selected states, federal or state officials said that differing application requirements and processes among funding programs are a challenge. For example, differing requirements can make it difficult for federal and state agencies to jointly fund projects or for applicants to apply to multiple programs for funding. A USDA official in New Mexico, for example, noted that differing requirements for applicants’ preliminary engineering reports have been a challenge, as did officials from the Alaska Native Tribal Health Consortium, which administers parts of Indian Health Service’s program in Alaska. In addition, an Indian Health Service official in North Dakota and state Community Development Block Grant program officials in California and Tennessee described challenges with agencies’ differing requirements for environmental reviews. 
They each identified projects they funded between fiscal years 2011 and 2016 that involved some duplication of environmental analysis—either an additional environmental review or additions to other programs' environmental reviews; this duplication can increase the length and cost of projects, according to officials. Difficulty developing a set of projects ready for funding. Federal and state officials we interviewed in four states noted that even though communities in those states may have drinking water or wastewater infrastructure needs, they may not have identified specific projects or developed them to the extent needed to apply for funding. For example, state SRF program officials in California and Alaska and USDA officials in Tennessee said that it is hard for communities to put together the plans they need to get a project ready for funding, and that this can be more difficult than construction of the project. Similarly, Indian Health Service officials in California noted that it is challenging for communities to develop projects that the Indian Health Service or other agencies are likely to fund. For example, tribal projects may be constrained by the need to obtain easements across nontribal lands, as well as concerns about water rights. These challenges can make it difficult for federal and state agencies to provide all funds available for loans and grants to communities. For example, USDA state offices in the selected states did not have enough applicants with projects that were ready to fund and, from fiscal years 2012 through 2016, were unable to lend a total of about $193 million in available loan funding for drinking water and wastewater infrastructure projects to communities in those states. Specifically, in fiscal year 2016, USDA's state office for California was unable to lend about $21 million in available USDA loan funding to communities.
USDA’s state offices for New Mexico, New York, and Tennessee each were unable to lend about $10 million to $11 million in available USDA loan funding, and Alaska was unable to lend about $6 million in loan funding. North Dakota’s state office, however, lent all of its available loan funding to communities, as well as an additional $7 million for loans in fiscal year 2016. Unlike other programs we reviewed that allocate funding directly to states, such as EPA’s SRF and HUD’s Community Development Block Grant programs, USDA’s Water and Waste Disposal Program allocates funding to its state offices, which in turn loan or grant funding for projects in local communities. The state offices must return funds that are not obligated by August to the agency’s headquarters for reallocation to other state offices. According to USDA officials, the purpose of this process is to ensure funds are used nationwide in an effective, timely, and efficient manner for projects that are ready to receive funding, and the program maintains a nationwide backlog of applications at any given time. USDA headquarters officials said that in general, state offices return funds in part because there are not enough projects in a state that are ready to be funded in that fiscal year and not because of lack of need for the funding. Two of the selected states also had difficulties using all available SRF program funding in recent years. In a 2014 report, EPA’s Office of Inspector General found that in California and New Mexico—two of the five states the Office of Inspector General reviewed—23 percent and 26 percent, respectively, of the programs’ cumulative federal funding remained unspent or unliquidated as of September 2013. According to the report, EPA considers any state with a balance above 13 percent to have a high unliquidated obligation balance. 
California’s and New Mexico’s programs had $401 million in obligated funds that remained unspent—$358 million and $43 million, respectively, in unliquidated obligations—according to the report. Among other challenges, the Inspector General indicated in the report that unliquidated obligations result from states not having projects that are ready for loan execution or from states funding projects that are not ready to proceed. Staff in all five states the Office of Inspector General reviewed indicated that they had had difficulty in the past obtaining projects from applicants that were ready to proceed to funding. In addition, EPA's Office of Inspector General cited the availability of other, more attractive funding options for potential applicants as a reason for the difficulties issuing loans in these states. The Office of Inspector General also found that when loans are not issued and hundreds of millions of SRF dollars remain idle, states miss opportunities for improvements to their communities' drinking water infrastructure. USDA and EPA have taken some steps at the national level to increase the use of their funds within states. For example, USDA officials told us that the agency offered training in 2016 for several USDA state offices that were not using their full allocations, and USDA has started working with EPA's national SRF program staff to improve coordination in states that are not using their full USDA allocations. The officials also said that they are planning further outreach to additional USDA state offices in 2017 and plan to work with some EPA regional offices. In 2014, EPA issued a national strategy for reducing unliquidated obligations under the Drinking Water SRF. The strategy outlined six practices for states to use to help liquidate past years' funds and maintain lower levels of unliquidated obligations in future years. The practices include focusing on projects that are ready to proceed.
EPA’s strategy also emphasized the importance of states’ (1) solicitations of water infrastructure projects to protect public health, (2) proactive efforts to help get projects ready to proceed to financing, and (3) efforts to ensure that water systems within their jurisdictions are well informed of the financing opportunities available through the Drinking Water SRF. In 2016, EPA reported that California’s Drinking Water SRF program had made substantial progress in its effort to quickly and efficiently expend funds; however, EPA remained concerned about the extent of unliquidated obligations for New Mexico’s Drinking Water SRF. Federal and state agencies within selected states have also taken some actions to help address some of the challenges they face in funding projects. For example, in New York, agencies have helped address communities’ preferences for grant funding and limited financial capacity for loans by coordinating to jointly fund projects with a combination of grant and loan funds from different agencies. In addition, agencies have worked together through coordinating groups to help address communities’ limited technical capacity. For example, EPA and USDA have jointly funded training and technical assistance in Alaska to address the technical capacity of rural drinking water and wastewater utilities. Furthermore, agencies have developed common application requirements or procedures to help them address the challenge of differing requirements among funding programs. For example, California’s coordinating group uses a common funding inquiry form, which a USDA official said is one of the group’s most effective actions and saves applicants time. Finally, agencies have shared information and conducted joint outreach to help address difficulties with developing a set of projects ready for funding. 
For example, North Dakota’s State Water Commission takes the lead on conducting outreach to communities to identify drinking water infrastructure projects in the state. The Commission then shares its prioritized list of drinking water projects with other agencies, which work together informally to discuss funding and projects. According to USDA program officials we interviewed in North Dakota, these actions have helped them identify and prioritize projects and provide nearly all of their available funding to communities. Most recently, to help improve state-level coordination between state SRF programs and USDA state offices on drinking water and wastewater infrastructure project funding, EPA and USDA issued a joint memorandum in February 2017 that outlined five coordination practices that states' SRF programs and USDA state offices are encouraged to use. These practices include participating in a statewide coordinating group, conducting joint marketing or outreach, adopting common application materials, adopting a common environmental review process, and periodically reexamining internal processes to identify opportunities for streamlining and increasing coordination. Agency Comments and Third-Party Views We provided a draft of this product to all eight agencies for comment. Seven of the agencies—EPA, USDA, HUD, the Indian Health Service, the Corps, Reclamation, and FEMA—provided technical comments that we incorporated as appropriate. One agency, the Economic Development Administration, did not have any comments. We also provided appropriate portions of the product to the six states that we reviewed. New Mexico had technical comments that we incorporated as appropriate. Tennessee did not have any comments. The remaining four states did not provide comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date.
At that time, we will send copies to the appropriate congressional committees, the Administrator of the Environmental Protection Agency, the Secretary of Agriculture, the Secretary of Defense, the Secretary of Commerce, the Secretary of Health and Human Services, the Secretary of Homeland Security, the Secretary of Housing and Urban Development, the Secretary of the Interior, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact J. Alfredo Gómez at (202) 512-3841 or gomezj@gao.gov or Anne-Marie Fennell at (202) 512-3841 or fennella@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology The objectives of our review were to describe (1) how federal agencies and selected states identify drinking water and wastewater infrastructure needs; (2) how federal agencies have supported selected states in planning for future conditions that may affect such needs; and (3) the extent to which federal and state agencies have coordinated in funding drinking water and wastewater infrastructure projects, and any challenges they face in funding these projects. To address these objectives, we reviewed federal programs in eight agencies, as shown in table 1. 
We reviewed our previous reports to identify the agencies that provide funding or planning assistance to states or communities for drinking water and wastewater infrastructure and identified eight agencies: the Environmental Protection Agency (EPA), the Department of Agriculture’s (USDA) Rural Utilities Service, the Department of Commerce’s Economic Development Administration, the Department of Defense’s Army Corps of Engineers, the Department of Health and Human Services’ Indian Health Service, the Department of Homeland Security’s Federal Emergency Management Agency, the Department of Housing and Urban Development (HUD), and the Department of the Interior’s Bureau of Reclamation. The programs provide funding or planning assistance to states or communities for drinking water and wastewater infrastructure. We reviewed these programs in a nonprobability sample of six states—Alaska, California, New Mexico, New York, North Dakota, and Tennessee. We selected these states based on the number of federal agencies that provided funding in the state for drinking water and wastewater infrastructure projects, the presence or absence of a formal coordination group, and geographic diversity. Specifically, we determined whether four federal agencies—Reclamation, the Corps, the Indian Health Service, and the Economic Development Administration—funded drinking water and wastewater projects in each of the 50 states from fiscal years 2011 through 2015. We chose these agencies for our selection process because EPA and USDA provide funding in all 50 states and the Federal Emergency Management Agency and HUD could not provide state-level data for drinking water and wastewater infrastructure projects. We then identified whether states had coordinating groups. To obtain a sample of states with geographic diversity, we sorted states by the four regions of the United States as defined by the U.S. Census Bureau.
We then selected states that either had the most federal agencies that provided funding for projects or did not have a coordinating group, and we selected at least 1 state from each Census region. The sample of states is not generalizable, and the results of our work do not apply to all 50 states; however, they provide illustrative examples of state infrastructure programs. Some of these federal programs are administered directly by the federal agencies through their regional or state offices, while others are administered by state agencies. Therefore, our review included the federal offices and state agencies responsible for overseeing and administering these programs. We conducted site visits to interview federal and state officials in Alaska, California, New Mexico, and Tennessee, and held teleconferences to interview officials in North Dakota and New York. In addition, we interviewed federal officials from the Denali Commission, as well as selected state officials administering federal funds from the Appalachian Regional Commission and Delta Regional Authority. To describe how federal agencies and selected states identify drinking water and wastewater infrastructure needs, we identified federal requirements directing EPA and Indian Health Service to collect information on drinking water and wastewater infrastructure needs, reviewed these agencies’ most recent reports on needs, and interviewed EPA and Indian Health Service officials about these efforts. We also reviewed any national, regional, or state reports on needs issued over the last 10 years by the other six federal agencies and the six selected states. Specifically, we reviewed the report of a joint USDA-EPA effort to identify the drinking water and wastewater infrastructure needs of certain communities in the U.S.-Mexico border region, as well as reports on needs issued by 4 of the selected states: New Mexico, New York, North Dakota, and Tennessee. 
To conduct this work, we did not define the concept of need or conduct an independent review of the studies that identify needs for drinking water and wastewater infrastructure, and we did not evaluate the legitimacy of the claims. We assessed the reliability of data in EPA’s reports by reviewing documentation on data collection and interviewing agency officials. We determined that the data were sufficiently reliable for the purpose of estimating national needs for drinking water and wastewater infrastructure projects that fall within the scope of EPA’s reports. We also assessed the reliability of data from the Indian Health Service’s report by interviewing agency officials about the data. We determined the data were reliable for our purposes of reporting total needs. To describe how the eight federal agencies have supported the selected states in planning for future conditions that may affect drinking water and wastewater infrastructure needs, we reviewed federal and selected state program and planning documents, including basin studies, erosion and sea-level rise studies, and flooding and drought response plans, and we conducted semi-structured interviews with or obtained written responses from federal officials and selected state officials. State agencies included state water boards and commissions, infrastructure funding agencies, and emergency management agencies as appropriate. We used these documents and interviews to identify examples where federal agencies have assisted selected states and local communities in planning for future conditions that might affect their drinking water and wastewater infrastructure needs. 
To describe the extent to which the eight federal agencies and selected states have coordinated in funding projects and any challenges they face, we reviewed federal and state program documents, interagency agreements, and project data and interviewed federal and state agency officials on project funding and coordination, as well as on challenges they face in funding projects. We used this information to (1) assess whether and how federal and state agencies have implemented leading collaboration practices and key considerations for collaborative mechanisms in the selected states, (2) examine whether coordination has helped agencies efficiently use available federal funding for projects, and (3) identify challenges to funding projects. We used key considerations from our previous work on interagency collaborative mechanisms: including all relevant participants, documenting written guidance and agreements, sustaining leadership, clarifying roles and responsibilities, bridging organizational cultures, identifying resources, and defining outcomes and accountability. We identified actions that agencies had taken consistent with these key considerations. We also used leading collaboration practices from our previous work, such as identifying and addressing needs by leveraging resources, as appropriate. To examine whether coordination has helped agencies use available federal funding for projects efficiently, we looked for and analyzed examples of projects jointly funded by multiple federal programs and unfunded or delayed projects. We obtained and analyzed funding data for fiscal year 2016 for the eight agencies and assessed the reliability of the data by obtaining information from agency officials. In some cases, agencies provided obligation data, while other agencies provided budget allocation data. We determined the data were reliable for our purposes of reporting total funding available by agency. 
We reviewed a report by EPA’s Office of Inspector General on the unliquidated obligations in five state Drinking Water SRFs. We conducted this performance audit from March 2016 to September 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Selected States’ Assessments of Their Drinking Water and Wastewater Infrastructure Needs Four of the six states that we selected for review developed assessments of their drinking water and wastewater infrastructure needs. The details of their assessments are included below. New Mexico. The New Mexico Department of Finance and Administration annually collects 5-year capital improvement plans from local governments and tribes through a web-based process. The purpose of collecting these plans is to establish planning priorities for anticipated capital projects and encourage entities to plan for, fund, and develop infrastructure at a pace that sustains their activities. The plans include time frames, estimated costs, and the details of each proposed capital improvement project for drinking water or wastewater infrastructure, including repair or replacement of existing infrastructure and the development of new infrastructure. The plans also include projects for dams and water infrastructure for agriculture, which are excluded from the Environmental Protection Agency’s (EPA) assessments. The state lists the projects on its Infrastructure Capital Improvement Plan website. 
In 2016, the New Mexico Legislative Finance Committee analyzed the state and local infrastructure capital improvement plans for 2017 to 2022 and identified $3.2 billion in drinking water and wastewater infrastructure projects. New York. In 2008, New York’s Departments of Health and Environmental Conservation conducted one-time assessments of the state’s drinking water and wastewater infrastructure needs over the next 20 years. The departments conducted the drinking water study because EPA’s drinking water needs survey underreported valid projects, and they conducted the wastewater study because of the need to develop a sustainable infrastructure funding program at the federal, state, and local levels. The assessments included needs that were part of EPA’s drinking water and wastewater assessments. They also included estimates of the costs of certain other needs that EPA excludes because they are not eligible for the EPA’s State Revolving Fund program; these include the costs to repair and replace dams and private wells. Specifically, New York’s drinking water needs assessment estimated a cost of $533.6 million to rehabilitate the state’s approximately 511 dams used for water supply purposes, as well as a cost of $1.8 billion to replace or rehabilitate almost all of the state’s 1.5 million private drinking water wells over the next 20 years. In addition, the state’s wastewater needs assessment estimated a cost of $693 million to replace faulty residential septic systems in 150 municipalities with community wastewater treatment systems. Together, the state’s assessments estimated a total of $74.9 billion to repair, replace, and update New York’s existing drinking water and wastewater infrastructure over the next 20 years. North Dakota. North Dakota’s State Water Commission surveys communities on their planned drinking water infrastructure projects for the commission’s biennial State Water Plan. 
The purpose of the survey is to provide the commission with an updated inventory of water projects and programs that become part of the commission’s budget request to North Dakota’s governor and legislature. The commission prioritizes the projects and publishes the rankings. The inventory does not include drinking water or wastewater infrastructure replacement needs but contains drinking water infrastructure projects not included in EPA’s assessments, such as repair or rehabilitation of dams and reservoirs. In the state’s most recent assessment, the commission estimated a cost of $645 million to address the drinking water infrastructure projects identified in the state. Tennessee. As required by state statute, the Tennessee Advisory Commission on Intergovernmental Relations annually surveys local officials on their infrastructure projects, including drinking water and wastewater infrastructure, with a capital cost of at least $50,000. The purpose of the assessment is to identify projects necessary to help local communities with economic development opportunities. To be included in the Commission’s report, projects must be in the conceptual, planning and design, or construction phase at some time during the next 5 years and need to be either started or completed during that period. In 2016, the commission estimated $3.3 billion of drinking water and wastewater infrastructure needs for July 2014 through June 2019. Appendix III: GAO Contacts and Staff Acknowledgments GAO Contacts: Staff Acknowledgments In addition to the contacts named above, Susan Iott (Assistant Director), Krista Breen Anderson, Rodney Bacigalupo, Carolyn Blocker, Kevin Bray, Mark Braza, Rich Johnson, Elizabeth Luke, Jeff Malcolm, Micah McMillan, Jon Melhus, Cynthia Norris, Leslie Pollock, Max Sawicky, and Sarah Veale made key contributions to this report.
Events such as the discovery of lead in drinking water in Flint, Michigan, and the overflow and damage to the spillway at the Oroville Dam in California have drawn attention to the condition of the nation's drinking water and wastewater infrastructure. Conditions such as population growth or drought may further affect a community's needs and plans for such infrastructure. GAO was asked to review federal programs that provide funding for drinking water and wastewater infrastructure. This report describes (1) how federal agencies and selected states identify drinking water and wastewater infrastructure needs; (2) how federal agencies have supported selected states' planning for future conditions that may affect needs; and (3) the extent to which federal and state agencies have coordinated in funding projects, and any challenges they faced. GAO reviewed eight federal agencies that provide assistance for drinking water and wastewater infrastructure and selected a nongeneralizable sample of six states—Alaska, California, New Mexico, New York, North Dakota, and Tennessee—on the basis of federal infrastructure funding amounts and geography. For the six states, GAO reviewed infrastructure planning and program documents and interviewed federal and state officials. The Environmental Protection Agency (EPA) and other federal and selected state agencies collect information to identify drinking water and wastewater infrastructure needs through surveys, the administration of agency programs, and studies. EPA's most recent surveys estimated approximately $655 billion of drinking water and wastewater infrastructure needs nationwide over the next 20 years. The seven other agencies GAO reviewed—the departments of Agriculture (USDA) and Housing and Urban Development (HUD) and the Economic Development Administration, Indian Health Service, Bureau of Reclamation, U.S. 
Army Corps of Engineers, and Federal Emergency Management Agency (FEMA)—collect information on these needs by administering their programs. For example, the Corps collects information on congressionally authorized water projects. Of the six states GAO selected for review, all but Alaska and California had collected data on their needs such as through surveys of communities. For example, North Dakota biennially collects information on drinking water projects from its communities. The Corps, Reclamation, and FEMA provide technical assistance and funding to support efforts in the six selected states to plan for future conditions that may affect drinking water and wastewater infrastructure needs. For example, the Corps helped Minnewaukan, North Dakota, identify alternatives for reducing flood risks to the city's drinking water and wastewater infrastructure, and Reclamation worked with Santa Fe, New Mexico, to study its projected water supply and demand. The remaining five agencies have at times been involved in long-term planning but do not have established programs for such purposes. Federal and state agencies in the six selected states have taken actions to coordinate funding for projects while facing several challenges. For example, agencies in most of the selected states had established interagency coordinating groups that reached out to communities needing funding for projects. In some cases, agencies developed written agreements for their coordinating groups, with such goals as simplifying the application process and encouraging agencies to fund projects together. However, agencies in the selected states faced challenges, such as difficulty in developing a set of specific projects that were ready for funding, despite having infrastructure needs. 
For example, in the six selected states, USDA did not have enough applicants with projects that were developed to the extent needed to receive funding; therefore, USDA did not loan a total of about $193 million in available loan funds for fiscal years 2012 through 2016 to communities in those states. GAO found that federal and state agencies within selected states had taken some actions to help address challenges they faced in funding projects; these actions included conducting joint outreach to develop a set of projects ready for funding. EPA and USDA also have taken actions. For example, in February 2017 in response to a GAO recommendation in a prior report, EPA and USDA issued a joint memorandum outlining five practices to help improve interagency collaboration at the state level on drinking water and wastewater infrastructure projects; these practices include using common application materials and conducting joint marketing or outreach.
Background The well-being of children and families has traditionally been understood as a primary duty of state governments, and state and local governments are the primary administrators of child welfare programs designed to protect children from abuse or neglect. Child welfare caseworkers investigate allegations of child maltreatment and determine what services can be offered to stabilize and strengthen a child’s own home. If remaining in the home is not a safe option for the child, he or she may be placed in foster care while efforts to improve the home are made. In these circumstances, foster care may be provided by a family member, also known as kinship care; caregivers previously unknown to the child; or a group home or institution. In those instances where reuniting the child with his or her parents is found not to be in the best interest of the child, caseworkers must seek a new permanent home for the child, such as an adoptive home or guardianship. Some children remain in foster care until they “age out” of the child welfare system. Such children are transitioned to independent living, generally at the age of 18 years. Federal Funding for State Child Welfare Programs States use both dedicated and nondedicated federal funds for operating their child welfare programs and providing services to children and families. In fiscal year 2006, the federal government provided states with about $8 billion in dedicated child welfare funds, primarily authorized under Title IV-B and Title IV-E of the Social Security Act. (See app. II.) Nearly all of this funding is provided under Title IV-E, which provides matching funds to states for maintaining eligible children in foster care, providing subsidies to families adopting children with special needs, and for related administrative and training costs.
About 9 percent of funding is provided under Title IV-B, which provides grants to states primarily for improving child welfare services, including a requirement that most funds be spent on services to preserve and support families. A significant amount of federal funding for child welfare services also comes from federal funds not specifically dedicated for child welfare— including the Temporary Assistance for Needy Families (TANF) block grant, Medicaid, and the Social Services Block Grant. These and hundreds of other federal assistance programs for children and families, including many that serve low-income populations, are listed in a centralized database administered by the General Services Administration that has a search feature by type of assistance and eligible population. The Congressional Research Service conservatively estimated that the median share of total federal child welfare spending derived from nondedicated federal funding equaled nearly half of all the federal dollars (47 percent) expended by state child welfare agencies, based on state child welfare agency data reported to the Urban Institute for state fiscal year 2002. Despite the large amount of federal funds spent on child welfare from nondedicated sources, the Congressional Research Service reported that attention to federal child welfare financing has focused almost exclusively on dedicated child welfare funding streams and is driven in part by the belief that the current structure hampers the ability of state child welfare agencies to achieve positive outcomes for children. Common charges are that the current structure does not grant states the flexibility needed to meet the needs of children and their families, and encourages states to rely too heavily on foster care. 
Congress authorized HHS to conduct demonstration projects whereby states were allowed to waive certain funding restrictions on the use of Title IV-B and Title IV-E funds under the condition that the flexible use of funds would be cost-neutral to the federal government. HHS reported that 24 states had participated in demonstration projects across eight child welfare program areas, such as caseworker training and services to caretakers with substance abuse disorders. States were required to conduct an evaluation of project success in terms of both improving child and family outcomes and cost neutrality. HHS Child and Family Services Reviews and Technical Assistance As Congress has authorized funds for state child welfare programs, it has also required states to enact policies and meet certain standards related to those programs. HHS evaluates how well state child welfare systems achieve federal standards for children through its child and family services reviews (CFSR). The CFSR process begins with a state assessment of its efforts, followed by an on-site review by an HHS team that interviews various stakeholders in the child welfare system and usually reviews a total of 50 child welfare case files for compliance with federal requirements. After receiving the team’s assessment and findings, the state develops a program improvement plan (PIP) to address any areas identified as not in substantial conformity. Once HHS approves the PIP, states are required to submit quarterly progress reports. Pursuant to CFSR regulations, federal child welfare funds can be withheld if states do not show adequate PIP progress, but these penalties are suspended during the PIP implementation term. HHS provides training and technical assistance to help states develop and implement their PIPs through its training and technical assistance network. This training and technical assistance focuses on building state agency capacity and improving the state child welfare system.
Technical assistance providers in this network include HHS’s Children’s Bureau and regional offices, as well as NRCs and the department’s Child Welfare Information Gateway. States Identified Several Long-standing and Emerging Challenges to Ensuring Child Safety, Well-Being, and Permanency State child welfare agencies identified three primary challenges as the most important to resolve to improve outcomes for children under their supervision: providing an adequate level of services for children and families, recruiting and retaining caseworkers, and finding appropriate homes for children. HHS, GAO, and child welfare organizations have consistently shown these issues to be long-standing challenges for most states. In addition, state officials identified three challenges of increasing concern: children’s exposure to illegal drugs; increased demand to provide services for children with special needs, such as those with developmental disabilities; and changing demographic trends or needs for cultural sensitivity for some groups of children in care and their families. Long-standing Challenges Include Providing Adequate Services, Recruitment and Retention of Caseworkers, and Placement Issues In responding to our survey, states most frequently identified the following three child welfare challenges as the most important to resolve in order to improve the safety, permanency, and well-being of children under states’ care: providing adequate services to children and families, recruiting and retaining caseworkers, and finding appropriate homes for children. (See fig. 1.) GAO and child welfare organizations have previously reported on the long- standing nature of these challenges. 
For example, GAO previously reported that gaps in the availability of and access to services delayed states’ ability to file a petition to terminate parental rights—a necessary step in obtaining a permanent home for children who cannot live with their parents—because parents were unable to obtain timely access to substance abuse treatment and other services, such as mental health services and housing. GAO and other organizations have also previously reported that public and private child welfare agencies face a number of challenges recruiting and retaining qualified caseworkers and supervisors. For example, we reported that high caseloads, poor supervision, and the burden of administrative responsibilities have, in some cases, prompted caseworkers to voluntarily leave their employment with child welfare agencies. We also reported difficulties in recruiting adoptive parents for children with special needs. The most important challenges identified by state child welfare agencies are consistent with HHS’s CFSR findings and states’ self-assessments of their programs. For example, according to the Congressional Research Service, HHS reviewers found that 43 states needed improvement in providing accessible services to children and at-risk families in all jurisdictions of the state and 31 states needed improvement in conducting diligent recruitment of foster and adoptive parents. The number of states needing improvement in performance indicators related to child welfare services, recruitment and retention of caseworkers, and placement of children in appropriate homes is shown in table 1. A Large Array of Specific Services Needed by Children and Families, Especially in the Area of Mental Health and Substance Abuse, Underlies the Challenge State child welfare agencies identified specific services underlying their challenge to serve children and families, citing constraints on federal funding and limited awareness of services among eligible families as contributing factors.
Regarding children, more than half of states reported that they were dissatisfied with the level of mental health services, substance abuse services, housing for foster youth transitioning to independence, and dental care. (See fig. 2.) States also reported that they were dissatisfied with the level of services provided to at-risk families in the child welfare system. These services are needed to help prevent the removal of children from their homes or help facilitate the reunification of children with their parents after removal. Specifically, more than half of states responded that they were dissatisfied with mental health services, substance abuse services, transportation services, and housing for parents of at-risk families. (See fig. 3.) For some types of services, states expressed more dissatisfaction with services available to at-risk families than with services available to children. For example, more states reported dissatisfaction with the level of at-risk family services than with children’s services in the areas of assessment of their service needs, legal services, and advocacy or case management. (See fig. 4.) States we visited reported that funding constraints were among the reasons maintaining an adequate level of services was difficult. For example, while maintenance payments to foster families for children under state care are provided as an open-ended entitlement for federal funding under Title IV-E, federal funding for family support services is capped at a much lower level under Title IV-B. In addition, because the proportion of children in foster care who are eligible for federal support has been declining, states had to provide a greater share of funding at a time when many states were experiencing budget deficits that adversely affected overall funding for social services.
In prioritizing funding needs, child welfare officials in 40 states responding to our survey reported that family support services, such as those that could prevent removal of a child or help with reunification of a family, were the services most in need of greater federal, state, or local resources. Officials from 29 states responded that child protective services such as investigation, assessment of the need for services, and monitoring were next in need of additional resources. Officials in a state we visited indicated that some caseworkers and families may be unaware of the array of existing services offered by numerous public and private providers. In North Carolina, for example, state officials reported that about 70 percent of children and families in the child welfare system received services from multiple public agencies, and the CFDA—a repository of information on all federal assistance programs that is periodically updated—lists over 300 federal programs that provide youth and family services. However, caseworkers and families are not always aware of the range of services that are available to support them, and child welfare officials cited the need for additional information to help link children and families with needed services. In October 2003, the White House Task Force for Disadvantaged Youth recommended that the CFDA be modified to provide a search feature linked to locations where federally funded programs were operating. A similar model may be found on an HHS Web link, http://ask.hrsa.gov/pc/, where users can enter a ZIP code to find the closest community health center locations offering medical, mental, dental, and other health services on a sliding fee scale. 
Large Caseloads, Administrative Burden, and the Effectiveness of Supervision Underlie the Caseworker Recruitment and Retention Challenge State child welfare officials most frequently reported dissatisfaction with the current status of three underlying factors that affect the state’s ability to recruit and retain caseworkers. Specifically, more than half of the states reported dissatisfaction with the average number of cases per worker, administrative responsibilities of caseworkers, and effectiveness of caseworker supervision. (See fig. 5.) Child welfare officials in each of the states we visited reported having trouble recruiting and retaining caseworkers because many caseworkers are overwhelmed by large caseloads. According to a 2006 Child Welfare League of America (CWLA) report, some programs lack caseload standards that reflect time needed to investigate allegations of child maltreatment, visit children and families, and perform administrative responsibilities. The report also cites CWLA’s caseload standards of no more than 12 cases per caseworker investigating allegations of child maltreatment, and no more than 15 cases for caseworkers responsible for children in foster care. However, according to the report, in most states average caseloads in some areas are often more than double the CWLA standards. State child welfare officials we interviewed also reported that increasing amounts of time spent on administrative duties made it difficult to recruit and retain staff and limited the amount of time caseworkers could spend visiting families. For example, child welfare officials in three states we visited estimated that some caseworkers spent a significant amount of time on administrative duties such as entering case data in automated systems, completing forms, and providing informational reports to other agencies.
This administrative burden has limited caseworkers' ability to ensure timely investigations of child maltreatment and to make related decisions concerning the removal of children from their homes, according to officials, and has influenced caseworker decisions to seek other types of employment. Officials in some states we visited reported that the lack of effective supervision also adversely affected staff retention and sometimes resulted in delays in providing appropriate services to children and families. Lack of supervisory support was cited as a problem in terms of both supervisor inexperience and inaccessibility. For example, a Texas state official said that because of high turnover, caseworkers are quickly promoted to supervisory positions, with the result that the caseworkers they supervise complain of poor management and insufficient support. In Arizona, caseworkers have expressed dissatisfaction with the support they received from their supervisors, and this has negatively affected recruitment and retention. Child welfare officials reported that lack of access to supervisors was frustrating to caseworkers because it delayed their ability to specify appropriate permanency goals for children and to develop case plans to meet the needs of children and families in their care.

Recruiting and Retaining Foster Parents for All Kinds of Children, but Especially for Children Who Are Older or Have Special Needs, Are Some of the Underlying Placement Challenges for States

Relative to other challenges, state child welfare officials most frequently identified four factors underlying the challenge of finding appropriate homes for children. (See fig. 6.) Recruiting and retaining foster parents and serving children with special needs were at the top of the list. Also, more than half of the states reported that finding homes for children with special needs, older youth, and youth transitioning into independent living, and finding and supporting kinship homes, were among their greatest concerns.
Child welfare officials in two states we visited said that the lack of therapeutic foster care homes that can properly care for children who have significant physical, mental, or emotional needs makes it challenging to find them an appropriate home. In addition, these officials said that some of the existing facilities are inappropriate for child placement because they are old and in poor condition or provide outmoded treatment services. Because of the absence of high-quality therapeutic settings, child welfare officials said that it has become increasingly difficult to place children in homes that can appropriately address their individual needs. Recruiting and retaining foster and adoptive parents has become an increasingly difficult aspect of placement for a variety of reasons, such as the lack of a racially and ethnically diverse pool of potential foster and adoptive parents, and inadequate financial support. For example, child welfare officials said that some locations have relatively small populations of certain races and ethnicities, making it difficult to recruit diverse foster and adoptive parents. Inadequate financial support also hinders recruiting and retaining foster and adoptive families. Financial support for foster and adoptive families varies widely among states and local areas, and may not keep up with inflation. According to a California child advocacy organization, for example, the state’s payments to foster parents of $450 per month per child have not been adjusted for inflation since 2001. As a result, according to the organization, the supply of foster care providers has not increased markedly during this time. Obtaining permanent homes for older youth and for youth aging out of foster care is a continuing placement challenge for states. For example, Texas child welfare officials said that it is difficult to place adolescents with adoptive parents because older youth can choose not to be adopted. 
Finding housing for youth transitioning into independence also can be difficult in high-cost areas or in areas where special arrangements have not been made with housing agencies and landlords, who typically require a cosigner on the rental application or a large deposit before moving in. More than half of the states also reported that limitations in their ability to identify and support placements with family members or legal guardians reduced opportunities to place children in appropriate homes. For example, child welfare officials in Ohio reported a lack of resources to conduct outreach to family members who may be able to provide a stable home for children in foster care with less disruption to the child. Michigan officials also reported that the lack of financial resources made it difficult for the state to meet its placement goals for those children who had been removed from their home and whom the court had directed to be placed with other family members.

Emerging Challenges Include Children's Exposure to Illegal Drugs, Caring for Special Needs Children, and Responding to Changing Demographics of the Child Welfare Population

While states have experienced child welfare challenges for many years, states identified several emerging issues that are of increasing concern because of their impact on the well-being of children in the child welfare system. Most states reported a high likelihood that three issues will affect their systems over the next 5 years: children's exposure to illegal drugs, caring for special or high-needs children, and changing demographics and cultural sensitivities. (See fig. 7.) Although the overall percentage of drug-related child welfare cases has not increased, officials in the states we visited reported that the type and location of drug abuse underlying maltreatment cases is changing, requiring increased attention by child welfare agencies in certain areas.
For example, child welfare officials reported an increasing number of children entering state care as a result of methamphetamine use by parents, primarily in rural areas. Child welfare agencies in these areas may need to train caseworkers on how this drug is likely to affect parents or caregivers who use it in order to safely investigate and remove children from homes, as well as assess the service needs of affected families to develop an appropriate case plan. State child welfare officials in all five states we visited said that finding homes for special needs children is a growing issue because it is hard to find parents who are willing to foster or adopt these children and who live near the types of services required to meet the children's needs. For example, child welfare officials in one of the states we visited reported that the state does not have a sufficient number of adoptive homes for children with special needs. As a result, these children generally stay in foster care for longer periods of time. Child welfare officials we interviewed also said that the growing cultural diversity of the families who come in contact with the child welfare system has prompted the need for states to reevaluate how they investigate allegations of maltreatment and the basis on which they make decisions that could result in the removal of children from their homes. Child welfare officials in several states reported that the current protocols for investigating and removing children from their homes do not necessarily reflect the cultural norms of some immigrant and other minority families. These differences include limitations in family functioning that may be caused by poverty, the environment, or culture, as opposed to those that may be due to unhealthy family conditions or behaviors.
In response to growing cultural diversity, several states we visited stated that they are revising their protocols to account for religious and language differences among families who come in contact with the child welfare system.

State Initiatives Insufficiently Address State Challenges to Improve Child Outcomes, and Evaluations Showed Mixed Results

Most states reported that they had implemented initiatives since January 2002 to address challenges associated with maintaining an adequate level of services, recruiting and retaining caseworkers, and finding appropriate homes for children. However, these initiatives did not address all of the key factors states reported being associated with these challenges. In states where evaluations of their initiatives had been completed under a federal demonstration project, the evaluations generally showed that states had achieved mixed results across child welfare outcomes.

State Initiatives Did Not Address All the Key Factors Related to the Three Challenges Cited as Most Important to Improve Child Outcomes

States reported implementing various initiatives to improve child outcomes, but these initiatives did not always mirror the factors states reported as most necessary to address in overcoming their primary challenges. For example, with respect to services, states most frequently cited the lack of mental health and substance abuse services for children and families as a challenge, yet only a fourth of the 32 states dissatisfied with these services reported having initiatives to improve the level of these services. (See fig. 8.) This may be because these services are typically provided outside the child welfare system by other agencies. About half of the states reporting dissatisfaction also reported initiatives to improve collaboration with other agencies.
Similarly, most states reported that they had implemented initiatives to improve recruitment and retention of child welfare caseworkers, but states reported little or no action to address two of the most frequently reported factors underlying this challenge. (See fig. 9.) While most states reported dissatisfaction with the supervision of caseworkers, only two reported specific initiatives to address this challenge. Similarly, while over half of the states reported dissatisfaction with the administrative responsibilities of caseworkers, no state reported an initiative to address this challenge. One way of streamlining administrative responsibilities—through new technology—may be difficult for many states because nearly half of the states reported that they did not have an operational statewide automated child welfare information system. Almost all states reported implementing initiatives to improve their ability to find appropriate homes for children, but few states addressed two of the three most frequently reported factors underlying this challenge (see fig. 10). For example, three states reported initiatives to find appropriate homes for older youth transitioning to independence, and four states reported initiatives to find appropriate homes for children with special needs. States implementing initiatives under federal demonstration projects were required to conduct evaluations, and these evaluations showed mixed results. In general, the demonstration projects offered states the flexibility to use federal funding under Title IV-B and Title IV-E in eight different program areas in an effort to improve services and placements—addressing the three primary challenges reported by states (see app. III). As of 2006, 24 states had implemented 38 child welfare waiver demonstrations. However, evaluation results were mixed across child welfare outcomes.
For example, while Illinois found strong statistical support for the finding that funding for assisted guardianships increased attainment of permanent living arrangements, none of the other four reporting states found similarly conclusive evidence. Similarly, among four states using Title IV-E funds to fund services and supports for caregivers with substance abuse disorders, Illinois was the only state that demonstrated success in connecting caregivers to treatment services. States can no longer apply for participation in federal demonstration projects because the program authorization expired in March 2006.

States Generally Found HHS Reviews and Technical Assistance Helpful, but HHS's Monitoring System Has Limitations

States we interviewed reported that HHS's CFSR and technical assistance efforts were helpful in implementing federal child welfare requirements. Similarly, nearly all states in our survey reported that HHS-sponsored technical assistance was helpful to some degree. However, HHS officials said that limitations in their technical assistance tracking system made it difficult to maximize its use as a management tool.

HHS Child and Family Services Reviews Helped States Assess Needs and Make Improvements

State child welfare officials generally reported that HHS's CFSRs have assisted them in assessing their efforts and ability to achieve safety, permanence, and well-being for the children under their care and in developing the necessary program improvement plans to meet federal requirements in this regard. Specifically, state officials responding to our survey reported that the reviews had helped them to implement systemwide child welfare reform, improve their quality assurance systems, and increase their collaboration with other child welfare-related agencies.
Additionally, child welfare officials in three of the five states we visited reported that the reviews prompted them to develop interagency strategies for providing an array of needed services, such as mental health services and education for children and families. Most states reported little need to improve the usefulness of the CFSR process in helping the state ensure safety, permanence, and well-being of children in the child welfare system, but some state officials expressed concern about the outcome measures used. Of the 48 states responding to our survey question about the CFSR process, 33 reported that the usefulness of the CFSR process needed little or no improvement, or only some improvement. Some state officials we interviewed were concerned, however, that the outcomes being measured in the reviews may not accurately reflect their child welfare program performance. In addition, officials in three of the five states we visited expressed concern about the small number of sample cases used by the reviewers to evaluate their state's performance. Specifically, officials in one state reported that evaluating only 50 cases left the state with uncertainty about how pervasive problems are in the state and what its priority areas should be. Although the first round of HHS's reviews showed that no state had reached substantial conformity on all of the federal outcome goals for state child welfare systems, HHS officials said that states had made progress in implementing federal requirements and improving their child welfare systems. For example, HHS officials said that the quality of data has improved because states have put a greater focus on having accurate and reliable data, and many states are examining their data in greater detail than before in an effort to identify problems in their child welfare systems and to figure out how to meet the CFSR requirements.
The next round of reviews is scheduled to begin at the end of fiscal year 2006, when HHS officials will once again measure states' progress in meeting federal child welfare requirements.

States Generally Viewed Federal Technical Assistance as Helpful, but HHS's Monitoring System Has Limitations as a Management Tool

Nearly all states reported in our survey that the federal technical assistance they received to improve their child welfare programs was helpful to some degree, although some resources were given higher ratings than others, as shown in table 2. States generally reported the highest levels of satisfaction with assistance provided by two of HHS's national resource centers that had primary responsibility for helping with child protective services and organizational improvement. The federal resources providing technical assistance in the areas of substance abuse, community-based child abuse prevention, and abandoned infants received the fewest requests from states. HHS's Technical Assistance Tracking Internet System (TATIS) monitors federal training and technical assistance requested by and provided to states, but several limitations hinder its use as a management tool. One limitation is that the system was designed to capture only assistance provided by eight NRCs. (See app. IV.) Because TATIS does not capture training and technical assistance provided by the remaining three NRCs, other federal resource centers, and HHS's regional offices, HHS officials do not have a complete picture of the assistance requested by states and provided to them. For example, the NRC for substance abuse is not required to enter data into TATIS, but NRC records show that it provided 47 on-site technical assistance visits to 16 states in fiscal year 2005, making it one of the most frequent providers of on-site federal assistance.
A second limitation is that the eight NRCs do not always enter information into TATIS as required, raising concerns about the ability of HHS to determine how often states use its various resources and for what purposes. For example, an official from one of the eight NRCs we interviewed said that his center is not as conscientious as it should be about entering all of the required data into TATIS. HHS officials said that without this information, it is difficult to determine how best to allocate technical assistance resources to help maximize states' ability to address child welfare issues.

Conclusions

States have been facing some of the same child welfare challenges for many years, and they predict that some emerging challenges will have impacts in the next several years. The federal government has funded hundreds of programs to meet families' mental health, substance abuse treatment, and other social service needs that could help prevent child maltreatment and keep families together. However, the inability to query the federal government's central source of information—the CFDA—to identify which services across programs and agencies are available in various locations makes it difficult to determine the extent of services available at the local level to serve children and families in the child welfare system. HHS has provided state child welfare systems an array of training and technical assistance that states report as helpful for improving their child welfare programs. Maximizing the value of this training and technical assistance is compromised, however, because HHS's information system does not capture all training and technical assistance provided to states from various HHS-sponsored providers, and compliance with the reporting requirements has not been enforced.
In the absence of complete and timely information, HHS may be limited in its ability to determine how best to allocate technical assistance resources to help maximize states' ability to address child welfare issues.

Recommendations for Executive Action

We are making the following three recommendations to the Secretary of Health and Human Services for improving awareness of and access to various social services, and improving the department's ability to manage technical assistance provided to state child welfare agencies.

Develop a strategy to centralize information on federal assistance programs that are available to meet child welfare program and service needs and that can be accessed by state and local child welfare staff and providers. This strategy could follow a previous Administration recommendation to develop an Internet-based search for services through the Catalog of Federal Domestic Assistance (CFDA) that is linked to grantees by ZIP code.

Require all HHS technical assistance providers, including HHS regional offices and all national resource centers, to enter training and technical assistance data into the department's Technical Assistance Tracking Internet System.

Establish policies and procedures to ensure that complete and accurate data are reported to the Technical Assistance Tracking Internet System in a timely manner.

Agency Comments and Our Evaluation

We provided a draft of this report to HHS for review and comment. HHS's written comments are reprinted in appendix V, and the department's technical comment was incorporated into the report. In its written comments, HHS stated that the report substantially supports many of the findings of the CFSRs and agreed with one of our three recommendations. The department agreed with our recommendation to establish policies and procedures to ensure complete and accurate reporting of data into TATIS and said it intended to provide written guidance to the resource centers requiring this reporting.
However, the department stated that the report misconstrued the intent of the CFSRs and that the remaining recommendations do not adequately match the articulated needs of state child welfare agencies. HHS disagreed with our statement that no state had achieved all of the federal outcome measures for ensuring the safety, well-being, and permanency of children. The department stated that it makes separate determinations regarding substantial conformity for each of the seven outcome measures and each of the seven systemic factors reviewed. We revised the text to state that no state had reached substantial conformity on all of the federal outcome goals for state child welfare systems, rather than that no state had achieved all of the federal performance goals. HHS disagreed with our recommendation to increase awareness of federal assistance programs by modifying the CFDA, stating that it was misleading to assume that state challenges could be significantly met or appreciably altered by a list of resources. In the department's view, the recommendation incorrectly implies that local child welfare agencies are not aware of many valuable services and underestimates the substantive knowledge of resources currently being utilized by caseworkers; child welfare staff need access to actual services or service providers rather than general information on federal assistance programs; resource lists quickly become outdated, with state and county programs and service providers changing annually based on their budgets; and certain federal programs are designed to meet the needs of very specific, and sometimes very small, populations.
We acknowledge that increasing awareness of available federal resources is not the only action needed to address the various challenges facing state child welfare agencies, but believe that caseworker awareness and referral of children and families to existing resources is an important first step in meeting the challenge to provide an adequate level of services to them. As our report states, our current and past work has found that some caseworkers were unaware of the full array of federal resources, such as health and housing services, available in their locales, or had not coordinated with other agencies and organizations to access them. We continue to support the view that modifying the CFDA would allow caseworkers and others to more easily identify services and service providers funded by federal agencies in closest proximity to the families they serve. As the department points out, modifying the CFDA would not address issues related to outdated listings of state or local resources; however, the CFDA is updated biweekly or more often in response to new or changing information regarding federal assistance. Further, while it is true that some federal programs target specific populations, these populations are often low-income or minority groups that are also served by the child welfare system. 
The department also disagreed with our recommendation to require all HHS technical assistance providers to enter data into TATIS, stating that the system was not designed to monitor all technical assistance provided to states, nor would it be an effective stand-alone mechanism to determine how best to allocate technical assistance resources to states; the recommendation does not give sufficient weight to the CFSR process; including training and technical assistance by regional offices in TATIS would be superfluous as these activities are in regional office job descriptions; and the recommendation does not recognize that training and technical assistance is provided to a variety of audiences beyond the state child welfare agencies, and including more information would confuse the tracking of technical assistance. Our report recognizes that TATIS was designed to monitor on-site training and technical assistance provided by 8 of the 11 resource centers. However, we continue to believe that expanding TATIS to capture the substantial on-site assistance provided by the remaining resource centers and other HHS providers would enhance its contribution to the department in determining how best to allocate training and technical assistance resources to states. We acknowledge the benefit of the CFSRs in identifying states’ technical assistance needs. However, state implementation of program improvement plans in response to the CFSR findings is only a part of training and technical assistance requested and provided to states. In addition, while regional office job descriptions may include training and technical assistance responsibilities, we do not believe that capturing the amount and type of this assistance actually provided to states would be superfluous, but rather provide a more complete picture of the on-site assistance received by states. 
Further, our recommendation was not intended to include training and technical assistance provided to audiences beyond the state child welfare agencies, and we modified the report text to clarify this point. Copies of this report are being sent to the Secretary of Health and Human Services, relevant congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be made available at no charge on GAO's Web site at http://www.gao.gov. Please contact me on (202) 512-7215 if you or your staff have any questions about this report. Other contacts and major contributors are listed in appendix VI.

Appendix I: Objectives, Scope, and Methodology

We were asked to examine (1) the primary challenges state child welfare agencies face in their efforts to ensure the safety, well-being, and permanency of the children under their supervision; (2) the changes states have made since January 1, 2002, to improve the outcomes for children in the child welfare system; and (3) the extent to which states participating in the Department of Health and Human Services (HHS) Child and Family Services Reviews (CFSR) and technical assistance efforts find the assistance to be helpful. As part of this work, GAO also examined the extent to which states had developed written child welfare disaster plans for dealing with the dispersion of children under state care to other counties or states because of disasters. In July 2006, GAO issued the report Child Welfare: Federal Action Needed to Ensure States Have Plans to Safeguard Children in the Child Welfare System Displaced by Disasters (GAO-06-944) in response to the disaster planning part of your request. To learn more about these objectives, we conducted a Web-based survey of state child welfare directors and conducted site visits in five states where we interviewed state officials.
We also interviewed federal child welfare officials and representatives from national child welfare organizations concerning state child welfare programs, the changes that states had made since 2002 to improve the outcomes for children, and the extent to which states participated in HHS's CFSR and technical assistance efforts. In addition, we reviewed several national studies and our previous child welfare reports to determine the challenges that states face in their efforts to ensure the safety, well-being, and permanency of the children under their supervision. Finally, we analyzed agency documentation, legislation, and other documentation related to child welfare programs and requirements. We conducted our work between October 2005 and August 2006 in accordance with generally accepted government auditing standards.

Web-Based Survey

To obtain state perspectives on our objectives and the relative priority state child welfare agencies place on the challenges they face, we conducted a Web-based survey of child welfare directors in the 50 states, the District of Columbia, and Puerto Rico. The survey was conducted using a self-administered electronic questionnaire posted on the Web. We contacted directors via e-mail announcing the survey and sent follow-up e-mails to encourage responses. The survey data were collected between February and May 2006. We received completed surveys from 48 states, the District of Columbia, and Puerto Rico (a 96 percent response rate). The states of Massachusetts and Nebraska did not return completed surveys. To develop the survey questions, we reviewed several national studies and our previous child welfare reports to determine the challenges that states face in their efforts to ensure the safety, well-being, and permanency of the children under their supervision. We analyzed agency documentation to identify HHS's oversight and technical assistance efforts.
In November 2005, we also held two discussion groups with representatives from child welfare stakeholder groups to identify any additional issues that may not have been covered in the published documents we reviewed. The stakeholders included representatives from the Association of Administrators of the Interstate Compact on the Placement of Children, the Child Welfare League of America, the National Association of Public Child Welfare Administrators, the AARP Grandparent Information Center, the Pew Commission on Children in Foster Care, the Urban Institute, the American Bar Association Center on Children and the Law, the Center for the Study of Social Policy, the American Public Human Services Association, and Casey Family Services. We developed the questionnaire with social science survey specialists. Because these were not sample surveys, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted, in the sources of information available to respondents, or in how the data are entered into a database can introduce unwanted variability into the survey results. We took steps in the development of the questionnaire, the data collection, and the data analysis to minimize these nonsampling errors. For example, prior to administering the survey, we pretested the content and format of the questionnaire with several states to determine whether (1) the survey questions were clear, (2) the terms used were precise, (3) respondents were able to provide the information we were seeking, and (4) the questions were unbiased. We made changes to the content and format of the final questionnaire based on pretest results. Because these were Web-based surveys in which respondents entered their responses directly into our database, the possibility of data entry error was reduced.
We also performed computer analyses to identify inconsistencies in responses and other indications of error. In addition, an independent analyst verified that the computer programs used to analyze the data were written correctly.

Site Visits

We visited five states—California, New York, North Carolina, Texas, and Utah. We selected these states because they represent different types of program administration (state-administered; state-supervised and county-administered; state- and county-administered), the predominance of urban or rural characteristics, the achievement of child welfare standards on the CFSR, and changes in the number of children reported to be in foster care, and because they are geographically diverse. During these visits, we interviewed state child welfare officials and collected relevant state agency policies, procedures, and reports. Information that we gathered on our site visits represents only the conditions present in the states and local areas at the time of our visits. We cannot comment on any changes that may have occurred after our fieldwork was completed. Furthermore, our fieldwork focused on in-depth analysis of only a few selected states. On the basis of our site visit information, we cannot generalize our findings beyond the states we visited.
Appendix II: Federal Funding for State Child Welfare Programs

Final funding (in millions of dollars)

Foster care—Open-ended reimbursement of eligible state claims for maintaining children in foster care and for related administrative and training costs

Adoption assistance—Open-ended reimbursement of eligible state claims for providing subsidies to special needs adoptees and for related administrative and training costs

Foster care independence—Formula grants to states for provision of independent living services to youth expected to age out of foster care and to youth who have aged out of care

Education and training vouchers—Formula grants to states to provide education and training vouchers to youth who have aged out of foster care

Adoption incentives—Bonus funds to states that increase the number of foster children adopted

Promoting safe and stable families—Formula grants to states for four kinds of services: family preservation, family support, time-limited reunification, and adoption promotion and support

Child welfare services—Formula grants to states to improve public child welfare services

Court improvement—Formula grants to states' highest courts to strengthen handling of court child welfare proceedings

Child welfare training—Competitive grants to private nonprofit institutions of higher education to develop and improve education and training programs for child welfare workers

Child Abuse Prevention and Treatment Act (CAPTA)

Appendix III: Type, Description, and Status of Title IV-E Waiver Demonstration Programs, as of August 2006

Status of demonstration by state: Illinois (2008); Delaware (2002); North Carolina, Oregon (2009); Maryland (2004); Indiana (2005); North Carolina, Ohio, Oregon (2009); New Hampshire (2005); Delaware (2002); Maryland (2002); Michigan (2003); Colorado, Washington (2003); Connecticut, Maryland (2002);
California (2005); Mississippi (2004); Maine (2004)

Appendix IV: Department of Health and Human Services Child Welfare National Resource Centers and Whether They Are Included in the Technical Assistance Tracking Internet System Database

National resource centers included in the Technical Assistance Tracking Internet System database (TATIS):

Assists with strategic planning, CFSRs, outcome evaluation, and workforce training and development; facilitates the involvement of stakeholders; and monitors the technical assistance progress.

Works to build state and local CPS capacity, assists in determining eligibility for the Child Abuse Prevention and Treatment Act (CAPTA) grant, and provides support to state liaison officers.

Provides states legal and judicial issue analysis for the CFSRs and assists in action planning and implementation of program improvement plans.

Provides assistance through all stages of the CFSRs, emphasizes family-centered principles and practices, and builds knowledge of foster care issues.

Provides support for technical issues, conducts data use and management training, and helps in preparation and use of state data profiles.

Analyzes adoption and permanency options, provides support for increasing cultural competency, and examines systematic problems and solutions.

Supports youth participation in child welfare policy, program development, and planning and offers assistance for foster care independence and education voucher program implementation.

Provides training and technical assistance on quality recruitment and retention services for foster and adoptive families.

National resource centers not included in TATIS:

National Center on Substance Abuse and Child Welfare (cosponsored with the Substance Abuse and Mental Health Services Administration)—Works to develop knowledge and provides assistance to child welfare agencies on substance abuse related disorders in the child welfare and family court systems.
Works to enhance the quality of social and health services for children abandoned because of the presence of drugs or HIV in the family.

Focuses on primary child abuse and neglect prevention and assists in implementation of family support strategies.

Appendix V: Comments from the Department of Health and Human Services

Appendix VI: GAO Contacts and Staff Acknowledgments

Staff Acknowledgments

Cindy Ayers (Assistant Director) and Arthur T. Merriam Jr. (Analyst-in-Charge) managed all aspects of the assignment. Mark E. Ward made significant contributions to this report in all aspects of the work. Christopher T. Langford and Kathleen L. Boggs analyzed the results of the GAO survey of child welfare challenges and assisted in the report development. In addition, Carolyn M. Taylor contributed to the initial design of the engagement; Carolyn Boyce provided technical support in design and methodology, survey research, and statistical analysis; James Rebbe provided legal support; and Charles Willson assisted in the message and report development.

Related GAO Products

Child Welfare: Federal Action Needed to Ensure States Have Plans to Safeguard Children in the Child Welfare System Displaced by Disasters. GAO-06-944. Washington, D.C.: July 28, 2006.

Foster Care and Adoption Assistance: Federal Oversight Needed to Safeguard Funds and Ensure Consistent Support for States' Administrative Costs. GAO-06-649. Washington, D.C.: June 15, 2006.

Child Welfare: Federal Oversight of State IV-B Activities Could Inform Action Needed to Improve Services to Families and Statutory Compliance. GAO-06-787T. Washington, D.C.: May 23, 2006.

Indian Child Welfare Act: Existing Information on Implementation Issues Could Be Used to Target Guidance and Assistance to States. GAO-05-290. Washington, D.C.: April 4, 2005.

Foster Youth: HHS Actions Could Improve Coordination of Services and Monitoring of States' Independent Living Programs. GAO-05-25. Washington, D.C.: November 18, 2004.

D.C. Child and Family Services Agency: More Focus Needed on Human Capital Management Issues for Caseworkers and Foster Parent Recruitment and Retention. GAO-04-1017. Washington, D.C.: September 24, 2004.

Child and Family Services Reviews: States and HHS Face Challenges in Assessing and Improving State Performance. GAO-04-781T. Washington, D.C.: May 13, 2004.

D.C. Family Court: Operations and Case Management Have Improved, but Critical Issues Remain. GAO-04-685T. Washington, D.C.: April 23, 2004.

Child and Family Services Reviews: Better Use of Data and Improved Guidance Could Enhance HHS's Oversight of State Performance. GAO-04-333. Washington, D.C.: April 20, 2004.

Child Welfare: Improved Federal Oversight Could Assist States in Overcoming Key Challenges. GAO-04-418T. Washington, D.C.: January 28, 2004.

D.C. Family Court: Progress Has Been Made in Implementing Its Transition. GAO-04-234. Washington, D.C.: January 6, 2004.

Child Welfare: States Face Challenges in Developing Information Systems and Reporting Reliable Child Welfare Data. GAO-04-267T. Washington, D.C.: November 19, 2003.

Child Welfare: Enhanced Federal Oversight of Title IV-B Could Provide States Additional Information to Improve Services. GAO-03-956. Washington, D.C.: September 12, 2003.

Child Welfare: Most States Are Developing Statewide Information Systems, but the Reliability of Child Welfare Data Could Be Improved. GAO-03-809. Washington, D.C.: July 31, 2003.

D.C. Child and Family Services: Better Policy Implementation and Documentation of Related Activities Would Help Improve Performance. GAO-03-646. Washington, D.C.: May 27, 2003.

Child Welfare and Juvenile Justice: Federal Agencies Could Play a Stronger Role in Helping States Reduce the Number of Children Placed Solely to Obtain Mental Health Services. GAO-03-397. Washington, D.C.: April 21, 2003.

Foster Care: States Focusing on Finding Permanent Homes for Children, but Long-Standing Barriers Remain. GAO-03-626T. Washington, D.C.: April 8, 2003.

Child Welfare: HHS Could Play a Greater Role in Helping Child Welfare Agencies Recruit and Retain Staff. GAO-03-357. Washington, D.C.: March 31, 2003.

Foster Care: Recent Legislation Helps States Focus on Finding Permanent Homes for Children, but Long-Standing Barriers Remain. GAO-02-585. Washington, D.C.: June 28, 2002.

District of Columbia Child Welfare: Long-Term Challenges to Ensuring Children's Well-Being. GAO-01-191. Washington, D.C.: December 29, 2000.

Child Welfare: New Financing and Service Strategies Hold Promise, but Effects Unknown. GAO/T-HEHS-00-158. Washington, D.C.: July 20, 2000.

Foster Care: States' Early Experiences Implementing the Adoption and Safe Families Act. GAO/HEHS-00-1. Washington, D.C.: December 22, 1999.

Foster Care: HHS Could Better Facilitate the Interjurisdictional Adoption Process. GAO/HEHS-00-12. Washington, D.C.: November 19, 1999.

Foster Care: Kinship Care Quality and Permanency Issues. GAO/HEHS-99-32. Washington, D.C.: May 6, 1999.

Foster Care: Agencies Face Challenges Securing Stable Homes for Children of Substance Abusers. GAO/HEHS-98-182. Washington, D.C.: September 30, 1998.
Despite substantial federal and state investment, states have not been able to meet all outcome measures for children in their care. Given the complexity of the challenges that state child welfare agencies face, GAO was asked to determine (1) the primary challenges state child welfare agencies face in their efforts to ensure the safety, well-being, and permanent placement of the children under their supervision; (2) the changes states have made to improve the outcomes for children in the child welfare system; and (3) the extent to which states participating in the Department of Health and Human Services (HHS) Child and Family Services Reviews (CFSR) and technical assistance efforts find the assistance to be helpful. GAO surveyed child welfare agencies in 50 states, the District of Columbia, and Puerto Rico and visited 5 states, interviewed program officials, and reviewed laws, policies, and reports. In response to a GAO survey, state child welfare agencies identified three primary challenges as most important to resolve to improve outcomes for children under their supervision: providing an adequate level of services for children and families, recruiting and retaining caseworkers, and finding appropriate homes for certain children. State officials also identified three challenges of increasing concern over the next 5 years: children's growing exposure to illegal drugs, increased demand to provide services for children with special needs, and changing demographic trends or cultural sensitivities in providing services for some groups of children in the states' child welfare systems. Most states reported that they had implemented initiatives to address challenges associated with improving the level of services, recruiting and retaining caseworkers, and finding appropriate homes for children. These initiatives, however, did not always mirror the major challenges. 
For example, with respect to services, states most frequently identified that they were challenged by the lack of mental health and substance abuse services for children and families, yet only a fourth of the dissatisfied states reported having initiatives to improve the level of these services. In states where evaluations of their initiatives had been completed under a federal demonstration project, the evaluations generally showed that states had achieved mixed results across child welfare outcomes. States we visited reported that HHS reviews of their child welfare systems and training and technical assistance efforts helped them improve their child welfare programs. For example, officials in three of the five states we visited reported that the CFSRs prompted them to develop interagency strategies for providing an array of needed services to children and families. Similarly, nearly all states in our survey reported that HHS-sponsored technical assistance was helpful to some degree. However, HHS officials said that several factors limited their ability to use their technical assistance tracking system as a management tool. For example, not all service providers are included in the tracking system, and some providers inconsistently enter required data into the system. As a result, HHS may be limited in its ability to determine how best to allocate technical assistance resources to help maximize states' ability to address child welfare issues.
Background

Oil—the product of the burial and transformation of biomass over the last 200 million years—has historically had no equal as an energy source for its intrinsic qualities of extractability, transportability, versatility, and cost. But the total amount of oil underground is finite, and, therefore, production will one day reach a peak and then begin to decline. Such a peak may be involuntary if supply is unable to keep up with growing demand. Alternatively, a production peak could be brought about by voluntary reductions in oil consumption before physical limits on continued supply growth are reached. Not surprisingly, concerns have arisen in recent years about (1) the relationship between the growing consumption of oil and the availability of oil reserves and (2) the impact of potentially dwindling supplies and rising prices on the world's economy and social welfare. Following a peak in world oil production, the rate of production would eventually decrease and, necessarily, so would the rate of consumption of oil. Oil can be found and produced from a variety of sources. To date, world oil production has come almost exclusively from what are considered to be "conventional sources" of oil. While there is no universally agreed-upon definition of what is meant by conventional sources, IEA states that conventional sources can be produced using today's mainstream technologies, compared with "nonconventional sources" that require more complex or more expensive technologies to extract, such as oil sands and oil shale. Distinguishing between conventional and nonconventional oil sources is important because the additional cost and technological challenges surrounding production of nonconventional sources make these resources more uncertain. However, this distinction is further complicated because what is considered to be a mainstream technology can change over time.
For example, offshore oil deposits were considered to be a nonconventional source 50 years ago; however, today they are considered conventional. For the purpose of this report, and consistent with IEA's classification, we define nonconventional sources as including oil sands, heavy oil deposits, and oil shale. Some oil is being produced from these nonconventional sources today. For example, in 2005 Canada produced about 1.6 million barrels per day of oil from oil sands, and Venezuelan production of extra-heavy oil for 2005 was projected to be about 600,000 barrels per day. Currently, however, production from these sources is very small compared with total world oil production.

Oil Production Has Peaked in the United States and Most Other Countries Outside the Middle East

According to IEA, most countries outside the Middle East have reached their peak in conventional oil production, or will do so in the near future. The United States is a case in point. Even though the United States is currently the third-largest oil-producing nation, U.S. oil production peaked around 1970 and has been on a declining trend ever since. (See fig. 1.) Looking toward the future, EIA projects that U.S. deepwater oil production will slightly boost total U.S. production in the near term. However, this increase will end about 2016, and then U.S. production will continue to decline. Given these projections, it is clear that future increases in U.S. demand for oil will need to be fulfilled through increases in production in the rest of the world. Increasing production in other countries has to date been able to more than make up for declining U.S. production and has resulted in increasing world production. (See fig. 2.)

Oil Is Critical in Satisfying the U.S. and World Demand for Energy

Oil accounts for approximately one-third of all the energy used in the world.
Following the record oil prices associated with the Iranian Revolution in 1979-80 and with the start of the Iran-Iraq war in 1980, there was a drop in total world oil consumption, from about 63 million barrels per day in 1980 to 59 million barrels per day in 1983. Since then, however, world consumption of petroleum products has increased, totaling about 84 million barrels per day in 2005. In the United States, consumption of petroleum products increased an average of 1.65 percent annually from 1983 to 2004, and averaged 20.6 million barrels per day in 2005, representing about one-quarter of all world consumption. EIA projects that U.S. consumption will continue to increase and will reach 27.6 million barrels per day in 2030. As figure 3 shows, the transportation sector is by far the largest U.S. consumer of petroleum, accounting for two-thirds of all U.S. consumption and relying almost entirely on petroleum to operate. Within the transportation sector, light vehicles are the largest consumers of petroleum energy, accounting for approximately 60 percent of the transportation sector's consumption of petroleum-based energy in the United States. Figure 3 also shows that while consumption of petroleum products in other sectors has remained relatively constant or increased slightly since the early 1980s, petroleum consumption in the transportation sector has grown at a significant rate.

Relationship of Supply and Demand of Oil to Oil Price

The price of oil is determined in the world market and depends mainly on the balance between world demand and supply. Recent world production of oil has been running at near capacity to meet rising demand, which has put upward pressure on oil prices.
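The consumption figures above can be cross-checked with simple compound-growth arithmetic. The sketch below is illustrative only and not part of the original analysis; all inputs are the figures quoted in the text. It shows that EIA's 2030 projection implies average annual growth of roughly 1.2 percent, somewhat slower than the 1.65 percent historical average, which would instead yield about 31 million barrels per day by 2030.

```python
# Illustrative cross-check of the U.S. consumption figures cited in the text.
us_2005 = 20.6   # U.S. consumption in 2005, million barrels per day
us_2030 = 27.6   # EIA projection for 2030, million barrels per day
years = 2030 - 2005

# Compound annual growth rate implied by EIA's projection
implied_rate = (us_2030 / us_2005) ** (1 / years) - 1
print(f"Implied annual growth, 2005-2030: {implied_rate:.2%}")  # ~1.18%

# Extrapolation at the 1983-2004 historical average rate of 1.65 percent
extrapolated_2030 = us_2005 * 1.0165 ** years
print(f"2030 level at the historical rate: {extrapolated_2030:.1f} million barrels per day")  # ~31.0
```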
Figure 4 shows that world oil prices in nominal terms—unadjusted for inflation—are higher than at any time since 1950, although when adjusted for inflation, the high prices of 2006 are still lower than those reached in the 1979-80 price run-up following the Iranian Revolution and the beginning of the Iran-Iraq war. All else being equal, oil consumption is inversely correlated with oil price, with higher oil prices inducing consumers to reduce their oil consumption. Specifically, increases in crude oil prices are reflected in the prices of products made from crude oil, including gasoline, diesel, home heating oil, and petrochemicals. The extent to which consumers are willing and able to reduce their consumption of oil in response to price increases depends on the cost of switching to activities and lifestyles that use less oil. Because there are more options available in the longer term, consumers respond more to changes in oil prices in the longer term than in the shorter term. For example, in the short term, consumers can reduce oil consumption by driving less or more slowly; in the longer term, consumers can still take those actions and can also buy more fuel-efficient automobiles or even move closer to where they work, thereby further reducing their oil consumption. Supply and demand, in turn, affect the type of oil that is produced. Conventional oil that is less expensive to extract using lower-cost drilling techniques will be produced when oil prices are lower. Conversely, oil that is expensive to produce because of the higher cost technologies involved may not be economical to produce at low oil prices. Producers are unlikely to turn to these more expensive oil sources unless oil prices are sustained at a high enough level to make such an enterprise profitable.
Given the importance of oil in the world's energy portfolio, as cheaper oil reserves are exhausted in the future, nations will need to make the transition to more and more expensive and difficult-to-access sources of oil to meet energy demands. Recently, for example, a large discovery of oil in the Gulf of Mexico made headlines; however, this potential wealth of oil is located at a depth of over 5 miles below sea level, a fact that adds significantly to the costs of extracting that oil.

Timing of Peak Oil Production Depends on Uncertain Factors

Most studies estimate that oil production will peak sometime between now and 2040, although many of these projections cover a wide range of time, including two studies for which the range extends into the next century. Key uncertainties in trying to determine the timing of peak oil are the (1) amount of oil throughout the world; (2) technological, cost, and environmental challenges to produce that oil; (3) political and investment risk factors that may affect oil exploration and production; and (4) future world demand for oil. The uncertainties related to exploration and production also make it difficult to estimate the rate of decline after the peak.

Studies Predict Widely Different Dates for Peak Oil

Most studies estimate that oil production will peak sometime between now and 2040, although many of these projections cover a wide range of time, including two studies for which the range extends into the next century. Figure 5 shows the estimates of studies we examined.

Amount of Oil in the Ground Is Uncertain

Studies that predict the timing of a peak use different estimates of how much oil remains in the ground, and these differences explain some of the wide ranges of these predictions.
Estimates of how much oil remains in the ground are highly uncertain because much of these data are self-reported and unverified by independent auditors; many parts of the world have yet to be fully explored for oil; and there is no comprehensive assessment of oil reserves from nonconventional sources. This uncertainty surrounding estimates of oil resources in the ground encompasses the uncertainty surrounding estimates of proven reserves as well as the uncertainty surrounding expected increases in these reserves and estimated future oil discoveries. Oil and Gas Journal and World Oil, two primary sources of proven reserves estimates, compile data on proven reserves from national and private company sources. Some of this information is publicly available from oil companies that are subject to public reporting requirements—for example, information provided by companies that are publicly traded on U.S. stock exchanges and are subject to the filing requirements of U.S. federal securities laws. Information filed pursuant to these laws is subject to liability standards, and, therefore, there is a strong incentive for these companies to make sure their disclosures are complete and accurate. On the other hand, companies that are not subject to these federal securities laws, including companies wholly owned by various OPEC countries where the majority of reserves are located, are not subject to these filing requirements and their related liability standards. Some experts believe OPEC estimates of proven reserves to be inflated. For example, OPEC estimates increased sharply in the 1980s, corresponding to a change in OPEC's quota rules that linked a member country's production quota in part to its remaining proven reserves. In addition, many OPEC countries' reported reserves remained relatively unchanged during the 1990s, even as they continued high levels of oil production.
For example, IEA reports that reserves estimates in Kuwait were unchanged from 1991 to 2002, even though the country produced more than 8 billion barrels of oil over that period and did not make any important new oil discoveries. At a 2005 National Academy of Sciences workshop on peak oil, OPEC defended its reserves estimates as accurate. The potential unreliability of OPEC's self-reported data is particularly problematic with respect to predicting the timing of a peak because OPEC holds most of the world's current estimated proven oil reserves. On the basis of Oil and Gas Journal estimates as of January 2006, we found that of the approximately 1.1 trillion barrels of proven oil reserves worldwide, about 80 percent are located in the OPEC countries, compared with about 2 percent in the United States. Figure 6 shows this estimate in more detail. USGS, another primary source of reported estimates, provides oil resources estimates, which are different from proved reserves estimates. Oil resources estimates are significantly higher because they estimate the world's total oil resource base, rather than just what is now proven to be economically producible. USGS estimates of the resource base include past production and current reserves as well as the potential for future increases in current conventional oil reserves—often referred to as reserves growth—and the amount of estimated conventional oil that has the potential to be added to these reserves. Estimates of reserves growth and those resources that have the potential to be added to oil reserves are important in determining when oil production may peak. However, estimating these potential future reserves is complicated by the fact that many regions of the world have not been fully explored and, as a result, there is limited information.
For example, in its 2000 assessment, USGS provides a mean estimate of 732 billion barrels that have the potential to be added as newly discovered conventional oil, with as much as 25 percent from the Arctic—including Greenland, Northern Canada, and the Russian portion of the Barents Sea. However, relatively little exploration has been done in this region, and there are large portions of the world where the potential for oil production exists, but where exploration has not been done. According to USGS, there is less uncertainty in regions where wells have been drilled, but even in the United States, one of the areas that has seen the greatest exploration, some areas have not been fully explored, as illustrated by the recent discovery of a potentially large oil field in the Gulf of Mexico. Limited information on oil-producing regions worldwide also leads USGS to base its estimate of reserves growth on how reserves estimates have grown in the United States. However, some experts criticize this methodology; they believe such an estimate may be too high because the U.S. experience overestimates increases in future worldwide reserves. In contrast, EIA believes the USGS estimate may be too low. In 2005, USGS released a study showing that its prediction of reserves growth has been in line with the world's experience from 1996 to 2003. Given such controversy, uncertainty remains about this key element of estimating the amount of oil in the ground. In 2000, in its most recent full assessment of the world's key oil regions, USGS provided a range of estimates of remaining world conventional oil resources. The mean of this range was about 2.3 trillion barrels, comprising about 890 billion barrels in current reserves and 1.4 trillion barrels that have the potential to be added to oil reserves in the future. Further contributing to the uncertainty of the timing of a peak is the lack of a comprehensive assessment of oil from nonconventional sources.
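The components of the USGS mean estimate can be reconciled with simple arithmetic. The sketch below is illustrative only; the reserves-growth figure is derived here by subtraction from the figures quoted in the text, not quoted from USGS.

```python
# Illustrative reconciliation of the USGS 2000 mean estimate (billions of barrels).
current_reserves = 890       # proven conventional reserves cited in the text
potential_additions = 1400   # potential future additions to reserves cited in the text
undiscovered = 732           # USGS mean estimate of undiscovered conventional oil

# Total remaining conventional resources: about 2.3 trillion barrels
total = current_reserves + potential_additions
print(f"Total remaining conventional resources: ~{total / 1000:.1f} trillion barrels")

# The balance of the potential additions would be attributable to reserves growth
implied_reserves_growth = potential_additions - undiscovered
print(f"Implied reserves-growth component: ~{implied_reserves_growth} billion barrels")
```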
For example, the three key sources of oil estimates—Oil and Gas Journal, World Oil, and USGS—do not generally include oil from nonconventional sources. This is an important issue because oil from nonconventional sources is thought to exist in large quantities. For example, IEA believes that oil from nonconventional sources—composed primarily of Canadian oil sands, extra-heavy oil deposits in Venezuela, and oil shale in the United States—could account for as much as 7 trillion barrels of oil, which could greatly delay the onset of a peak in production. However, IEA also points out that the amount of this nonconventional oil that will eventually be produced is highly uncertain, a result of the challenges facing this production. Despite this uncertainty, USGS experts noted that Canadian oil sands and Venezuelan extra-heavy oil production are under way now and also suggested that proven reserves from these sources will be growing considerably in the immediate future.

Uncertainty Remains about How Much Oil Can Be Produced from Proven Reserves, Hard-to-Reach Locations, and Nonconventional Sources

It is also difficult to project the timing of a peak in oil production because technological, cost, and environmental challenges make it unclear how much oil can ultimately be recovered from (1) proven reserves, (2) hard-to-reach locations, and (3) nonconventional sources. To increase the recovery rate from oil reserves, companies turn to enhanced oil recovery (EOR) technologies, which DOE reports have the potential to increase recovery rates from 30 to 50 percent in many locations. These technologies include injecting steam or heated water; gases, such as carbon dioxide; or chemicals into the reservoir to stimulate oil flow and allow for increased recovery. Opportunities for EOR have been most aggressively pursued in the United States, where EOR technologies currently contribute approximately 12 percent to U.S.
production, and carbon dioxide EOR alone is projected to have the potential to provide at least 2 million barrels per day by 2020. However, technological advances, such as better seismic and fluid-monitoring techniques for reservoirs during an EOR injection, may be required to make these techniques more cost-effective. Furthermore, EOR technologies are much costlier than the conventional production methods used for the vast majority of oil produced. Costs are higher because of the capital cost of equipment and operating costs, including the production, transportation, and injection of agents into existing fields and the additional energy costs of performing these tasks. Finally, EOR technologies have the potential to create environmental concerns associated with the additional energy required to conduct an EOR injection and the greenhouse gas emissions associated with producing that energy, although EIA has stated that these environmental costs may be less than those imposed by producing oil in previously undeveloped areas. Even if sustained high oil prices make EOR technologies cost-effective for an oil company, these challenges and costs may deter their widespread use. The timing of peak oil is also difficult to estimate because new sources of oil could be increasingly more remote and costly to exploit, including offshore production of oil in deepwater and ultra-deepwater. Worldwide, industry analysts report that deepwater (depths of 1,000 to 5,000 feet) and ultra-deepwater (5,000 to 10,000 feet) drilling efforts are concentrated offshore in Africa, Latin America, and North America, and capital expenditures for these efforts are expected to grow through at least 2011. In the United States, deepwater and ultra-deepwater drilling, primarily in the Gulf of Mexico, could reach 2.2 million barrels per day in 2016, according to EIA estimates. However, accessing and producing oil from these locations present several challenges. 
At deepwater depths, penetrating the earth and efficiently operating drilling equipment is difficult because of the extreme pressure and temperature. In addition, these conditions can compromise the endurance and reliability of operating equipment. Operating costs for deepwater rigs are 3.0 to 4.5 times those for typical shallow water rigs. Capital costs, including platforms and underwater pipeline infrastructures, are also greater. Finally, deepwater and ultra-deepwater drilling efforts generally face environmental concerns similar to those of shallow water drilling efforts, although some deepwater operations may pose greater environmental concerns to sensitive deepwater ecosystems. It is unclear how much oil can be recovered from nonconventional sources. Recovery from these sources could delay a peak in oil production or slow the rate of decline in production after a peak. Expert sources disagree concerning the significance of the role these nonconventional sources will play in the future. DOE officials we spoke with emphasized the belief that nonconventional oil will play a significant role in the very near future as conventional oil production is unable to meet the increasing demand for oil. However, IEA estimates of oil production have conventional oil continuing to comprise almost all of production through 2030. Currently, production of oil from key nonconventional sources of oil—oil sands, heavy and extra-heavy oil deposits, and oil shale—is more costly and presents environmental challenges.

Oil Sands

Oil sands are deposits of bitumen, a thick, sticky form of crude oil that is so heavy and viscous it will not flow unless heated. While most conventional crude oil flows naturally or is pumped from the ground, oil sands must be mined or recovered "in-situ" before being converted into an upgraded crude oil that can be used by refineries to produce gasoline and diesel fuels.
Alberta, Canada, contains at least 85 percent of the world’s proven oil sands reserves. In 2005, worldwide production of oil sands, largely from Alberta, contributed approximately 1.6 million barrels of oil per day, and production is projected to grow to as much as 3.5 million barrels per day by 2030. Oil sand deposits are also located domestically in Alabama, Alaska, California, Texas, and Utah. Production from oil sands, however, presents significant environmental challenges. The production process uses large amounts of natural gas, which generates greenhouse gases when burned. In addition, large-scale production of oil sands requires significant quantities of water, typically produces large quantities of contaminated wastewater, and alters the natural landscape. These challenges may ultimately limit production from this resource, even if sustained high oil prices make production profitable. Heavy and Extra-Heavy Oils Heavy and extra-heavy oils are dense, viscous oils that generally require advanced production technologies, such as EOR, and substantial processing to be converted into petroleum products. Heavy and extra-heavy oils differ in their viscosities and other physical properties, but advanced recovery techniques like EOR are required for both types of oil. Known extra-heavy oil deposits are primarily in Venezuela, which holds almost 90 percent of the world’s proven extra-heavy oil reserves. Venezuelan production of extra-heavy oil was projected to be 600,000 barrels of oil per day in 2005 and is projected to be sustained at this rate through 2040. Heavy oil can be found in Alaska, California, and Wyoming and may exist in other countries besides the United States and Venezuela.
Like production from oil sands, however, heavy oil production in the United States presents environmental challenges in its consumption of other energy sources, which contributes to greenhouse gas emissions, and in potential groundwater contamination from the injectants needed to thin the oil enough that it will flow through pipes. Oil Shale Oil shale is sedimentary rock containing solid bituminous materials that release petroleum-like liquids when the rock is heated. The world’s largest known oil shale deposit covers portions of Colorado, Utah, and Wyoming, but other countries, such as Australia and Morocco, also have oil shale resources. Oil shale production is under consideration in the United States, but considerable doubts remain concerning its ultimate technical and commercial feasibility. Production from oil shale is energy-intensive, requiring other energy sources to heat the shale to about 900 to 1,000 degrees Fahrenheit to extract the oil. Furthermore, oil shale production is projected to contaminate local surface water with salts and toxic substances that leach from spent shale. These factors may limit the amount of oil from shale that can be produced, even if oil prices are sustained at levels high enough to offset the additional production costs. More detailed information on these technologies is provided in appendix III. Political and Investment Risk Factors Create Uncertainty about the Future Rate of Oil Exploration and Production Political and investment risk factors also could affect future oil exploration and production and, ultimately, the timing of peak oil production. These factors include changing political conditions and investment climates in many countries that have large proven oil reserves. Experts we spoke with told us that they considered these factors important in affecting future oil exploration and production.
Political Conditions Create Uncertainties about Oil Exploration and Production In many countries with proven reserves, oil production could be shut down by wars, strikes, and other political events, thus reducing the flow of oil to the world market. If these events occurred repeatedly, or in many different locations, they could constrain exploration and production, resulting in a peak despite the existence of proven oil reserves. For example, according to a news account, crude oil output in Iraq dropped from 3.0 million barrels per day before the 1990 gulf war to about 2.0 million barrels per day in 2006, and a labor strike in the Venezuelan oil sector led to a drop in exports to the United States of 1.2 million barrels. Although these were isolated and temporary oil supply disruptions, if enough similar events occurred with sufficient frequency, the overall impact could constrain production capacity, thus making it impossible for supply to expand along with demand for oil. Using a measure of political risk that assesses the likelihood that events such as civil wars, coups, and labor strikes will occur in a magnitude sufficient to reduce a country’s gross domestic product (GDP) growth rate over the next 5 years, we found that four countries—Iran, Iraq, Nigeria, and Venezuela—that possess proven oil reserves greater than 10 billion barrels (high reserves) also face high levels of political risk. These four countries contain almost one-third of worldwide oil reserves. Countries with medium or high levels of political risk contained 63 percent of proven worldwide oil reserves, on the basis of Oil and Gas Journal estimates of oil reserves. (See fig. 7.) Even in the United States, political considerations may affect the rate of exploration and production. For example, restrictions imposed to protect environmental assets mean that some oil may not be produced. 
Interior’s Minerals Management Service estimates that approximately 76 billion barrels of oil lie in undiscovered fields offshore in the U.S. outer continental shelf. However, Congress has enacted moratoriums on drilling and exploration in this area to protect coastlines from unintended oil spills. In addition, policies on federal land use need to take into account multiple uses of the land, including environmental protection. Environmental restrictions may affect a peak in oil production by barring oil exploration and production in environmentally sensitive areas. Investment Climate Creates Uncertainty about Oil Exploration and Production Foreign investment in the oil sector could be necessary to bring oil to the world market, according to studies we reviewed and experts we consulted, but many countries have restricted foreign investment. Lack of investment could hasten a peak in oil production because the proper infrastructure might not be available to find and produce oil when needed, and because technical expertise may be lacking. The important role foreign investment plays in oil production is illustrated in Kazakhstan, where the National Commission on Energy Policy found that opening the energy sector to foreign investment in the early 1990s led to a doubling in oil production between 1998 and 2002. In addition, we found that direct foreign investment in Venezuela was strongly correlated with oil production in that country, and that when foreign investment declined between 2001 and 2004, oil production also declined. Industry officials told us that lack of technical expertise could lead to less sophisticated drilling techniques that actually reduce the ability to recover oil in more complex reservoirs. For example, according to industry officials, some Russian wells have difficulties with high water cut—that is, a high ratio of water to oil—making oil difficult to get out of the ground at current prices. 
This water cut problem stems from the failure to use technically advanced methods when the wells were initially drilled. We have previously reported that the Venezuelan national oil company, PDVSA, lost technical expertise when it fired thousands of employees following a strike in 2002 and 2003. In contrast, other national oil companies, such as Saudi Aramco, are widely perceived to possess considerable technical expertise. According to our analysis, 85 percent of the world’s proven oil reserves are in countries with medium-to-high investment risk or where foreign investment is prohibited, on the basis of Oil and Gas Journal estimates of oil reserves. (See fig. 8.) For example, over one-third of the world’s proven oil reserves lie in only five countries—China, Iran, Iraq, Nigeria, and Venezuela—all of which have a high likelihood of seeing a worsening investment climate. Three countries with large oil reserves—Saudi Arabia, Kuwait, and Mexico—prohibit foreign investment in the oil sector, and most major oil-producing countries have some type of restrictions on foreign investment. Furthermore, some countries that previously allowed foreign investment, such as Russia and Venezuela, appear to be reasserting state control over the oil sector, according to DOE. Foreign investment in the oil sector also may be limited because national oil companies control the supply. Figure 9 indicates that, when companies are ranked by oil production, 7 of the top 10 are national or state-sponsored oil and gas companies. The 3 international oil companies among the top 10 are BP, Exxon Mobil, and Royal Dutch Shell. National oil companies may have motivations for producing oil other than meeting consumer demand. For instance, some countries use profits from national companies to support domestic socioeconomic development rather than focusing on continued development of oil exploration and production for worldwide consumption.
Given the amount of oil controlled by national oil companies, these actions have the potential to result in oil production that is not optimized to respond to increases in the demand for oil. In addition, the top 8 oil companies ranked by proven oil reserves are national companies in OPEC-member countries, and OPEC decisions could affect future oil exploration and production. For example, in some cases, OPEC countries might decide to limit current production to increase prices or to preserve oil and its revenue for future generations. Figure 10 shows IEA’s projections for total world oil production through 2030 and highlights the larger role that OPEC production will play after IEA’s projected peak in non-OPEC oil production around 2010. Future World Demand for Oil Is Uncertain Uncertainty about future demand for oil—which will influence how quickly the remaining oil is used—contributes to the uncertainty about the timing of peak oil production. EIA projects that oil will continue to be a major source of energy well into the future, with world consumption of petroleum products growing to 118 million barrels per day by 2030. Figure 11 shows world petroleum product consumption by region for 2003 and EIA’s projections for 2030. As the figure shows, EIA projects that consumption will increase across all regions of the world, but North American members of the Organization for Economic Cooperation and Development (OECD), including the United States, and non-OECD Asia, which includes China and India, are the major drivers of this growth. Future world oil demand will depend on such uncertain factors as world economic growth, future government policy, and consumer choices. Specifically: Economic growth drives demand for oil.
For example, according to IEA, the world experienced strong growth in oil consumption of 2.0 percent in 2003 and even stronger growth of 3.6 percent in 2004, from 79.8 million barrels per day to 82.6 million barrels per day. China accounted for 30 percent of this increase, driven largely by its almost 10 percent economic growth that year. EIA projects that the Chinese economy will continue to grow, but factors such as the speed of reform of ineffective state-owned companies and the development of capital markets add uncertainty to such projections and, as a result, to the level of future oil demand in China. Future government policy can also affect oil demand. For example, environmental concerns about gasoline’s emissions of carbon dioxide, a greenhouse gas, may encourage future reductions in oil demand if these concerns are translated into policies that promote biofuels. Consumer choices about conservation also can affect oil demand and thereby influence the timing of a peak. For example, if U.S. consumers were to purchase more fuel-efficient vehicles in greater numbers, this could reduce future oil demand in the United States, potentially delaying a time at which oil supply is unable to keep pace with oil demand. Such uncertainties about future oil demand ultimately make estimates of the timing of a peak uncertain, as an EIA study on peak oil illustrates. Specifically, using scenarios in which future annual increases in world oil consumption ranged from 0 percent, representing no increase, to 3 percent, representing a large increase, EIA estimated a window of up to 75 years for when the peak may occur.
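To see why the assumed demand growth rate swings the estimate so widely, consider a simplified back-of-the-envelope model: with a fixed remaining recoverable resource, count the years until cumulative consumption exhausts it under different annual growth rates. The remaining-resource and consumption figures below are illustrative round numbers, not EIA estimates.

```python
# Illustrative sketch: years until a fixed remaining oil resource is
# exhausted under different annual demand growth rates.
# The figures below are hypothetical round numbers, not EIA estimates.

def years_until_exhausted(resource_bbl, consumption_bbl, growth_rate):
    """Count the years before cumulative consumption exceeds the resource."""
    years = 0
    cumulative = 0.0
    while cumulative < resource_bbl:
        cumulative += consumption_bbl
        consumption_bbl *= 1 + growth_rate
        years += 1
    return years

REMAINING = 2000.0   # hypothetical remaining resource, billion barrels
CONSUMPTION = 30.0   # roughly 82 million barrels per day, expressed per year

for rate in (0.00, 0.01, 0.03):
    years = years_until_exhausted(REMAINING, CONSUMPTION, rate)
    print(f"{rate:.0%} annual growth: resource lasts about {years} years")
```

Under these assumptions, the 0 percent and 3 percent scenarios differ by roughly three decades, which illustrates how sensitive any peak-timing estimate is to the demand growth rate alone.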
Factors That Create Uncertainty about the Timing of the Peak Also Create Uncertainty about the Rate of Decline Factors that create uncertainty about the timing of the peak—in particular, factors that affect oil exploration and production—also create uncertainty about the rate of production decline after the peak. For example, IEA reported that technology played a key role in slowing the decline and extending the life of oil production in the North Sea. Uncertainty about the rate of decline is illustrated in studies that estimate the timing of a peak. IEA, for example, estimates that this decline will range between 5 percent and 11 percent annually. Other studies assume the rate of decline in production after a peak will be the same as the rise in production that occurred before the peak. Another methodology, employed by EIA, assumes that the decline will be faster than the rise in production that occurred before the peak. The rate of decline after a peak is an important consideration because a decline that is more abrupt will likely have more adverse economic consequences than a decline that is less abrupt. Alternative Transportation Technologies Face Challenges in Mitigating the Consequences of the Peak and Decline In the United States, alternative transportation technologies have limited potential to mitigate the consequences of a peak and decline in oil production, at least in the near term, because they face many challenges that will take time and effort to overcome. If the peak and decline in oil production occur before these technologies are advanced enough to substantially offset the decline, the consequences could be severe. If the peak occurs in the more distant future, however, alternative technologies have a greater potential to mitigate the consequences.
Development and Adoption of Technologies to Displace Oil Will Take Time and Effort Development and widespread adoption of the seven alternative fuels and advanced vehicle technologies we examined will take time, and significant challenges will have to be overcome, according to DOE. These technologies include ethanol, biodiesel, biomass gas-to-liquid, coal gas-to-liquid, natural gas and natural gas vehicles, advanced vehicle technologies, and hydrogen fuel cell vehicles. Ethanol Ethanol is an alcohol-based fuel produced by fermenting plant sugars. Currently, most ethanol in the United States is made from corn, but ethanol also can be made from cellulosic matter from a variety of agricultural products, including trees, grasses, and forestry residues. Corn ethanol has been used as an additive to gasoline for many years, but it is also available as a primary fuel, most commonly as a blended mix of 85 percent ethanol and 15 percent gasoline. As a primary fuel, corn ethanol is not currently available on a large national scale, and federal agencies do not consider it to be cost-competitive with gasoline or diesel. The cost of corn feedstock, which accounts for approximately 75 percent of the production cost, is not projected to fall dramatically in the future, in part because of competing demands for agricultural land use and competing uses for corn, primarily as livestock feed, according to DOE and USDA. DOE and USDA project that more cellulosic ethanol could ultimately be produced than corn ethanol because cellulosic ethanol can be produced from a variety of feedstocks, but more fundamental reductions in production costs will be needed to make cellulosic ethanol commercially viable. Production of ethanol from cellulosic feedstocks is currently more costly than production of corn ethanol because the cellulosic material must first be broken down into fermentable sugars that can be converted into ethanol.
The production costs associated with this additional processing would have to be reduced for cellulosic ethanol to be cost-competitive with gasoline at today’s prices. In addition, corn and cellulosic ethanol are more corrosive than gasoline, and widespread commercialization of these fuels would require substantial retrofitting of the refueling infrastructure—pipelines, storage tanks, and filling stations. To store ethanol, gasoline stations may have to retrofit or replace their storage tanks, at an estimated cost of $100,000 per tank. DOE officials also reported that some private firms consider ethanol refineries too risky for significant capital investment unless the future of alternative fuels becomes more certain. Finally, widespread use of ethanol would require a turnover in the vehicle fleet because most current vehicle engines cannot effectively burn ethanol in high concentrations. Biodiesel Biodiesel is a renewable fuel that has properties similar to petroleum diesel but can be produced from vegetable oils or animal fats. It is currently used in small quantities in the United States, but it is not cost-competitive with gasoline or diesel. The cost of biodiesel feedstocks, which in the United States largely consist of soybean oil, is the largest component of production costs. The price of soybean oil is not expected to decrease significantly in the future owing to competing demands from the food industry and from soap and detergent manufacturers. These competing demands, as well as the limited land available for the production of feedstocks, are also projected to limit biodiesel’s capacity for large-volume production, according to DOE and USDA. As a result, experts believe that the total production capacity of biodiesel is ultimately limited compared with other alternative fuels.
Biomass Gas-to-Liquid Biomass gas-to-liquid (biomass GTL) is a fuel produced from biomass feedstocks by gasifying the feedstocks into an intermediary product, referred to as syngas, which is then converted into a diesel-like fuel. This fuel is not commercially produced, and a number of technological and economic challenges would need to be overcome for commercial viability. These challenges include identifying biomass feedstocks that are suitable for efficient conversion to a syngas and developing effective methods for preparing the biomass for conversion into a syngas. Furthermore, DOE researchers report that significant work remains to successfully gasify biomass feedstocks on a large enough scale to demonstrate commercial viability. In the absence of these developments, DOE reported that the costs of producing biomass GTL will be very high and significant uncertainty will surround its ultimate commercial feasibility. Coal Gas-to-Liquid Coal gas-to-liquid (coal GTL) is a fuel produced by gasifying coal into a syngas, which is then converted into a diesel-like fuel. This fuel is commercially produced outside the United States, but none of the production facilities are considered profitable. DOE reported that high capital investments—both in money and time—deter the commercial development of coal GTL in the United States. Specifically, DOE estimates that construction of a coal GTL conversion plant could cost up to $3.5 billion and would require at least 5 to 6 years. Furthermore, potential investors are deterred by the risks associated with the lengthy, uncertain, and costly regulatory process required to build such a facility. An expert at DOE also expressed concern that the infrastructure required to produce or transport coal may be insufficient.
For example, the rail network for transporting western coal is already operating at full capacity and, owing to safety and environmental concerns, there is significant uncertainty about the feasibility of expanding the production capabilities of eastern coal mines. Coal GTL production also faces serious environmental concerns because of the carbon dioxide emitted during production. To mitigate the effect of coal GTL production, researchers are considering options for combining coal GTL production with underground injection of sequestered carbon dioxide to enhance oil recovery in aging oil fields. Natural Gas and Natural Gas Vehicles Natural gas is an alternative fuel that can be used as either a compressed natural gas or a liquefied natural gas. Natural gas vehicles are currently available in the United States, but their use is limited, and production has declined in the past few years. According to DOE, large-scale commercialization of natural gas vehicles is complicated by the widespread availability and lower cost of gasoline and diesel fuels. Furthermore, demand for natural gas in other markets, such as home heating and energy generation, presents substantial competitive risks to the natural gas vehicle industry. Production costs for natural gas vehicles are also higher than for conventional vehicles because of the incremental cost associated with a high-pressure natural gas tank. For example, light-duty natural gas vehicles can cost $1,500 to $6,000 more than comparable conventional vehicles, while heavy-duty natural gas vehicles cost $30,000 to $50,000 more than comparable conventional vehicles. Regarding infrastructure, retrofitting refueling stations so that they can accommodate natural gas could cost from $100,000 to $1 million per station, depending on the size, according to DOE. Although refueling at home can be an option for some natural gas vehicles, home refueling appliances are estimated to cost approximately $2,000 each.
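One way to see why the purchase premium matters is a rough payback calculation: the premium is repaid only by the per-mile fuel savings. The premium range below comes from the figures cited above; the fuel prices, fuel economy, and annual mileage are hypothetical assumptions chosen only to illustrate the arithmetic.

```python
# Illustrative payback sketch for a light-duty natural gas vehicle.
# The $1,500-$6,000 premium range comes from the report; the fuel
# prices, fuel economy, and annual mileage are hypothetical assumptions.

def payback_years(premium, miles_per_year, mpg, gasoline_price, cng_price_gge):
    """Years for annual fuel-cost savings to repay the purchase premium."""
    gallons_per_year = miles_per_year / mpg
    annual_savings = gallons_per_year * (gasoline_price - cng_price_gge)
    return premium / annual_savings

# Hypothetical: 12,000 miles/year, 25 mpg, gasoline at $3.00/gallon,
# compressed natural gas at $2.00 per gasoline-gallon equivalent.
for premium in (1500, 6000):
    years = payback_years(premium, 12000, 25, 3.00, 2.00)
    print(f"${premium} premium: about {years:.1f} years to break even")
```

Under these assumptions the payback period spans roughly 3 to 13 years across the premium range, which helps explain why the incremental tank cost deters buyers even where fuel savings exist.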
Advanced Vehicle Technologies Advanced vehicle technologies that we considered included lightweight materials and improvements to conventional engines that increase fuel economy, as well as hybrid vehicles and plug-in hybrid electric vehicles that use an electric motor/generator and a battery pack in conjunction with an internal combustion engine. Hybrid electric vehicles are commercially available in the United States, but these are not yet considered competitive with comparable conventional vehicles. DOE experts report that demand for such vehicles is predicated on their cost-competitiveness with comparable conventional vehicles. Hybrid electric vehicles, for example, cost $2,000 to $3,500 more to buy than comparable conventional vehicles and currently constitute around 1 percent of new vehicle registrations in the United States. In addition, electric batteries in hybrid electric vehicles face technical challenges associated with their performance and reliability when exposed to extreme temperatures or harsh automotive environments. Other advanced vehicle technologies, including advanced diesel engines and plug-in hybrids, are (1) in the very early stages of commercial release or are not yet commercially available and (2) face obstacles to large-scale commercialization. For example, advanced diesel engines present an environmental challenge because, despite their high fuel efficiency, they are not expected to meet future emission standards. Federal researchers are working to enable the engine to burn more cleanly, but these efforts are costly and face technical barriers. Plug-in hybrid electric vehicles are not yet commercially feasible because of cost, technical, and infrastructure challenges facing their development.
For example, plug-in hybrid electric vehicles cost much more to produce than conventional vehicles, they require significant upgrades to home electrical systems to support their recharging, and researchers have yet to develop a plug-in hybrid with a range of more than 40 miles on battery power alone. Hydrogen Fuel Cell Vehicles A hydrogen fuel cell vehicle is powered by the electricity produced from an electrochemical reaction between hydrogen from a hydrogen-containing fuel and oxygen from the air. In the United States, these vehicles are still in the development stage, and making them commercially feasible presents a number of challenges. While a conventional gas engine costs $2,000 to $3,000 to produce, the stack of hydrogen fuel cells needed to power a vehicle costs $35,000 to produce. Furthermore, DOE researchers have yet to develop a method for feasibly storing hydrogen in a vehicle that allows a range of at least 300 miles before refueling. Fuel cell vehicles also are not yet able to last for 120,000 miles, which DOE believes to be the target for commercial viability. In addition, developing an infrastructure for distributing hydrogen—either through pipelines or through trucking—is expected to be complicated, costly, and time-consuming. Delivering hydrogen from a central source requires a large amount of energy and is considered costly and technically challenging. DOE has determined that decentralized production of hydrogen directly at filling stations could be a more viable approach than centralized production in some cases, but a cost-effective mechanism for converting energy sources into hydrogen at a filling station has yet to be developed. More detailed information on these technologies is provided in appendix IV.
Consequences Could Be Severe If Alternative Technologies Are Not Available Because development and widespread adoption of technologies to displace oil will take time and effort, an imminent peak and sharp decline in oil production could have severe consequences. The technologies we examined currently supply the equivalent of only about 1 percent of U.S. annual consumption of petroleum products, and DOE projects that even under optimistic scenarios, these technologies could displace only the equivalent of about 4 percent of annual projected U.S. consumption by around 2015. If the decline in oil production exceeded the ability of alternative technologies to displace oil, energy consumption would be constricted, and as consumers competed for increasingly scarce oil resources, oil prices would sharply increase. In this respect, the consequences could initially resemble those of past oil supply shocks, which have been associated with significant economic damage. For example, disruptions in oil supply associated with the Arab oil embargo of 1973-74 and the Iranian Revolution of 1978-79 caused unprecedented increases in oil prices and were associated with worldwide recessions. In addition, a number of studies we reviewed indicate that most of the U.S. recessions in the post-World War II era were preceded by oil supply shocks and the associated sudden rise in oil prices. Ultimately, however, the consequences of a peak and permanent decline in oil production could be even more prolonged and severe than those of past oil supply shocks. Because the decline would be neither temporary nor reversible, the effects would continue until alternative transportation technologies to displace oil became available in sufficient quantities at comparable costs. Furthermore, because oil production could decline even more each year following a peak, the amount that would have to be replaced by alternatives could also increase year by year. 
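That compounding replacement requirement can be sketched with simple arithmetic: if demand stays at the peak level while production declines geometrically, the gap that alternatives would have to fill widens each year. The starting production level below is a hypothetical round number; the 5 and 11 percent annual decline rates reflect the range that IEA estimates for post-peak decline.

```python
# Illustrative sketch: how much oil would need to be replaced each year
# after a peak, assuming demand stays flat at the peak level. Starting
# production (85 million barrels per day) is a hypothetical round number;
# the 5% and 11% decline rates reflect the range IEA estimates.

PEAK_MBD = 85.0  # hypothetical peak production, million barrels per day

def shortfall(year, decline_rate, peak=PEAK_MBD):
    """Gap between peak-level demand and declining production after `year` years."""
    return peak - peak * (1 - decline_rate) ** year

for rate in (0.05, 0.11):
    gaps = ", ".join(f"year {y}: {shortfall(y, rate):.1f} mbd" for y in (1, 5, 10))
    print(f"{rate:.0%} annual decline -> {gaps}")
```

Even at the gentler 5 percent rate, the amount to be displaced by alternatives grows from a few million barrels per day in the first year to tens of millions within a decade, which is why a sharp decline would quickly outrun the displacement capacity described above.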
Consumer actions could help mitigate the consequences of a near-term peak and decline in oil production through demand-reducing behaviors such as carpooling; teleworking; and “eco-driving” measures, such as proper tire inflation and slower driving speeds. Clearly these energy savings come at some cost in convenience and productivity, and limited research has been done to estimate the potential fuel savings associated with such efforts. However, DOE estimates that drivers could improve fuel economy between 7 and 23 percent by not exceeding speeds of 60 miles per hour, and IEA estimates that teleworking could reduce total fuel consumption in the U.S. and Canadian transportation sectors combined by between 1 and 4 percent, depending on whether teleworking is undertaken for 2 days per week or the full 5-day week, respectively. If the peak occurs in the more distant future or the decline following a peak is less severe, alternative technologies have a greater potential to mitigate the consequences. DOE projects that the alternative technologies we examined have the potential to displace up to the equivalent of 34 percent of annual U.S. consumption of petroleum products in the 2025 through 2030 time frame. However, DOE considers these projections optimistic because they assume that sufficient time and effort will be dedicated to the development of these technologies to overcome the challenges they face. More specifically, DOE assumes sustained high oil prices, above $50 per barrel, as a driving force. The level of effort dedicated to overcoming challenges to alternative technologies will depend in part on the price of oil, with higher oil prices creating incentives to develop alternatives. High oil prices also can spark consumer interest in alternatives that consume less oil. For example, new purchases of light trucks, SUVs, and minivans declined in 2005 and 2006, corresponding to a period of increasing gasoline prices.
Gasoline demand also grew more slowly in 2005 and 2006 (0.95 and 1.43 percent, respectively) than in the preceding decade, during which gasoline demand grew at an average rate of 1.81 percent. In the past, high oil prices have significantly affected oil consumption: U.S. consumption of oil fell by about 18 percent from 1979 to 1983, in part because U.S. consumers purchased more fuel-efficient vehicles in response to high oil prices. While current high oil prices may encourage development and adoption of alternatives to oil, if high oil prices are not sustained, efforts to develop and adopt alternatives may fall by the wayside. The high oil prices and fears of running out of oil in the 1970s and early 1980s encouraged investments in alternative energy sources, including synthetic fuels made from coal, but when oil prices fell, investments in these alternatives became uneconomic. More recently, private sector interest in alternative fuels has increased, corresponding to the increase in oil prices, but uncertainty about future oil prices can be a barrier to investment in risky alternative fuels projects. Recent polling data also indicate that consumers’ interest in fuel efficiency tends to increase as gasoline prices rise and decrease when gasoline prices fall. Federal Agencies Do Not Have a Coordinated Strategy to Address Peak Oil Issues Federal agency efforts that could contribute to reducing uncertainty about the timing of a peak in oil production or mitigating its consequences are spread across multiple agencies and are generally not focused explicitly on peak oil issues. Federal agency-sponsored studies have expressed growing concern over the potential for a peak, and officials from key agencies have identified options for reducing the uncertainty about the timing of a peak in oil production and mitigating its consequences. However, there is no strategy for coordinating or prioritizing such efforts.
Federal Agencies Have Many Programs and Activities Related to Peak Oil Issues, but Peak Oil Generally Is Not the Main Focus of These Efforts Federal agencies have programs and activities that could be directed to reduce uncertainty about the timing of a peak in oil production or to mitigate the consequences of such a peak. For example, with regard to reducing uncertainty, DOE provides information and analysis about global supply and demand for oil and develops projections about future trends. Specifically, DOE’s EIA regularly surveys U.S. operators to gather data about U.S. oil reserves and compiles reserves data for foreign countries from other sources. In addition, EIA prepares both a domestic and an international energy outlook, which include projections for future oil supply and demand. As previously discussed, USGS provides estimates of oil resources that have the potential to add to reserves in the United States. Interior’s Minerals Management Service also assesses oil resources in the offshore regions of the United States. In addition, several agencies conduct activities to encourage development of alternative technologies that could help mitigate the consequences of a decline in oil production. For example, DOE promotes development of alternative fuels and advanced vehicle technologies that could reduce oil consumption in the transportation sector by funding research and development of new technologies. In addition, USDA encourages development of biomass-based alternative fuels by collaborating with industry to identify and test the performance of potential biomass feedstocks and by conducting research to evaluate the cost of producing biomass fuels. DOT provides funding to encourage development of bus fleets that run on alternative fuels, promotes carpooling among consumers, and conducts outreach and education concerning telecommuting. In addition, DOT is responsible for setting fuel economy standards for automobiles and light trucks sold in the United States.
While these and other programs and activities could be used to reduce uncertainty about the timing of a peak in oil production and mitigate its consequences, agency officials we spoke with acknowledged that most of these efforts are not explicitly designed to do so. For example, DOE’s activities related explicitly to peak oil issues have been limited to conducting, commissioning, or participating in studies and workshops. Agencies Have Options to Reduce Uncertainty and Mitigate Consequences but Lack a Coordinated Strategy Several federally sponsored studies we reviewed reflect a growing concern about peak oil and identify a need for action. For example: DOE has sponsored two studies. A 2003 study highlighted the benefit of reducing the uncertainty surrounding the timing of a peak to mitigate its potentially severe global economic consequences. A 2005 study examined mitigating the consequences of a peak and concluded the following: “Timely, aggressive mitigation initiatives addressing both the supply and the demand sides of the issue will be required.” While EIA’s 2004 study of the timing of peak oil estimates that a peak might occur closer to 2050, EIA recognized that early preparation was important because of the long period required for widespread commercial production and adoption of new energy technologies. In its 2005 study of energy use in the military, the U.S. Army Corps of Engineers emphasized the need to develop alternative technologies and associated infrastructure before a peak and decline in oil production. In addition, in response to growing peak oil concerns, DOE asked the National Petroleum Council to study peak oil issues. The study is expected to be completed by June 2007. In light of these concerns, agency officials told us that it would be worthwhile to take additional steps to reduce the uncertainty about the timing of a peak in oil production. 
EIA believes it could reduce uncertainty surrounding the timing of peak oil production if it were to robustly extend the time horizon of its analysis and projection of global supply and demand for crude oil presented in its domestic and international energy outlooks. Currently, EIA’s projections extend only to 2030, and officials believe that consideration of peak oil would require a longer horizon. Also, the international outlook is fairly limited, in part because EIA no longer conducts its detailed Foreign Energy Supply Assessment Program. EIA is seeking to restart this effort in fiscal year 2007. In addition, USGS officials told us that better and more complete information about global oil resources could be used to improve estimates by EIA of the timing of a peak. USGS officials said their estimates of global oil resources could be improved or expanded in the following four ways: (1) Add information on certain regions—which USGS refers to as “frontier regions”—where little is known about oil resources. (2) Add information on nonconventional resources outside the United States; USGS believes these resources will play a large role in future oil supply, and, therefore, accurate estimates of these resources should be included in any attempts to determine the timing of a peak. (3) Calculate reserves growth by country; USGS considers this information important because of the political and investment conditions that differ by country and will affect future oil production and exploration. (4) Provide more complete information for all major oil-producing countries; USGS noted that its assessment has some “holes” where resources in major-producing countries have not yet been estimated completely. In addition to these actions to reduce the uncertainty about the timing of a peak, agency officials also told us that they could take additional steps to mitigate the consequences of a peak.
For example, DOE officials reported that they could expand their efforts to encourage the development of alternative fuels and advanced vehicle technologies. These efforts could be expanded by conducting more demonstrations of new technologies, facilitating greater information sharing among key industry players, and increasing cost share opportunities with industry for research and development. Agency officials told us such efforts can be essential to developing and encouraging the technologies. Although there are many options to reduce the uncertainty about the timing of a peak or to mitigate its potential consequences, according to DOE, there is no formal strategy to coordinate and prioritize federal programs and activities dealing with peak oil issues—either within DOE or between DOE and other key agencies. Conclusions The prospect of a peak in oil production presents problems of global proportion whose consequences will depend critically on our preparedness. The consequences would be most dire if a peak occurred soon, without warning, and were followed by a sharp decline in oil production because alternative energy sources, particularly for transportation, are not yet available in large quantities. Such a peak would require sharp reductions in oil consumption, and the competition for increasingly scarce energy would drive up prices, possibly to unprecedented levels, causing severe economic damage. While these consequences would be felt globally, the United States, as the largest consumer of oil and one of the nations most heavily dependent on oil for transportation, may be especially vulnerable among the industrialized nations of the world. In the longer term, there are many possible alternatives to using oil, including using biofuels and improving automotive fuel efficiency, but these alternatives will require large investments, and in some cases, major changes in infrastructure or break-through technological advances. 
In the past, the private sector has responded to higher oil prices by investing in alternatives, and it is doing so now. Investment, however, is determined largely by price expectations, so unless high oil prices are sustained, we cannot expect private investment in alternatives to continue at current levels. If a peak were anticipated, oil prices would rise, signaling industry to increase efforts to develop alternatives and consumers of energy to conserve and look for more energy-efficient products. Federal agencies have programs and activities that could be directed toward reducing uncertainty about the timing of a peak in oil production, and agency officials have stated the value in doing so. In addition, agency efforts to stimulate the development and adoption of alternatives to oil use could be increased if a peak in oil production were deemed imminent. While public and private responses to an anticipated peak could mitigate the consequences significantly, federal agencies currently have no coordinated or well-defined strategy either to reduce uncertainty about the timing of a peak or to mitigate its consequences. This lack of a strategy makes it difficult to gauge the appropriate level of effort or resources to commit to alternatives to oil and puts the nation unnecessarily at risk. Recommendation for Executive Action While uncertainty about the timing of peak oil production is inevitable, reducing that uncertainty could help energy users and suppliers, as well as government policymakers, to act in ways that would mitigate the potentially adverse consequences. Therefore, we recommend that the Secretary of Energy take the lead, in coordination with other relevant agencies, to prioritize federal agency efforts and establish a strategy for addressing peak oil issues. 
At a minimum, such a strategy should seek to do the following: Monitor global supply and demand of oil with the intent of reducing uncertainty surrounding estimates of the timing of peak oil production. This effort should include improving the information available to estimate the amount of oil, conventional and nonconventional, remaining in the world as well as the future production and consumption of this oil, while extending the time horizon of the government’s projections and analysis. Assess alternative technologies in light of predictions about the timing of peak oil production and periodically advise Congress on likely cost-effective areas where the government could assist the private sector with development and adoption of such technologies. Agency Comments and Our Evaluation We provided the Departments of Energy and the Interior with a draft of this report for their review and comment. DOE generally agreed with our message and recommendations and made several clarifying and technical comments, which we addressed in the body of the report as appropriate. Appendix V contains a reproduction of DOE’s letter and our detailed response to its comments. Specifically, DOE commented that the draft report did not make a distinction between a peak in conventional versus a peak in total (conventional and nonconventional) oil. We agree that we have not made this distinction, in part because the numerous studies of peak oil that we reviewed did not always make such a distinction. Furthermore, we do not believe a clear distinction between these two peak concepts is possible, in part because the definition of what is conventional oil versus nonconventional oil is not universally agreed on. However, the information we have reported regarding uncertainty about the timing of a peak applies to either peak oil concept.
DOE also commented that our use of certain technical phrases, including the distinction between heavy and extra-heavy oils and the distinction between oil consumption and demand, may be confusing to some readers, and we have made changes to the text to avoid such confusion. DOE commented that the draft report wrongly attributed environmental concerns to the use of enhanced oil recovery techniques, stating that the environmental community prefers such techniques on existing oil fields to exploration and development of new fields. We do not disagree that the environmental costs of these techniques may be smaller than for other activities and we have added text to express DOE’s views on this matter. However, our point in listing the cost and environmental challenges of enhanced oil recovery techniques is that increasing oil production in the future could be more costly and more environmentally damaging than production of conventional oil, using primary production methods. For this reason we disagree with DOE’s comment that we should remove the references to environmental challenges. Finally, DOE pointed out that the draft report was primarily focused on transportation technologies that are used to power autonomous vehicles, and they stated that a broader set of technologies that could displace oil should be considered. We agree with their characterization of the draft report. We chose transportation technologies because transportation accounts for such a large part of U.S. oil consumption and because DOE and other agencies have numerous programs and activities dealing with technologies to displace oil in the transportation sector. We also agree that a broader set of technologies should be considered in the long run as potential ways to mitigate the consequences of a peak in oil production. 
We encourage DOE and other agencies to fully explore the options to displace oil as they implement our recommendations to develop a strategy to reduce the uncertainty surrounding the timing of a peak in oil production and advise Congress on cost-effective ways to mitigate the consequences. Interior generally agreed with our message and recommendations in the draft report and made clarifying and technical comments, which we addressed in the body of the report as appropriate. Appendix VI contains a reproduction of Interior’s letter and our detailed response to its comments. Specifically, Interior emphasized that it has a major role to play in estimating global oil resources, and that this effort should be made in conjunction with the efforts of DOE. We agree and encourage DOE to work in conjunction with Interior and other key agencies in establishing a strategy to coordinate and prioritize federal agency efforts to reduce the uncertainty surrounding the timing of a peak and to advise Congress on how best to mitigate consequences. Interior also commented that mitigating the consequences of a peak is outside its purview. We agree, and, in this report, we focus on examples of work that Interior could undertake to assist in reducing the uncertainty surrounding the estimates of global oil resources. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees, other Members of Congress, the Secretaries of Energy and the Interior, and other interested parties. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staffs need further information, please contact me at 202-512-3841 or wellsj@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix VII. Appendix I: Scope and Methodology To examine estimates of when oil production could peak, we reviewed key peak oil studies conducted by government agencies and oil industry experts. We limited our review to those studies that were published and excluded white papers or unpublished research. For studies that we cited in this report, we reviewed their estimate of the timing, methodology, and assumptions about the resource base to ensure that we properly represented the validity and reliability of their results and conclusions. We also consulted with federal government agencies and oil companies, as well as academic and research organizations, to identify the uncertainties associated with the timing of a peak. As part of our examination of the timing of peak oil production, we assessed other factors that could affect oil exploration and production. Specifically, we examined the challenges facing future technologies that could enhance the global production of oil, including technologies for increasing recovery from conventional reserves as well as technologies for producing nonconventional oil. To examine these technologies, we met with experts at the Department of Energy’s (DOE) National Energy Technology Laboratory, and synthesized information provided by these experts. In addition, we examined political and investment risks associated with global oil exploration and production using Global Insight’s Global Risk Service. For each country, Global Insight’s country risk analyst estimates the subjective probability of 15 discrete events for political risk, and 22 discrete events for investment risk in the upstream oil and gas sectors. The probability is estimated for the next 5 years. Senior analysts then meet to review the scores to ensure cross-country consistency. 
The summary score is derived by weighting different groups of factors and then summing across the groups. For political risk, external and internal political risks are the two groups of factors. For investment risk in the oil and gas sectors, the factors are: investment/maintenance risk, input risk, production risk, sales risk, and revenue/repatriation risk. We compared political and investment risk with Oil and Gas Journal oil reserves estimates. Oil and Gas Journal reserves estimates are limited by the fact that they are not independently verified by the publishers and are based on surveys filled out by the countries. Because most countries do not reassess annually, some estimates in this survey do not change each year. We divided the countries into risk categories of low, medium, and high on the basis of quartiles and natural break points in the data. To obtain the percentage of reserves held by public companies and by national oil companies, we used the Petroleum Intelligence Weekly list of top 50 companies worldwide. The Petroleum Intelligence Weekly data are limited by reliance on company reports and other information sources provided by companies and the generation of estimates for those companies that do not release regular or complete reports. Estimates were created for most of the state-owned oil companies in figure 9 of this report. The limitations of these data reflect the uncertainty in estimates of the amount of oil in the ground, and our report does not rely on precise estimates of oil reserves but rather on the uncertainty about the amount of oil throughout the world and the challenges to exploration and production of oil. Therefore, we found these data to be sufficiently reliable for the purposes of our report. We also spoke with officials at the Securities and Exchange Commission and with DOE as well as experts in academia and industry. 
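The Global Insight summary-score derivation described above (weighting different groups of factors and then summing across the groups) can be sketched as follows. The group names, weights, and factor probabilities here are illustrative assumptions, not Global Insight's actual inputs, and averaging each group's factor scores before weighting is one plausible reading of the method, not a documented detail:

```python
def summary_score(group_scores, group_weights):
    """Weighted sum across factor groups. Each group's factor scores are
    averaged (an assumed aggregation), scaled by the group's weight, and
    the weighted group values are summed into a single score."""
    total = 0.0
    for group, factors in group_scores.items():
        group_avg = sum(factors) / len(factors)
        total += group_weights[group] * group_avg
    return total

# Hypothetical political-risk example: two groups of event probabilities
# (external and internal risks), weighted equally.
scores = {"external": [0.2, 0.4], "internal": [0.1, 0.3, 0.5]}
weights = {"external": 0.5, "internal": 0.5}
print(round(summary_score(scores, weights), 3))  # prints 0.3
```

The investment-risk score would follow the same shape with five groups (investment/maintenance, input, production, sales, and revenue/repatriation risk) and its own weights.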
In addition, we reviewed documents from the Department of the Interior and the International Energy Agency (IEA). To assess the potential for transportation technologies to mitigate the consequences of a peak and decline in oil production, we examined options to develop alternative fuels and technologies to reduce energy consumption in the transportation sector. In particular, we focused on technologies that would affect automobiles and light trucks. We consulted with experts to devise a list of key technologies in these areas and then reviewed DOE programs and activities related to developing these technologies. To assess alternative fuels and advanced vehicle technologies, we met with various experts at DOE, including representatives from the National Energy Technology Laboratory and the National Renewable Energy Laboratory, and reviewed information provided by officials from various offices at DOE. In addition, we spoke with officials from the U.S. Department of Agriculture (USDA) and the Department of Transportation regarding the development of these technologies in the United States. We did not attempt to comprehensively list all technologies or to conduct a governmentwide review of all programs, and we limited our scope to what government officials at key federal agencies know about the status of these technologies in the United States. In addition, we did not conduct a global assessment of transportation technologies. We reviewed numerous studies on the relationship between oil and the global economy and, in particular, on the experiences of past oil price shocks. To identify federal government activities that could address peak oil production issues, we spoke with officials at DOE and the United States Geological Survey (USGS), and gathered information on federal programs and policies that could affect uncertainty about the timing of peak oil production and the development of alternative transportation technologies. 
To gain further insights into the federal role and other issues surrounding peak oil production, we convened an expert panel in Washington, D.C., in conjunction with the National Research Council of the National Academy of Sciences. On May 5, 2006, these experts commented on the potential economic consequences of a transition away from conventional oil; factors that could affect the severity of the consequences; and what the federal role should be in preparing for or mitigating the consequences, among other things. We recorded and transcribed the meeting to ensure that we accurately captured the panel members’ statements. Appendix II: Key Peak Oil Studies This appendix lists the studies cited in figure 5 of this report. (a) L.F. Ivanhoe. “Updated Hubbert Curves Analyze World Oil Supply.” World Oil. Vol. 217 (November 1996): 91-94. (b) Albert A. Bartlett. “An Analysis of U.S. and World Oil Production Patterns Using Hubbert-Style Curves.” Mathematical Geology. Vol. 32, no. 1 (2000). (c) Kenneth S. Deffeyes. “World’s Oil Production Peak Reckoned in Near Future.” Oil and Gas Journal. November 11, 2002. (d) Volvo. Future Fuels for Commercial Vehicles. 2005. (e) A.M. Samsam Bakhtiari. “World Oil Production Capacity Model Suggests Output Peak by 2006-2007.” Oil and Gas Journal. April 26, 2004. (f) Richard C. Duncan. “Peak Oil Production and the Road to the Olduvai Gorge.” Pardee Keynote Symposia. Geological Society of America, Summit 2000. (g) David L. Greene, Janet L. Hopson, and Jai Li. Running Out Of and Into Oil: Analyzing Global Oil Depletion and Transition Through 2050. Oak Ridge National Laboratory, Department of Energy, October 2003. (h) C.J. Campbell. “Industry Urged to Watch for Regular Oil Production Peaks, Depletion Signals.” Oil and Gas Journal. July 14, 2003. (i) Merrill Lynch. Oil Supply Analysis. October 2005. (j) Ministère de l’Économie, des Finances et de l’Industrie. L’industrie pétrolière en 2004. 2005. (k) International Energy Agency.
World Energy Outlook 2004. Paris, France: 101-103. (l) Jean Laherrère. Future Oil Supplies. Seminar Center of Energy Conversion, Zurich: 2003. (m) Peter Gerling, Hilmar Remple, Ulrich Schwartz-Schampera, and Thomas Thielemann. Reserves, Resources and Availability of Energy Resources. Federal Institute for Geosciences and Natural Resources, Hanover, Germany: 2004. (n) John D. Edwards. “Crude Oil and Alternative Energy Production Forecasts for the Twenty-First Century: The End of the Hydrocarbon Era.” American Association of Petroleum Geologists Bulletin. Vol. 81, no. 8 (August 1997). (o) Cambridge Energy Research Associates, Inc. Worldwide Liquids Capacity Outlook to 2010, Tight Supply or Excess of Riches. May 2005. (p) John H. Wood, Gary R. Long, and David F. Morehouse. Long Term World Oil Supply Scenarios. Energy Information Administration: 2004. (q) Total. Sharing Our Energies: Corporate Social Responsibility Report 2004. (r) Shell International. Energy Needs, Choices and Possibilities: Scenarios to 2050. Global Business Environment: 2001. (s) Directorate-General for Research Energy. World Energy, Technology and Climate Policy Outlook: WETO 2030. European Commission, EUR 20366: 2003. (t) Exxon Mobil. The Outlook for Energy: A View to 2030. Corporate Planning. Washington, D.C.: November 2005. (u) Harry W. Parker. “Demand, Supply Will Determine When World Oil Output Peaks.” Oil and Gas Journal. February 25, 2002. (v) M.A. Adelman and Michael C. Lynch. “Fixed View of Resource Limits Creates Undue Pessimism.” Oil and Gas Journal. April 7, 1997. Appendix III: Key Technologies to Enhance the Supply of Oil This appendix contains brief profiles of technologies that could enhance the future supply of oil. This includes technologies for (1) increasing the rate of recovery from proven oil reserves using enhanced oil recovery; (2) producing oil from deepwater and ultra-deepwater reservoirs; and (3) producing nonconventional oil, such as oil sands and oil shale.
For each technology, we provide a short description, followed by selected information on the key costs, potential production, readiness, key challenges, and current federal involvement. Although some of these technologies are in production or development throughout the world, the following profiles primarily focus on the development of these technologies in the United States. Enhanced Oil Recovery Enhanced oil recovery (EOR) refers to the third stage of oil production, whereby sophisticated techniques are used to recover remaining oil from reservoirs that have otherwise been exhausted through primary and secondary recovery methods. During EOR, heat (such as steam), gases (such as carbon dioxide (CO2)), or chemicals are injected into the reservoir to improve fluid flow. Thermal and gas injection techniques account for almost all EOR activity in the United States, with CO2 EOR occurring in the Permian Basin in Texas. Most EOR efforts in the United States are currently managed by small, independent operators. Globally, EOR has been introduced in a number of countries, but North America is estimated to represent over half of all global EOR production. Costs associated with EOR production vary by reservoir, but reported marginal costs for oil recovery using EOR can range from $1.42 per barrel to $30 per barrel. Key capital costs include new drills, reworking of existing drills, reconfiguring gathering systems, and modification of the injection plant and other surface facilities. EOR currently contributes approximately 12 percent to the U.S. production of oil. EOR is projected to increase average recovery rates in reservoirs from 30 percent to 50 percent. Upper-end estimates of EOR’s future recovery potential in the United States include the following: 1.0 million barrels per day by 2015 and 2.5 million barrels per day by 2025. Thermal, gas, and chemical injection technologies are currently commercially available.
Key areas for further development exist, including sweep efficiency and water shut-off methods. Key challenges facing the development of EOR include the following: (1) a lack of industry-accepted, economical fluid injection systems; (2) a reliance on out-of-date practices and limited data due to lack of familiarity with state-of-the-art imaging and reluctance to risk investment in technologies; and (3) unwillingness on the part of some operators to assume the risks associated with EOR. DOE is involved in several industry consortia and individual programs designed to develop EOR, including conducting research and development and educating small producers about EOR. Deepwater and Ultra-Deepwater Drilling Deepwater drilling refers to offshore drilling for oil in depths of water between 1,000 and 5,000 feet, while ultra-deepwater drilling refers to offshore drilling in depths of water between 5,000 and 10,000 feet, according to DOE. The department reported that oil production at these depths involves a number of differences from shallow water drilling, such as drills that operate in extreme conditions, pipes that withstand deepwater ocean currents over long distances, and floating rigs as opposed to fixed rigs. The primary region for domestic deepwater drilling is the Gulf of Mexico, where deepwater drilling has become a major focus in recent years, particularly as near-shore oil production in shallow water has been declining. Globally, deepwater drilling occurs offshore in many locations, including Africa, Asia, and Latin America. Costs vary by rig type, but the three key components of cost for deepwater and ultra-deepwater drilling include the following: (1) the daily vessel rental rate, (2) materials, and (3) drilling services. The average market rate for Gulf of Mexico rigs can range from $210,000 per day to $300,000 per day. Overall, the projected marginal costs of deepwater drilling range from 3.0 to 4.5 times the cost of shallow water drilling.
Current deepwater production in the Gulf of Mexico is estimated at 1.3 million barrels per day. Deepwater production in the Gulf of Mexico is projected to exceed 2 million barrels per day in the next 10 years. Commercial deepwater drilling at depths of more than 1,000 feet in the Gulf of Mexico has been under way since the mid-1970s. Companies are currently exploring prospects for drilling in depths of more than 5,000 feet, and since 2001, 11 discoveries of ultra-deepwater wells at depths of more than 7,000 feet have been announced. Examples of some of the key challenges facing the development of deepwater and ultra-deepwater drilling include the following: (1) rig issues, such as finding ways to adapt and use lower-cost rigs and improving the ability to moor vessels in deepwater; (2) drilling equipment reliability at high pressures and temperatures; and (3) reducing the costs of drilling and producing at deepwater and ultra-deepwater depths. DOE is not directly involved in deepwater and ultra-deepwater drilling, but it does fund projects that could impact such drilling. The Energy Policy Act of 2005 authorized some funding for research and development of alternative oil and gas activities, including deepwater drilling. Oil Sands Oil sands are deposits of bitumen, a thick, sticky form of crude oil, which is so heavy and viscous that it will not flow unless heated or diluted with lighter hydrocarbons. It must be rigorously treated to convert it into an upgraded crude oil before it can be used by refineries to produce gasoline and diesel fuels. While conventional crude flows naturally or is pumped from the ground, oil sands must be mined or recovered “in-situ,” or in place. During oil sands mining, approximately 2 tons of oil sands must be dug up, moved, and processed to produce 1 barrel of oil. During in-situ recovery, heat, solvents, or gases are used to produce the oil from oil sands buried too deeply to mine. 
The largest deposit of oil sands globally is found in Alberta, Canada—accounting for at least 85 percent of the world’s oil sands reserves—although DOE reported that deposits of oil sands can also be found in the United States in Alabama, Alaska, California, Texas, and Utah. Commercial Canadian oil sands are being produced at $18 to $22 per barrel. Key infrastructure costs to support oil sands production in the United States would include construction of roads, pipelines, water, and energy production facilities. The 2005 production of Canadian oil sands yielded 1.6 million barrels of oil per day, and production is projected to grow to as much as 3.5 million barrels per day by 2030. Current U.S. production of oil sands yields less than 175,000 barrels per year, and future production of U.S. oil sands will depend on the industry’s investment decisions. Production of Canadian oil sands is currently in the commercial phase. U.S. oil sands production is only in the demonstration phase, and adapting Canadian technologies to the characteristics of U.S. oil sands will require time. Examples of key challenges facing the development of oil sands include the following: (1) evaluating and alleviating environmental impacts, particularly concerning water consumption; (2) accessing the federal lands on which most of the U.S. oil sands are located; (3) addressing the increased demand on roads, schools, and other infrastructure that would result from the need to construct production facilities in some remote areas of the west; and (4) addressing the increased need for natural gas, electricity, and water for production. There are currently no federal programs to develop the U.S. oil sands resource, although the Energy Policy Act of 2005 called for the establishment of a number of policies and actions to encourage the development of unconventional oils in the United States, including oil sands.
The Bureau of Land Management, which manages most of the federal lands where oil sands occur, maintains an oil sands leasing program. Heavy and Extra-Heavy Oils Heavy and extra-heavy oils are dense, viscous oils that generally require advanced production technologies, such as EOR, and substantial processing to be converted into petroleum products. Heavy and extra-heavy oils differ in their viscosities and other physical properties, but advanced recovery techniques like EOR are required for both types of oil. Heavy and extra-heavy oil reserves occur in many regions around the world, with the Orinoco Oil Belt in Eastern Venezuela comprising almost 90 percent of the total extra-heavy oil in the world. In the United States, heavy oil reserves are primarily found in Alaska, California, and Wyoming, and some commercial heavy oil production is occurring domestically. The cost of producing heavy and extra-heavy oil is greater than the cost of producing conventional oil, due to, among other things, higher drilling, refining, and transporting costs. The 2005 Venezuelan extra-heavy oil production was estimated to be 600,000 barrels of oil per day and is projected to at least sustain this production rate through 2030. In 2004, production of heavy oil in California was 474,000 barrels per day. In December 2005, heavy oil production in Alaska was 42,500 barrels per day, but some project Alaskan production to increase to 100,000 barrels per day in 5 years. Extra-heavy oil production is in the commercial phase in Venezuela. Heavy oil production technologies are currently commercially available and employed in the United States. Development of the heavy oil resource in the United States faces environmental, economic, technical, permitting, and access-to-skilled-labor challenges. There has not been a specific DOE program focused on heavy oil, as most of the research and development has been handled under the general research umbrella for EOR.
The Energy Policy Act of 2005 called for an update of the 1987 technical and economic assessment of heavy oil resources in the United States. Oil Shale Oil shale refers to sedimentary rock that contains solid bituminous materials that are released as petroleum-like liquids when the rock is heated. To obtain oil from oil shale, the shale must be heated and the resultant liquid must be captured, in a process referred to as “retorting.” Oil shale can be produced by mining followed by surface retorting or by in-situ retorting. The largest known oil shale deposits in the world are in the Green River Formation, which covers portions of Colorado, Utah, and Wyoming. Estimates of the oil resource in place range from 1.5 trillion to 1.8 trillion barrels, but not all of the resource is recoverable. In addition to the Green River Formation, Australia and Morocco are believed to have oil shale resources. At the present time, a RAND study reported there are economic and technical concerns associated with the development of oil shale in the United States, such that there is uncertainty regarding whether industry will ultimately invest in commercial development of the resource. On the basis of currently available information, oil shale cannot compete with conventional oil production. At the present time, and given current technologies and information, Shell Oil reports that it may be able to produce oil shale for $25 to $30 per barrel. Infrastructure costs for oil shale production include additional electricity, water, and transportation needs. A RAND study expects the cost of a dedicated power plant for the production of oil shale to exceed $1 billion. The Green River Basin is believed to have the potential to produce 3 million to 5 million barrels per day for hundreds of years. 
Given the current state of the technology and associated challenges, however, it is possible that 10 years from now, the oil shale resource could be producing 0.5 million to 1.0 million barrels per day. Oil shale is presently in the research and development stage. Shell Oil has the most advanced concept for oil shale, and it does not anticipate making a decision regarding whether to attempt commercialization until 2010. Examples of key challenges facing the development of oil shale include the following: (1) controlling and monitoring groundwater, (2) permitting and emissions concerns associated with new power generation facilities, (3) reducing overall operating costs, (4) water consumption, and (5) land disturbance and reclamation. The Energy Policy Act of 2005 called for the establishment of a number of policies and actions to encourage the development of unconventional oils in the United States, including oil shale. Appendix IV: Key Technologies to Displace Oil Consumption in the Transportation Sector This appendix contains brief profiles of key technologies that could displace U.S. oil consumption in the transportation sector. These technologies include alternative fuels to supplement or substitute for gasoline as well as advanced vehicle technologies to increase fuel efficiency. For each technology, on the basis of information provided by federal experts, we provide a short description, followed by selected information on the costs, potential production or displacement of oil, readiness, key challenges, and current federal involvement. Although some of these technologies are in production or development throughout the world, the following profiles primarily focus on the development of these technologies in the United States. Ethanol Ethanol is a grain alcohol-based alternative fuel made by fermenting plant sugars. 
It can be made from many agricultural products and food wastes if they contain sugar, starch, or cellulose, which can then be fermented and distilled into ethanol. Pure ethanol is rarely used for transportation; instead, it is usually mixed with gasoline. The most popular blend for light-duty vehicles is E85, which is 85 percent ethanol and 15 percent gasoline. The technology for producing ethanol, at least from certain feedstocks, is generally well established, and ethanol is currently produced in many countries around the world. In Brazil, the world’s largest producer, ethanol is produced from sugar cane. In the United States, more than 90 percent of ethanol is produced from corn, but efforts are under way to develop methods for producing ethanol from other biomass materials, including forest trimmings and agricultural residues (cellulosic ethanol). Currently, corn ethanol is primarily produced and used across the Midwest. The current cost of producing ethanol from corn is between $0.90 and $1.25 per gallon, depending on the plant size, transportation cost for the corn, and the type of fuel used to provide steam and other energy needs for the plant. The current cost of producing ethanol from biomass is not cost-competitive, but it is projected to drop significantly to about $1.07 per gallon by 2012. Key infrastructure costs associated with ethanol include retrofitting refueling stations to accommodate E85 (estimated at between $30,000 and $100,000) and constructing or modifying pipelines to transport ethanol. The 2005 production of ethanol in the United States was approximately 4 billion gallons. By 2014-15, corn ethanol production is expected to peak at approximately 9 billion to 18 billion gallons annually. Assuming success with cellulosic ethanol technologies, experts project cellulosic ethanol production levels of over 60 billion gallons by 2025-30. 
Corn ethanol is commercially produced today and continues to expand rapidly. Cellulosic ethanol is still in research and development, but it is projected to be demonstrated by 2010. For corn ethanol, key challenges include the necessary infrastructure changes to support ethanol distribution and the ability and willingness of consumers to adapt to ethanol. For cellulosic ethanol, several technical challenges still remain, including improving the enzymatic pretreatment, fermentation, and process integration. For cellulosic ethanol, economic challenges are high feedstock and production costs and the initial capital investment. The federal government is currently involved in numerous efforts to develop ethanol. Several federal agencies collaborate with industry to accelerate the technologies, reduce the cost of the technologies, and assist in developing the infrastructure. Biodiesel Biodiesel is a renewable fuel that has similar properties to petroleum diesel, but it can be produced from vegetable oils or animal fats. Like petroleum diesel, biodiesel operates in compression-ignition engines. Blends of up to 20 percent biodiesel (B20) can be used in nearly all diesel equipment and are compatible with most storage and distribution equipment. These low-level blends generally do not require any engine modifications. Higher blends and 100 percent biodiesel (B100) may be used in some engines with little or no modification, although transportation and storage of B100 require special management. Biodiesel is currently produced and used as a transportation fuel around the world. In the United States, the biodiesel industry is small but growing rapidly, and refueling stations with biodiesel can be found across the country. The current wholesale cost of pure biodiesel (B100) ranges from about $2.90 to $3.20 per gallon, although recent sales have been reported at $2.75 per gallon. To date, there has been limited evaluation of the projected infrastructure costs required for biodiesel. 
However, it is acknowledged that there are infrastructure costs associated with installation of manufacturing capacity, distribution, and blending of the biodiesel. In 2005, U.S. production of biodiesel was 75 million gallons, and DOE projects about 3.6 billion gallons per year by 2015. Under a more speculative scenario requiring major changes in land use and price supports, experts project it would be possible to produce 10 billion gallons of biodiesel per year. While biodiesel is commercially available, in many ways it is still in development and demonstration. Key areas of focus for development and demonstration include quality, warranty coverage, and impact of air pollutant emissions and compatibility with advanced control systems. Experts project that, with adequate resources, key remaining development issues could be resolved in the next 5 years. Economic challenges are significant for biodiesel: initial capital costs are high and the technical learning curve is steep, which deters many potential investors. In the absence of the $1 per gallon excise tax credit, biodiesel would not likely be cost-competitive. DOE is currently collaborating with the biodiesel and automobile industries in funding research and development efforts on biodiesel use, and USDA is conducting research on feedstocks. Coal and Biomass Gas-to-Liquids Gas-to-liquid (GTL) alternatives include the production of liquid fuels from a variety of feedstocks, via the Fischer-Tropsch process. In the Fischer-Tropsch process, feedstocks such as coal and biomass are converted into a syngas, before the gas is converted into a diesel-like fuel. The diesel-like fuel is low in toxicity and is virtually interchangeable with conventional diesel fuels. Although these technologies have been available in some form since the 1920s, and coal GTL was used heavily by the German military during World War II, GTL technologies are not widely used today. 
Currently, there is no commercial production of biomass GTL and the only commercial production of coal GTL occurs in South Africa, where the Sasol Corporation currently produces 150,000 barrels of fuel from coal per day. Extensive research and development, however, is currently under way to further develop this technology because automakers consider GTL fuels viable alternatives to oil without compromising fuel efficiency or requiring major infrastructure changes. Coal. Construction of a precommercial coal GTL plant is estimated at $1.7 billion, while construction of a commercial coal GTL plant is estimated at $3.5 billion. Biomass. Potential costs associated with biomass GTL are uncertain, given the early stage of the technology. Infrastructure costs associated with both biomass and coal GTLs are expected to be substantial, given the necessary modifications to pipelines, refueling centers, and storage facilities. Coal. Experts project that, at most, 80,000 barrels per day could be produced by 2015 and 1.7 million barrels per day by 2030. Biomass. Some experts project biomass GTL to have the potential to produce up to approximately 1.4 million barrels-of-oil-equivalent per day by 2030. Coal. Coal GTL is commercially available in South Africa, but the technology has not yet been commercially adopted in the United States. Biomass. Biomass GTL is currently in research and development, nearing the demonstration stage. Experts project that biomass GTL production could be demonstrated at the pilot scale by 2012. Coal. Key challenges facing coal GTL include technology integration, for example, integrating various processes with combined cycle turbines and CO2 capture. Natural Gas Vehicles Light-duty natural gas vehicles are estimated to cost an additional $1,000 per vehicle. Heavy-duty natural gas vehicles are estimated to cost an additional $10,000 to $30,000 per vehicle. 
Natural gas refueling stations are estimated to cost $100,000 to $1 million to build, while home fueling appliances cost approximately $2,000 per year. Currently, natural gas vehicles displace approximately 65 million gallons of diesel fuel per year. There is a potential niche market in heavy-duty vehicles for natural gas, which could displace 1,500 million gallons of gasoline per year. Natural gas vehicles are commercially available now, but their overall use is limited on a national scale and production has been declining in recent years. Heavy-duty natural gas vehicles are in the final stages of research and development. Examples of some key challenges facing the adoption of natural gas vehicles include the following: (1) the higher cost of high-pressure fuel tanks for consumers, (2) the costly upgrades to the existing refueling infrastructure, and (3) the availability and cost of natural gas. There is currently no federal funding or research focusing on natural gas vehicles. Advanced Vehicle Technologies Vehicle technologies encompass several different efforts to reduce vehicles’ oil consumption. Increasing the efficiency of the internal combustion engine, specifically advanced diesel engines, is considered a first step toward other engine technologies. For example, researchers are working to improve the emissions profile of advanced diesel engines through techniques such as low-temperature combustion, which would enable the engine to burn more cleanly so that emissions control at the tailpipe is less burdensome. Another set of technologies is hybrid electric and plug-in hybrid electric vehicles. Hybrid vehicles use a battery alongside the internal combustion engine to facilitate the capture of braking energy as well as to provide propulsion, while plug-in hybrids use a different battery and can be powered by battery alone for an extended period. 
Researchers are examining how to build longer-lasting and less-expensive batteries for hybrid and plug-in hybrid vehicles. Finally, a range of ongoing work is attempting to improve the efficiency of conventional vehicles. For example, lightweight materials have the potential to improve efficiency by reducing vehicle weight. Oil consumption can also be cut by reducing the rolling resistance of tires, increasing the efficiency of transmission technologies that move the energy from the engine to the tires, and improving how power is managed within the vehicle. Advanced diesel engines. DOE does not have information on the potential cost of this technology. Officials told us that this information is proprietary. Hybrid electric and plug-in hybrid vehicles. DOE officials told us that these vehicles can cost several thousand dollars more than conventional vehicles, although some of the incremental cost in hybrid vehicles currently on the market may be related to additional amenities, rather than the hybrid technology. Lightweight materials. DOE officials told us that lightweight carbon fiber materials currently cost approximately $12 to $15 per pound, and that their goal is to reduce this cost to $3 to $5 per pound. Information was not available on costs associated with other technologies to improve conventional vehicle efficiency. DOE estimates that the oil savings that would result from its vehicle technology efforts, including research on internal combustion engines, hybrids, and other vehicle efficiency measures, would be 20,000 barrels per day by 2010, rising to as much as 1.07 million barrels per day by 2025. DOE was not able to estimate oil savings for plug-in hybrids for fiscal year 2007. Advanced diesel engines. Low-temperature combustion that would reduce the emissions burden of diesel engines is under research and development. Hybrid electric and plug-in hybrid electric vehicles. 
Hybrid electric vehicles are currently on the market, although research continues on longer-lasting, less expensive batteries for both hybrid and plug-in hybrid electric vehicles. DOE’s goal is to have plug-in hybrids commercially available by 2014, although officials considered this an aggressive goal. Lightweight vehicle materials. Lightweight materials, such as aluminum, magnesium, and polymer composites, have made inroads into vehicle manufacturing. However, research and development are still under way on reducing the costs of these materials. By 2012, DOE aims to make the life-cycle costs of glass- and carbon-fiber-reinforced composites, along with several other lightweight materials, comparable to the costs for conventional steel. Advanced diesel engines. Reducing the emissions of nitrogen oxides and particulate matter to meet government requirements is a key challenge for the diesel engine combustion process. Emissions reduction will help make more efficient advanced diesel engines cost-competitive with gasoline engines because it will reduce the cost and energy consumption of tailpipe emissions treatment. Hybrid electric and plug-in hybrid electric vehicles. Battery cost is one of the central challenges for hybrid electric and plug-in hybrid electric vehicles. DOE officials told us that their goal is to reduce the cost of a battery pack for a hybrid electric vehicle from approximately $920 today to $500 by 2010. Technological challenges include extending the life of the battery pack to last the life of the car, and improving power electronics in the vehicle. Researchers are using lithium-ion and lithium polymer chemistries in the next generation of batteries, instead of the current nickel metal hydride. Officials told us that plug-in hybrids face infrastructure challenges, such as the capacity of household electric wiring systems to recharge a plug-in, and the capacity of the electricity grid if plug-in hybrids are widely adopted. 
Battery lifetime and cost are also challenges for plug-in hybrids. Lightweight vehicles. The cost of lightweight materials is the largest barrier to their widespread adoption. In addition, manufacturing capacity for lightweight materials resides primarily in the aerospace industry and is not available for producing automotive components. Advanced diesel engines. DOE currently conducts research into combustion technology. For example, federal funds are supporting fundamental research to understand low-temperature combustion technology, and the industry is attempting to establish the operating parameters of an engine that facilitate low-temperature combustion. Hybrid electric and plug-in hybrid electric vehicles. DOE’s FreedomCAR program sponsors research that supports the development of hybrid vehicles, specifically with respect to improving the performance, and reducing the cost, of electric batteries. Lightweight vehicles. DOE currently funds research and development on lightweight materials. Hydrogen Fuel Cell Vehicles A hydrogen fuel cell vehicle is powered by the electricity produced from an electrochemical reaction between hydrogen from a hydrogen-containing fuel and oxygen from the air. A fuel cell power system has many components, the key one being the fuel cell “stack,” which is many thin, flat cells layered together. Each cell produces energy and the output of all of the cells is used to power a vehicle. Currently, hydrogen fuel cell vehicles are still under development in the United States, and a number of challenges remain for them to become commercially viable. In the United States, government and industry are working on research and demonstration efforts to facilitate the development and commercialization of hydrogen fuel cell vehicles. 
Because hydrogen fuel cells are still in an early stage of development, the ultimate cost of hydrogen fuel cells is uncertain, but the goal is to make them competitive with gasoline-powered vehicles. A fuel cell stack currently costs about $35,000, and a hydrogen fuel cell vehicle about $100,000. An ongoing cost-share effort between the federal government and the industry is working toward price targets of $2 to $3 per gallon of gasoline equivalent for hydrogen at the refueling station. Federal experts project that hydrogen fuel cell vehicles could have the potential to displace 0.28 million barrels per day by 2025. Hydrogen fuel cell vehicle technologies are still in research, development, and demonstration. Federal experts project that the technology is not likely to be commercially viable before 2015. Key challenges facing the commercialization of hydrogen fuel cell vehicles include the following: (1) hydrogen storage; (2) cost and durability of the fuel cell; and (3) infrastructure costs for producing, distributing, and delivering hydrogen. The federal government conducts research with industry to improve the feasibility of the technology and reduce the costs. The government facilitates information-sharing among industry leaders by analyzing sensitive information on hydrogen fuel cell performance from leading automotive and oil companies. Appendix V: Comments from the Department of Energy The following are GAO’s comments on the Department of Energy’s letter dated February 7, 2007. GAO Comments 1. We agree that we have not defined a peak as either a peak in conventional or total oil—conventional plus nonconventional. In the course of our study, we found that experts conducting studies on the timing of peak oil also do not agree on a single peak concept. Different studies by these experts use different estimates for oil remaining and, as a result, implicitly have different concepts of a peak—a conventional versus a total oil peak. 
We have added language to the report to clarify this point. The lack of agreement on a peak concept mirrors the disagreement about the very definition of conventional oil versus nonconventional oil. The distinction regarding what portion of heavy oil is conventional is debated by experts. For example, USGS would consider the heavy oil produced in California as conventional oil, while IEA would not—the latter considers all heavy (and extra-heavy) oil to be nonconventional. For the purposes of this report, we have adopted IEA’s definition of nonconventional oil, which includes all heavy oil. 2. We agree that the use of heavy and extra-heavy oil may be confusing in sections of this report, and we have implemented some of the suggestions that DOE provided in their technical comments. 3. With regard to the inclusion of some ethanol in petroleum consumption as reported on page 1 of the report, we asked EIA staff to identify how much of such nonpetroleum liquids are in the figure. They told us that just under one-third of 1 percent of the world petroleum consumption data they report is comprised of ethanol, and we noted this in a footnote on page 1 of the report. We decided to continue to call it petroleum consumption, rather than “liquids consumption” as suggested by DOE because the former is what EIA calls it and because the nonpetroleum component is so small. 4. We agree that our language regarding the use of oil consumption and oil demand is confusing in some sections of the report. Overall, the report makes the point that, all other things equal, the faster the world consumes oil, the sooner we will use up the oil and reach a peak. The report also makes the point that future demand for oil, which depends on many factors, including world economic growth, will determine just how fast we consume oil. We have made some changes to the text to clarify when we are talking about consumption of oil and when we are talking about the demand for oil. 5. 
We do not disagree that the environmental costs of EOR are lower than for some of the other technologies examined, and we did not try to rank the environmental costs of all the alternatives we examined. However, we believe that these costs are relevant for assessing the potential impacts of producing more of our oil using such technologies. Therefore, we left that discussion in the report but added language attributing DOE’s views on this. 6. We agree with DOE’s assessment that there is a broader range of transportation technologies besides those used to power autonomous vehicles. We chose to focus on the technologies that experts currently believe have the most potential for reducing oil consumption in the light-duty vehicle sector, which accounts for 60 percent of the transportation sector’s consumption of petroleum-based energy. We encourage DOE and other agencies to consider the full range of oil-displacing technologies as they implement our recommendations to develop a strategy to reduce uncertainty about the timing of a peak in oil production and advise Congress on cost-effective ways to mitigate the consequences of such a peak. Appendix VI: Comments from the Department of the Interior The following are GAO’s comments on the Department of the Interior’s letter dated February 14, 2007. GAO Comments 1. We agree that DOE and Interior will both play a vital role in implementing our recommendation. We have made the appropriate wording change to the Highlights page of the report to clarify that our recommendation is that DOE work in conjunction with other key agencies to establish a strategy to coordinate and prioritize federal agency efforts to reduce the uncertainty surrounding the timing of a peak and to advise Congress on how best to mitigate consequences. 2. We agree that mitigating the consequences of a peak is outside the purview of Interior. 
The examples cited highlight the areas where Interior can help reduce the uncertainty surrounding the estimates of global resources. We have changed the wording accordingly to make this distinction clear. Appendix VII: GAO Contact and Staff Acknowledgments Staff Acknowledgments In addition to the contact person named above, Mark Gaffigan, Acting Director; Frank Rusco, Assistant Director; Godwin Agbara; Vipin Arora; Virginia Chanley; Mark Metcalfe; Cynthia Norris; Diahanna Post; Rebecca Sandulli; Carol H. Shulman; Barbara Timmerman; and Margit Willems-Whitaker made key contributions to this report.
The U.S. economy depends heavily on oil, particularly in the transportation sector. World oil production has been running at near capacity to meet demand, pushing prices upward. Concerns about meeting increasing demand with finite resources have renewed interest in an old question: How long can the oil supply expand before reaching a maximum level of production--a peak--from which it can only decline? GAO (1) examined when oil production could peak, (2) assessed the potential for transportation technologies to mitigate the consequences of a peak in oil production, and (3) examined federal agency efforts that could reduce uncertainty about the timing of a peak or mitigate the consequences. To address these objectives, GAO reviewed studies, convened an expert panel, and consulted agency officials. Most studies estimate that oil production will peak sometime between now and 2040. This range of estimates is wide because the timing of the peak depends on multiple, uncertain factors that will help determine how quickly the oil remaining in the ground is used, including the amount of oil still in the ground; how much of that oil can ultimately be produced given technological, cost, and environmental challenges as well as potentially unfavorable political and investment conditions in some countries where oil is located; and future global demand for oil. Demand for oil will, in turn, be influenced by global economic growth and may be affected by government policies on the environment and climate change and consumer choices about conservation. In the United States, alternative fuels and transportation technologies face challenges that could impede their ability to mitigate the consequences of a peak and decline in oil production, unless sufficient time and effort are brought to bear. 
For example, although corn ethanol production is technically feasible, it is more expensive to produce than gasoline and will require costly investments in infrastructure, such as pipelines and storage tanks, before it can become widely available as a primary fuel. Key alternative technologies currently supply the equivalent of only about 1 percent of U.S. consumption of petroleum products, and the Department of Energy (DOE) projects that even by 2015, they could displace only the equivalent of 4 percent of projected U.S. annual consumption. In such circumstances, an imminent peak and sharp decline in oil production could cause a worldwide recession. If the peak is delayed, however, these technologies have a greater potential to mitigate the consequences. DOE projects that the technologies could displace up to 34 percent of U.S. consumption in the 2025 through 2030 time frame, if the challenges are met. The level of effort dedicated to overcoming challenges will depend in part on sustained high oil prices to encourage sufficient investment in and demand for alternatives. Federal agency efforts that could reduce uncertainty about the timing of peak oil production or mitigate its consequences are spread across multiple agencies and are generally not focused explicitly on peak oil. Federally sponsored studies have expressed concern over the potential for a peak, and agency officials have identified actions that could be taken to address this issue. For example, DOE and United States Geological Survey officials said uncertainty about the peak's timing could be reduced through better information about worldwide demand and supply, and agency officials said they could step up efforts to promote alternative fuels and transportation technologies. However, there is no coordinated federal strategy for reducing uncertainty about the peak's timing or mitigating its consequences.
Background This section describes: (1) NNSA organization and management, (2) DOE and NNSA cost estimating requirements and guidance for projects, (3) DOE and NNSA cost estimating requirements and guidance for programs, (4) our 2009 cost estimating best practices, and (5) federal standards for internal controls. NNSA Organization and Management To fund NNSA’s projects and programs, the President requested approximately $11.7 billion for NNSA in the fiscal year 2015 budget submission to Congress, the majority of which is intended to fund program operations. Work activities on both projects and programs are largely carried out by management and operating (M&O) contractors at NNSA’s eight government-owned, contractor-operated sites—collectively referred to as the nuclear security enterprise. Among other things, these contractors operate and maintain the government-owned facilities and infrastructure deemed necessary to support the nuclear security enterprise and to support the capabilities to conduct scientific, technical, engineering, and production activities. As shown in figure 1, the nuclear security enterprise sites include national research and development laboratories, as well as production plants. Specifically, NNSA manages three national nuclear weapons design laboratories—Lawrence Livermore National Laboratory in California, Los Alamos National Laboratory in New Mexico, and Sandia National Laboratories in New Mexico and California. It also manages four nuclear weapons production plants—the Pantex Plant in Texas, the Y-12 National Security Complex in Tennessee, the Kansas City Plant in Missouri, and Tritium Operations at DOE’s Savannah River Site in South Carolina. NNSA also manages the Nevada National Security Site, formerly known as the Nevada Test Site. 
Various headquarters organizations within NNSA develop policies, and NNSA site offices, collocated with NNSA’s sites, conduct day-to-day oversight of the M&O contractors and evaluate the M&O contractors’ performance in carrying out the sites’ missions. The Secretary of Energy is authorized under the National Nuclear Security Administration Act to establish policy and provide direction to NNSA. The NNSA Administrator, however, is also vested with authority to establish NNSA-specific policies, unless disapproved by the Secretary of Energy. NNSA does this through the issuance of Policy Letters. According to NNSA officials responsible for project management, DOE directives and orders and NNSA Policy Letters, including business operating procedures, collectively establish requirements for the development of NNSA project and program cost estimates, and we refer to them as such throughout this document. DOE and NNSA Cost Estimating Requirements and Guidance for Projects NNSA is required to manage its projects, including the development of cost estimates, in accordance with DOE Order 413.3B. The purpose of the order is to provide program and project management direction for the acquisition of capital assets, and it includes requirements for developing project cost estimates. The order also provides management direction for NNSA and other DOE offices with the goal of delivering projects within the original performance baseline that are fully capable of meeting mission performance and other requirements such as environmental, safety, and health standards. The order defines five major milestones, or critical decision points, that span the life of a project. These critical decision (CD) points are: CD-0: Approve mission need. CD-1: Approve alternative selection and cost range. CD-2: Approve project performance baseline. CD-3: Approve start of construction. CD-4: Approve start of operations or project completion. 
The order specifies requirements that must be met, along with the documentation necessary, to move a project past each CD point. In addition, the order requires senior management to review the supporting documentation and decide whether to approve the project at each CD. DOE also provides suggested approaches for meeting the requirements contained in Order 413.3B through a series of guides, such as guides for project reviews and risk management. The order includes several cost estimating requirements for projects. Specifically, the order requires: that a cost estimate be provided at each CD point; that the degree of rigor and detail for a cost estimate be carefully defined, depending on the degree of confidence in project scale and scope that is reasonable to expect at that stage; that a cost estimate range for CD-0 and CD-1 explicitly note any relevant caveats concerning uncertainties; and that a program sponsor never be the sole cost estimator at any critical decision, given the inherent conflict of interest, with a second cost estimator coming from outside the line manager's chain of command. The order also establishes requirements for the review of cost estimates for certain projects at CD-0 through CD-3, depending on the estimated cost of the project. Table 1 shows these requirements and an NNSA requirement for review. As a supplement to Order 413.3B, DOE provides guidance for the development of cost estimates in its Cost Estimating Guide, which was issued in May 2011. According to the guide, its purpose is to provide uniform guidance and best practices that describe recommended methods and procedures that projects and programs could use for preparing cost estimates. The guide states that it is based on accepted industry practices and processes, including practices outlined in our 2009 Cost Estimating and Assessment Guide, to meet federal and DOE requirements.
DOE and NNSA Cost Estimating Requirements and Guidance for Programs DOE and NNSA cost estimating requirements and guidance for programs are primarily defined in four documents. DOE Order 130.1. This order requires all DOE and NNSA program budget requests be based on cost estimates that have been thoroughly reviewed and deemed reasonable prior to their inclusion in department budgets. DOE Cost Estimating Guide 413.3-21. DOE's cost estimating guide is primarily intended for use in managing the phases involved in the acquisition of a capital asset project. However, the guide states that it also could be used by DOE and NNSA programs in preparing cost estimates. NNSA Planning, Programming, Budgeting, and Evaluation (PPBE) Requirements. NNSA's PPBE process provides a framework for the agency to plan, prioritize, fund, and evaluate its program activities. This process includes a budget validation step to review the process contractors follow in developing program cost estimates for budgeting purposes. NNSA Phase 6.X Process. NNSA manages life extension programs, including the B61 LEP, according to a process called Phase 6.X that was developed by DOE in collaboration with the Department of Defense. The Phase 6.X process provides NNSA with a framework to conduct and manage LEP activities, including the development and documentation of a program cost estimate. GAO Cost Estimating Best Practices Drawing from federal cost estimating organizations and industry, our cost estimating guide provides best practices about the processes, procedures, and practices needed for ensuring development of high-quality—that is, reliable—cost estimates. A high-quality cost estimate helps ensure that management is given the information it needs to make informed decisions. The guide identifies the following four characteristics of a high-quality cost estimate.
Specifically, such an estimate is: Credible when it has been cross-checked with an ICE, the level of confidence associated with the point estimate has been identified through the use of risk and uncertainty analysis, and a sensitivity analysis has been conducted; Well-documented when supporting documentation is accompanied by a narrative explaining the process, sources, and methods used to create the estimate and contains the underlying data used to develop the estimate; Accurate when it is not overly conservative or too optimistic and is based on an assessment of the costs most likely to be incurred; and Comprehensive when it accounts for all possible costs associated with a project, contains a cost estimating structure in sufficient detail to ensure that costs are neither omitted nor double-counted, and reflects an estimating team whose composition is commensurate with the assignment. To develop a cost estimate that embodies these four characteristics, our cost estimating guide lays out 12 best practice steps. For example, one step—determining the estimating structure—includes the need to develop a "product-oriented" work breakdown structure (WBS) that reflects the requirements and basis for identifying resources and tasks necessary to accomplish the project's objectives. A product-oriented WBS is organized to reflect the cost, schedule, and technical performance of project components. Such a WBS allows a project to track costs by defined deliverables, promotes accountability by identifying work products that are independent of one another, and provides a basis for identifying resources and tasks for developing a cost estimate. Table 2 shows the four characteristics of a high-quality cost estimate with the corresponding steps.
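The product-oriented WBS described above can be illustrated with a small sketch. The element names and dollar figures below are invented for illustration and are not drawn from any NNSA project; the point is only that a tree of product elements lets costs roll up so that nothing is omitted or double-counted.

```python
# Minimal sketch of a product-oriented work breakdown structure (WBS).
# Each element is either a leaf carrying its own estimated cost or a
# parent whose cost is the roll-up of its children, so every dollar is
# counted exactly once. All names and costs here are hypothetical.

class WbsElement:
    def __init__(self, name, cost=0.0, children=None):
        self.name = name
        self.cost = cost          # leaf-level estimated cost (dollars)
        self.children = children or []

    def rollup(self):
        """Total cost of this element: its own cost plus all descendants'."""
        return self.cost + sum(child.rollup() for child in self.children)

# Hypothetical facility project, organized by product (not by function):
facility = WbsElement("Facility", children=[
    WbsElement("Structure", children=[
        WbsElement("Foundation", cost=2_000_000),
        WbsElement("Building shell", cost=5_500_000),
    ]),
    WbsElement("Process equipment", cost=4_000_000),
    WbsElement("Site infrastructure", cost=1_500_000),
])

print(facility.rollup())  # 13000000.0
```

Because each work product appears exactly once in the tree, a reviewer can trace any roll-up figure back to defined deliverables, which is what the best practice asks of the estimating structure.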
Federal Standards for Internal Controls In addition to the requirements for cost estimating established by DOE and NNSA, according to Standards for Internal Control in the Federal Government, federal agencies are to employ internal control activities, such as functional reviews by management of projects and programs to compare actual performance to planned or expected results throughout the organization and analyze significant differences. Such reviews, which may include reviews of program cost estimates, are to help ensure that management's directives are carried out and to determine if agencies are effectively and efficiently using resources. DOE and NNSA Cost Estimating Requirements and Guidance Generally Do Not Reflect Best Practices for Developing Cost Estimates DOE and NNSA cost estimating requirements and guidance for projects and programs generally do not reflect best practices for developing cost estimates. For projects, DOE requires only one best practice, and DOE's cost estimating guide neither fully reflects cost estimating best practices nor is mandatory. For programs, DOE does not require the use of any of the best practices to develop cost estimates, and although NNSA has taken steps to close gaps in the existing cost estimating framework for programs, such as creating a cost analysis office, these steps have not included a requirement to use best practices. Projects Are Not Required to Meet Most Best Practice Steps, and Guidance Describes Most Best Practice Steps but Is Not Mandatory DOE and NNSA cost estimating requirements and guidance for projects generally do not reflect best practices for developing cost estimates. DOE's 2010 project management order requires the use of only 1 of the 12 cost estimating best practice steps. Specifically, the order requires that an ICE be prepared at CD-2 and CD-3 for projects with an estimated cost of $100 million or greater.
The order requires the development of an ICE at CD-3 if warranted by risk and performance indicators or as designated by DOE or NNSA management. In addition, NNSA's 2014 requirement for an ICE or independent cost review could subject additional projects with an estimated cost of less than $100 million to an ICE, but this would depend on whether NNSA chooses to conduct an ICE rather than the less rigorous independent cost review. None of the other cost estimating requirements in the order, such as the need for a cost estimate at each CD point, ensures that project cost estimates will be prepared in accordance with cost estimating best practices. For example, the order does not require any of the other 11 best practice steps, such as conducting a risk and uncertainty analysis, identifying ground rules and assumptions, documenting the estimate, developing a point estimate, or determining the estimating structure. According to the DOE officials responsible for developing DOE's project management order, DOE has chosen not to require all cost estimating best practices in the order and instead includes suggested approaches for developing cost estimates in the DOE cost estimating guide that accompanies the order. However, because neither DOE nor NNSA requires the use of most cost estimating best practices for its projects, it is unlikely that NNSA and its contractors will consistently develop reliable cost estimates. DOE's 2011 cost estimating guide describes most best practices, but it is not mandatory and it is not referenced in the order. We found that the guide fully or substantially describes 10 of the 12 best practices. However, the guide only partially or minimally contains information about the other 2 best practice steps—determining the estimating structure and conducting a sensitivity analysis. For the estimating structure, the guide only partially describes how to develop this structure, which primarily involves establishing an adequate WBS.
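One of the best practice steps the order does not require, risk and uncertainty analysis, is commonly performed as a Monte Carlo simulation over uncertain cost elements. The sketch below uses invented cost elements and triangular distributions (not data from any DOE project) to show the kind of output such an analysis yields: a confidence level attached to the point estimate rather than a single number.

```python
# Hedged sketch of a cost risk and uncertainty analysis via Monte Carlo.
# The three cost elements and their low/most-likely/high bounds are
# invented for illustration; a real analysis would derive them from
# project data and historical records.
import random

random.seed(42)  # fixed seed so the sketch is repeatable

# (low, most likely, high) bounds in millions of dollars
elements = {
    "design":        (8, 10, 15),
    "construction":  (40, 50, 80),
    "commissioning": (5, 6, 10),
}

def simulate_total():
    # random.triangular takes (low, high, mode)
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in elements.values())

totals = sorted(simulate_total() for _ in range(10_000))
point_estimate = 10 + 50 + 6  # sum of most-likely values

# Probability that actual cost stays at or below the point estimate
confidence = sum(t <= point_estimate for t in totals) / len(totals)
p80 = totals[int(0.8 * len(totals))]  # 80th-percentile cost

print(f"point estimate: {point_estimate}M, confidence: {confidence:.0%}, P80: {p80:.1f}M")
```

With right-skewed inputs like these, the simulated confidence in the most-likely total is well under 50 percent, which is exactly the caveat the best practice asks estimators to surface.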
For example, we found that the guide describes the need for a WBS, but it does not describe the need for a product-oriented WBS and does not provide enough information on how to create a WBS. For the sensitivity analysis, the guide only minimally describes the information needed. For example, the guide suggests briefing management on the results of a sensitivity analysis, but it does not address the best practice steps for how to perform one. As a result, DOE and NNSA have not provided their contractors with all the detailed guidance needed to consistently develop reliable cost estimates. When we discussed our analysis of the guide and the shortcomings we found with the DOE officials responsible for developing it, they said that they would review our analysis and consider revisions to the guide as appropriate. Appendix I provides a comparison of the extent to which DOE's cost guide contains information on the 12 best practice steps. In addition to the shortcomings in the descriptions of two of the best practices, the use of the guide is not mandatory and, as a result, neither NNSA staff nor its contractors who develop cost estimates are required to use it. The guide states that it does not impose any new cost estimating requirements or establish departmental policy; it describes only recommended approaches for preparing cost estimates. Further, DOE's cost estimating guide is not referenced in the order. To supplement DOE's project management order, DOE has issued a series of guides that provide suggested approaches to meeting the requirements, some of which are specified in the order. There are a total of 18 separate guides, including guides that provide suggested approaches on topics such as systems engineering, earned value management systems, and project closeout.
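The best practice the guide covers only minimally, sensitivity analysis, amounts to varying one input at a time across its plausible range and recording the swing in the total estimate. A minimal sketch follows; the cost model and every input value in it are invented for illustration.

```python
# One-at-a-time sensitivity analysis sketch. The cost model and its
# baseline inputs are hypothetical; the technique is what matters: vary
# each input across its plausible range, hold the others fixed, and
# rank inputs by the swing they produce in the total estimate.

def total_cost(labor_hours, labor_rate, material_cost, overhead_pct):
    direct = labor_hours * labor_rate + material_cost
    return direct * (1 + overhead_pct)

baseline = {"labor_hours": 50_000, "labor_rate": 95.0,
            "material_cost": 2_000_000, "overhead_pct": 0.30}

# Plausible low/high range for each input (assumed for illustration)
ranges = {"labor_hours": (40_000, 70_000),
          "labor_rate": (85.0, 120.0),
          "material_cost": (1_500_000, 3_000_000),
          "overhead_pct": (0.25, 0.40)}

base = total_cost(**baseline)
swings = {}
for name, (lo, hi) in ranges.items():
    low = total_cost(**{**baseline, name: lo})
    high = total_cost(**{**baseline, name: hi})
    swings[name] = high - low

# Rank cost drivers by the size of the swing each one causes
for name, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: swing of ${swing:,.0f} around base ${base:,.0f}")
```

The ranked output is what a briefing to management would summarize: which inputs drive the estimate, and therefore where estimating effort and risk mitigation should concentrate.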
The order references some of these guides as appropriate so that users of the order are aware of the suggested approaches included in the guides. However, because the cost guide was issued in 2011, a year after DOE's latest version of the order was issued, the order contains no reference to this guide that a preparer of a cost estimate could use in developing an estimate, or that a reviewer could use in reviewing one, and users of the order may not be aware of the guide's availability and may not benefit from its usefulness. Programs Are Not Required to Meet Any Cost Estimating Best Practices DOE and NNSA programs are not required to meet any cost estimating best practices. NNSA officials explained that NNSA cost estimating practices for programs are limited, decentralized, and inconsistent, and are not governed by a cost estimating policy or a single set of NNSA requirements and guidance. According to these officials, each NNSA program office uses different practices and procedures for the development of cost estimates that are included in the NNSA annual budget. In addition, these officials also explained that, while there are no specific requirements on how program cost estimates should be developed or reviewed, all NNSA programs must follow DOE's budget formulation order and NNSA's PPBE process; however, the order does not require, and the PPBE process does not direct, the use of best practices in developing cost estimates. In 2012, we reported on NNSA's PPBE process and noted significant deficiencies in NNSA's implementation of that process. Specifically, we found that the process does not ensure the development of fully credible program cost estimates and that the review of the cost estimates supporting DOE's budget was not sufficiently thorough to ensure the credibility and reliability of NNSA's budget estimates. We also found that NNSA was not following DOE's budget formulation order, which requires that budget requests be based on cost estimates that have been thoroughly reviewed and deemed reasonable by the cognizant program organization. NNSA officials told us that they do not follow DOE's budget formulation order because it expired in 2003. Additionally, according to NNSA officials, NNSA's trust in its contractors minimizes the need for formal review of its budget estimates. In our 2012 report on NNSA's PPBE process, we recommended that, among other things, DOE update the departmental order for budget reviews, improve the formal process for reviewing budget estimates, and reinstitute an independent analytical capability to provide senior decision makers with independent program reviews. The agency agreed in principle with these recommendations, but it has not taken action to fully implement them. According to NNSA officials, as of August 2014, NNSA has suspended the budget review process, and several options for replacing this process are under review. In the absence of a DOE or NNSA requirement for programs to follow cost estimating best practices, we found that managers for the programs we reviewed—the B61 LEP and the Plutonium Disposition Program—used different processes in developing cost estimates for these programs. Specifically: B61 LEP. NNSA managers for the B61 LEP use the Phase 6.X Process which, among other things, directs the establishment of a cost and schedule baseline for the program. However, according to the NNSA B61 LEP project manager, because the Phase 6.X Process does not define how cost estimates should be developed, the B61 management team developed its own approach for the program's cost estimates using various sources, including direction under the Phase 6.X Process, as well as DOE's project management order and cost guide and our cost estimating guide.
This effort resulted in the management team producing a document that defines the strategy and provides guidance for completing the cost estimate for the B61 LEP. We reviewed this document and found that it does not stipulate that NNSA program managers or its contractors must follow any DOE or NNSA requirements or guidance when developing the cost estimate for the B61 LEP. Plutonium Disposition Program. In February 2014, we found that NNSA's life-cycle cost estimate for the Plutonium Disposition Program did not follow all key steps for developing high-quality cost estimates, in part because NNSA had no requirement to develop a life-cycle cost estimate. According to NNSA officials, DOE's project management order includes requirements for development of cost and schedule estimates for a capital asset project, such as the MOX facility or the Waste Solidification Building, but does not specify equivalent requirements for a program like Plutonium Disposition, which includes multiple projects as well as supporting activities. As a result, when developing the life-cycle cost estimate for the Plutonium Disposition Program, NNSA officials used an ad hoc approach to adapt the requirements for managing projects in DOE's project management order. NNSA officials also said that the April 2013 life-cycle cost estimate did not include all the steps of a high-quality, reliable estimate, in part because NNSA considered the estimate to be a draft and therefore had not fully implemented plans for developing it. We recommended in that report that DOE conduct a root cause analysis of the Plutonium Disposition Program's cost increases and ensure that future estimates of the program's life-cycle cost and schedule meet all best practices for reliable estimates.
We also recommended that DOE revise its project management order to require life-cycle cost estimates covering the full cost of programs that include both construction projects and other efforts and activities not related to construction. DOE concurred with our recommendations to analyze the program's cost increases and to revise and update the program's life-cycle cost estimate following best practices. DOE did not agree to update its project management order to require life-cycle cost estimates of programs. However, we continue to believe that this recommendation has merit and should be fully implemented. NNSA took some actions to improve independent review and analysis of program estimates. For example, in April 2013, NNSA created the Office of Program Review and Analysis. According to NNSA, this office is intended to improve NNSA's ability to plan and budget by providing senior leadership independent advice on resource allocations to ensure the best use of the agency's resources, including evaluating cost estimates of NNSA projects and programs. DOE and NNSA also began taking some actions to improve cost estimating. For example, DOE's Office of Acquisition and Project Management (OAPM) has embarked on a Cost Estimating and Scheduling Initiative to systematically improve DOE's policies and guidance. Some of the near-term efforts include development of a Life-Cycle Cost handbook, a Key Performance Parameters and Statement of Work handbook, and Analysis of Alternatives guidance. In addition, NNSA's Office of Defense Programs has taken steps to fill the existing gaps in the cost estimating framework for programs. For example, in 2011, Defense Programs established an Office of Cost Policy and Analysis to provide it with a cost analysis capability.
In March 2014, NNSA's Office of Program Integration issued a cost estimating improvement plan, which includes proposed guidance for conducting cost estimate briefings to the Office of Defense Programs Assistant Deputy Administrator, establishing a defense programs database, and implementing various process improvements to improve cost accounting and performance. While these efforts are intended to improve, among other things, cost estimating practices, none of them includes establishing requirements to follow cost estimating best practices. Project Reviews Indicate Cost Estimating Weaknesses and Program Reviews Are Not Required and Therefore the Extent of Weaknesses Is Largely Unknown NNSA's project and program reviews for fiscal years 2009 to 2014 identified cost estimating weaknesses that can be attributed to not following best practices. DOE and NNSA require reviews of projects, including reviews of cost estimates at various CD points and at the discretion of project managers; however, because DOE and NNSA do not require reviews of program cost estimates, the extent of weaknesses in program cost estimates is largely unknown. Project Reviews Identified Cost Estimating Weaknesses That Can Be Attributed to Not Following Best Practices Of the 50 NNSA project reviews conducted from February 2009 through February 2014, 39 identified a total of 113 cost estimating weaknesses. We determined that 71 of the 113 weaknesses—or about 63 percent—can be attributed to not following 4 of the 12 best practice steps. These four steps are: determining the estimating structure, identifying ground rules and assumptions, conducting risk and uncertainty analysis, and documenting the estimate.
For example, as part of this review, we examined NNSA's 2012 independent review of the Nuclear Material Safeguards and Security Upgrades Project at Los Alamos National Laboratory, which identified three weaknesses, one of which can be attributed to not following the best practice of updating the estimate to reflect actual costs and changes. In particular, the review found deficiencies in the estimate that could lead to an inaccurate cost and schedule estimate for the project. In another example, a 2012 review of NNSA's Waste Solidification Building identified four weaknesses, including not following the best practice of conducting risk and uncertainty analysis. Several other reviews that we examined found similar weaknesses in project cost estimating. For example, the 2012 Independent Project Review of the MOX Fuel Fabrication Facility at the Savannah River Site identified flaws in the project's cost estimate that include: (1) basic assumptions concerning project reserves and contingency cost; (2) data reliability of cost information considering the design maturity of the project; (3) the sensitivity analysis; and (4) the project risk analysis. The review also concluded that the most significant risk to the project's success was NNSA's underestimation of the cost to complete the facility. We also found that the remaining 42 weaknesses could be attributed to not following seven other best practice steps. These best practices are developing an estimating plan, defining program characteristics, developing a point estimate and comparing it to an ICE, conducting a sensitivity analysis, presenting the estimate to management for approval, and updating the estimate to reflect actual costs and changes. Appendix II lists the cost estimating weaknesses we identified for each of the project reviews we assessed and the best practice step to which each weakness can be attributed.
Program Reviews Are Limited to LEPs, So the Extent of Cost Estimating Weaknesses in NNSA Programs Is Largely Unknown DOE and NNSA do not require reviews of programs, including reviews of program cost estimates. As a result, reviews of cost estimates for programs are limited, and the extent to which program cost estimates have weaknesses is largely unknown. While program reviews are not required, we identified and analyzed three LEP program reviews from the beginning of fiscal year 2009 through February 2014, two of which were reviews of the B61 LEP. These reviews identified several weaknesses in the cost estimates for this program that can be attributed to not following three best practice steps: (1) obtaining data, (2) defining program characteristics, and (3) determining the estimating structure. For example, both reviews identified weaknesses in obtaining the data needed to compile the B61 cost estimate. In addition, one of the reviews noted that the estimate was not based on a sound program definition, while the other review stated that a standard WBS was not used to develop the B61 LEP cost estimate. Both reviews concluded that the B61 cost estimate is inaccurate, with one review noting that the program will cost approximately $3.6 billion more than NNSA's 2011 $6.5 billion Weapon Design and Cost Report estimate. While we did not identify any additional reviews of cost estimates for NNSA programs, we reported in February 2014 on the results of our review of the cost estimates for the Plutonium Disposition Program. We concluded that the life-cycle cost estimate for the overall program was not reliable and found that it did not fully reflect the characteristics of high-quality, reliable estimates as established by best practices.
Specifically, in developing its April 2013 draft life-cycle cost estimate of $24.2 billion for the Plutonium Disposition Program, we found that NNSA followed several cost estimating best practices, including obtaining the data, defining the estimate's purpose, and defining the program's characteristics. However, we also found that NNSA did not follow other key steps, such as conducting an independent cost estimate, and, as a result, the estimate was not reliable. We recommended, among other things, that NNSA revise and update the program's life-cycle estimate using cost estimating best practices, including conducting an independent cost estimate. NNSA agreed with this recommendation. Without a requirement for conducting program reviews, NNSA does not have the appropriate internal controls necessary for assessing program performance. According to the Standards for Internal Control in the Federal Government, federal agencies are to employ internal control activities, such as functional reviews by management of projects and programs to compare actual performance to planned or expected results throughout the organization and analyze significant differences. Such reviews are to help ensure that management's directives are carried out and to determine if agencies are effectively and efficiently using resources. Conclusions DOE and NNSA have taken action in recent years to improve cost estimating practices, including the corrective actions that were implemented as part of the department's 2008 root cause analysis, which DOE later reported as having effectively mitigated most of the root causes of the most significant contract and project management challenges. Nonetheless, NNSA continues to struggle with significant cost overruns on its major projects. Because DOE does not require the use of most of the 12 cost estimating best practices for its projects and programs, it is unlikely that NNSA and its contractors will consistently develop reliable cost estimates.
In addition, while DOE has developed a cost estimating guide, it does not fully describe all 12 cost estimating best practices. As a result, DOE and NNSA have not provided their contractors with all the detailed guidance needed to consistently develop reliable cost estimates. Also, because DOE Order 413.3B has not been updated since 2010, it omits reference to the supplemental cost estimating guide; users of the order may not be aware of the guide's availability and may not benefit from its usefulness. Finally, without a requirement for conducting reviews of programs with project-like characteristics, including reviews of the life-cycle cost estimates of these programs, neither DOE nor NNSA has the appropriate internal controls to assess the quality of program performance over time. Recommendations for Executive Action To enhance NNSA's ability to develop reliable cost estimates for its projects and for its programs that have project-like characteristics, we recommend the Secretary of Energy take the following five actions: Revise DOE's project management order to require that DOE, NNSA, and its contractors develop cost estimates in accordance with the 12 cost estimating best practices. Revise DOE's cost estimating guide so that it fully reflects the 12 cost estimating best practices. Revise DOE's project management order to include references to the DOE cost estimating guide, where applicable. Revise DOE directives that apply to programs to require that DOE, NNSA, and its contractors develop cost estimates in accordance with the 12 cost estimating best practices, including developing life-cycle cost estimates for programs. Revise DOE requirements and guidance that apply to programs to ensure that program reviews are conducted periodically, including reviews of the life-cycle cost estimates for programs. Agency Comments We provided DOE with a draft of this report for its review and comment.
In its written comments, reproduced in appendix III, DOE agreed with the report’s recommendations. In regard to the report’s first recommendation—revise DOE’s project management order to require that DOE, NNSA, and its contractors develop cost estimates in accordance with the 12 cost estimating best practices—DOE stated in its written comments that DOE’s order for project management (DOE O 413.3B) will be assessed for revision following the issuance of the revision to DOE-STD-1189, Integration of Safety into the Design Process, currently scheduled for November 2016. DOE stated that it has established a log of issues to be addressed when revising Order 413.3B, which includes this recommendation. DOE also stated that this recommendation will be fully considered during this revision process. In the interim, DOE stated that its cost estimating guide incorporates the 12 cost estimating best practices, albeit not in the same format as our guidance. DOE also stated that it has begun internal efforts for the publication of a Departmental Cost Estimating and Schedule Policy and that the policy will complement existing cost estimating guidance and will incorporate our cost estimating guidance. DOE stated that the time frame for this policy to be issued is still to be determined. Further, DOE stated that DOE’s Office of Acquisition and Project Management will continue to incorporate the 12 cost estimating best practices in its independent cost estimating activities. Additionally, DOE explained that the curriculum of the Project Management Career Development Program requires cost and schedule estimating courses and that this training incorporates the 12 cost estimating best practices. We are pleased that DOE agreed with our recommendation and that it has interim measures to improve cost estimating before it plans to implement the recommendation. 
However, while these may be useful interim measures, the unspecified, open-ended date for updating the project management order that contains requirements (i.e., sometime after November 2016) and the statement that the cost estimating best practices will be considered, not incorporated, may indicate DOE’s lack of urgency or concern about the need to implement this recommendation. In regard to the report’s second recommendation—revise DOE’s cost estimating guide so that it fully reflects the 12 cost estimating best practices—DOE stated in its written comments that it will begin updating the cost estimating guide in the first quarter of fiscal year 2015. The update efforts are to elevate the significance of the 12 cost estimating best practices within the content of the guide, although DOE did not specify when it might complete this update. We are pleased that DOE agreed with our recommendation and that it plans to take action during the current quarter of this fiscal year. However, we are concerned that in its written comments DOE did not specify whether it plans to revise the guide to better include the two areas in the guide we found were deficient and that the lack of a completion date may indicate DOE’s lack of urgency or concern about the need to implement this recommendation. In regard to the report’s third recommendation—revise DOE’s project management order to include references to the DOE cost estimating guide, where applicable—DOE stated in its written comments that its project management order will be assessed for revision following the issuance of the revision to DOE-STD-1189, currently scheduled for November 2016, and that this recommendation will be fully considered during the order revision process. In the interim, DOE stated that its directives website points to the guides that accompany Order 413.3B as the best practices guidance for implementation of Order 413.3B requirements. 
We are pleased that DOE agreed with our recommendation and that its website includes links to the guides that accompany the project management order. However, our report noted the deficiencies associated with the existing cost estimating guide, and DOE did not specify plans for addressing those deficiencies. In addition, we are concerned that the unspecified, open-ended date for updating the project management order that contains requirements (i.e., sometime after November 2016) and the statement that referencing the cost estimating guide will be considered, not completed, may indicate DOE’s lack of urgency or concern about the need to implement this recommendation. In regard to the report’s fourth recommendation—revise DOE directives that apply to programs to require that DOE and NNSA and its contractors develop cost estimates in accordance with the 12 cost estimating best practices, including developing lifecycle cost estimates for programs— DOE stated in its written comments that it is in the process of substantially revising the existing 1995 DOE Order 130.1, Budget Formulation, and that as part of this effort, DOE will assess the requirement for program cost estimates and will revise the order to provide more specificity on the cost estimating requirements. Further, DOE stated that the revised order will (1) define which DOE and NNSA program budget requests require cost estimates and (2) clarify that cost estimates for program budget submissions shall be conducted in accordance with the DOE cost estimating guide (or its successor policy). DOE estimated that it would complete this process in September 2016. We are pleased that DOE agreed with our recommendation. 
In regard to the report’s fifth recommendation—revise DOE requirements and guidance that apply to programs to ensure that program reviews are conducted periodically, including reviews of the lifecycle cost estimates for programs—DOE again stated that it is in the process of substantially revising the existing DOE Order 130.1, and that as part of this effort, the department will assess requirements for program reviews and the linkage between program reviews and the budget formulation process. Further, DOE stated that the revised order will clarify requirements for program reviews and specify how such reviews—to include lifecycle cost estimates—can best support the budget formulation process. NNSA is to review these requirements and adjust NNSA-specific policies and guidance as appropriate. DOE estimated that it would complete this process in September 2016. We are pleased that DOE agreed with our recommendation. We are sending copies of this report to the appropriate congressional committees; the Secretary of Energy; the Director, Office of Management and Budget; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or trimbled@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

Appendix I: Comparison of DOE Cost Guide to GAO’s Best Practices for Cost Estimating

12 key steps/best practices:
- Define estimate’s purpose, scope, and schedule.
- Determine the estimating approach—work breakdown structure (WBS).
- Identify the ground rules and assumptions.
- Develop the point estimate and compare it to an independent cost estimate.
- Update the estimate to reflect actual costs and changes.

Appendix II: Cost Estimating Weaknesses Identified in All Completed NNSA Project Reviews, February 2009 - February 2014, and GAO’s Best Practices for Cost Estimating

GAO 12 steps (see table legend)

Legend: GAO 12 Best Practice Cost Estimating Steps (summary of associated tasks)
1. Determine purpose, scope, and required level of detail of the estimate, as well as who will receive the estimate.
2. Determine cost estimating team, schedule, and outline tasks in writing.
3. Identify technical characteristics of planned investment, quality of data needed, and plan for documenting and updating information.
4. Define the elements of the cost estimate, including best method for estimating costs and potential cross-checks, and standardized structure.
5. Define what the estimate will include and exclude, key assumptions (such as life-cycle of investment), schedule or budget constraints, and other elements that affect the estimate. Assumptions should be measurable, specific, consistent with historical data, and based on expert and technical judgment.
6. Create data collection plan, identify sources, collect valid and useful data, analyze data for cost drivers and other factors, and assess data for reliability and accuracy.
7. Develop cost estimation model and calculate the point estimate, in constant dollars for investments that occur over multiple years; perform cross-checks and validation; compare the estimate to an independent cost estimate and previous estimates; and update as more data are available.
8. Test the sensitivity of cost elements to changes in input values, ground rules, and assumptions.
9. Determine which cost elements pose technical, cost, or schedule risks; analyze those risks; and recommend a plan to track and mitigate risks. A range of potential costs, based on risks and uncertainties, should be identified around a point estimate.
10. Document all steps used to develop the estimate so it can be recreated, describing methodology, data, assumptions, and results of risk, uncertainty, and sensitivity analysis.
11. Develop briefing on results, including information on estimation methods and risks, making content clear and complete so those unfamiliar with the analysis can comprehend the estimate and have confidence in it.
12. As technical aspects of the project change, the complete cost estimate should be regularly updated and, as the project moves forward, cost and schedule estimates should be tracked.

Appendix III: Comments from the Department of Energy

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the individual named above, Daniel Feehan, Assistant Director; Antoinette Capaccio; Jennifer Echard; Mark Braza; Mike Meleady; Alison O’Neill; and Peter Ruedel made key contributions to this report.
NNSA is responsible for the nation's nuclear security programs. These programs often include the design and construction of large projects to meet program needs. NNSA has a history of struggling to complete these and other projects and programs within cost estimates. Senate Report 112-73 mandated that GAO report on NNSA's cost estimating practices. This report examines: (1) the extent to which DOE and NNSA cost estimating requirements and guidance for projects and programs reflect best practices and (2) the extent to which recent NNSA project and program reviews identified cost estimating weaknesses, and the extent to which the weaknesses can be attributed to not following best practices. GAO compared DOE and NNSA cost estimating practices to best practices, analyzed NNSA project and program reviews, and examined two programs selected from among the largest and most complex NNSA programs. GAO also interviewed DOE and NNSA officials about requirements and guidance for cost estimates. The Department of Energy's (DOE) and its National Nuclear Security Administration's (NNSA) cost estimating requirements and guidance for projects and programs do not fully reflect best practices for developing cost estimates. In regard to cost estimating requirements for projects, DOE's 2010 project management order requires 1 of the 12 cost estimating best practices—conducting an independent cost estimate—for larger projects at certain stages of development. In contrast, DOE's 2011 cost estimating guide describes recommended approaches for using 10 of the 12 best practices and partially addresses the other 2. Furthermore, because DOE's cost estimating guide was issued in 2011—after DOE's 2010 project order was issued—it is not referenced in the order. As a result, users of the order may not be aware of the guide's availability and may not benefit from its guidance. 
In addition, although NNSA programs are required to follow DOE's budget formulation order and NNSA's budget process, both of which require the development of cost estimates, neither the order nor the process requires the use of best practices in developing the estimates. In February 2014, for example, GAO found that NNSA's lifecycle cost estimate for the Plutonium Disposition Program did not follow all key steps for developing high-quality cost estimates, in part because the agency did not have a requirement to develop a lifecycle cost estimate. In the absence of a requirement for using best practices, it is unlikely that DOE, NNSA, and their contractors will consistently develop reliable cost estimates. NNSA's project and program reviews issued from fiscal year 2009 through March 2014 identified cost estimating weaknesses that can be attributed to not following best practices. DOE and NNSA require independent project reviews, including reviews of cost estimates at certain stages of development and at the discretion of project managers. Of the 50 reviews GAO analyzed, 39 identified a total of 113 cost estimating weaknesses. GAO determined that 71 of the 113 weaknesses—or about 63 percent—can be attributed to not following four best practices: (1) determining the estimating structure, (2) identifying ground rules and assumptions, (3) conducting risk and uncertainty analysis, and (4) documenting the estimate. Neither DOE nor NNSA, however, requires reviews of program cost estimates. Of the three program reviews conducted during fiscal years 2009 to 2013, two were of the B61 Life Extension Program, which is to extend the operational life of this nuclear weapon. Both reviews identified weaknesses in the cost estimates that can be attributed to not following three best practices: (1) determining the estimating structure, (2) defining program characteristics, and (3) obtaining data. 
In addition, a February 2014 GAO report on NNSA's program to dispose of weapons-grade plutonium found that NNSA did not follow several cost estimating best practices, such as conducting an independent cost estimate, and, as a result, the program cost estimate was not reliable. While the program reviews and GAO's February 2014 report indicate weaknesses in a few program cost estimates, the extent of program cost estimate weaknesses is largely unknown because neither DOE nor NNSA requires reviews of program cost estimates. Without a requirement for conducting independent program reviews, NNSA does not have the internal control necessary for assessing program performance over time.
Background

Technical testing of Army aviation systems, such as helicopters, and related support equipment is the responsibility of the Test and Evaluation Command (TECOM), under the U.S. Army Materiel Command. Since 1990, TECOM has maintained three principal aviation testing sites. The Aviation Technical Test Center (ATTC) at Fort Rucker is the primary site for testing aviation systems and support equipment. The Airworthiness Qualification Test Directorate at Edwards Air Force Base is the primary site for airworthiness qualification testing. Yuma Proving Ground tests aircraft armaments and sensor systems. The principal customers for TECOM’s aviation testing are the aviation program managers who purchase this equipment for the Army and are currently headquartered at the Aviation and Troop Command (ATCOM), St. Louis, Missouri. Significant reductions in funding, personnel, and test workloads in recent years, as well as projections for continued reductions as part of overall defense downsizing, drove TECOM in 1992 to examine options for reducing its testing infrastructure. Internal TECOM studies resulted in a recommendation ultimately endorsed by the Army’s Vice Chief of Staff in late 1993 to consolidate all three Army aviation technical testing organizations at Yuma Proving Ground. TECOM’s proposal was reinforced by the results of a separate study sponsored by ATCOM and completed in December 1993. The 1995 base realignment and closure (BRAC) process also looked at testing facilities from a Defense-wide perspective. That process identified options for consolidating Army testing at a single site as well as an option for eliminating greater excess testing capacity by consolidating aviation testing across service lines. Consolidation or cross-servicing of common support functions such as test and evaluation activities proved very contentious among the services in BRAC 1995 and produced limited results. 
None of the aviation testing options were adopted as part of the BRAC process. However, Army BRAC officials indicated to our staff in January 1995 that a consolidation of its aviation testing was planned outside the BRAC process. While awaiting formal approval of the single-site consolidation at Yuma, in the spring of 1995, the Army Secretary’s staff updated TECOM’s cost and savings analyses of two options: the single-site at Yuma and a dual-site at Fort Rucker and Yuma. On June 29, 1995, the Secretary tentatively approved the dual-site option because the analyses showed that greater short-term savings could be achieved with that option.

Adjustments to Army Data Needed to Fully Account for Projected Consolidation Savings

Because TECOM analysts considered only the impacts on TECOM’s budget, they did not fully account for projected savings in operating costs, particularly in the personnel area. Also, some adjustments were needed in the methodology for and calculations of recurring costs and savings involving base operations, real property maintenance, and aircraft maintenance to obtain a more complete picture of relative costs and savings among the competing locations and the time required to offset implementation costs. (See app. II for a discussion of adjustments.) Table 1 shows the Army’s projected one-time implementation costs; annual recurring savings; and the time it takes, from the year consolidation begins, for savings to begin to exceed costs from each consolidation option. Table 2 shows the same information based on our adjustments to the Army’s data. As table 2 shows, the adjusted data indicate higher annual recurring operating savings from each option. Recurring savings remain the greatest from the Yuma single-site option, but the offsetting of implementation costs (including military construction) still takes longer with this option than with the other two options. 
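The break-even comparison underlying tables 1 and 2 is simple arithmetic: an option "pays back" in the first year that cumulative recurring savings exceed its one-time implementation costs. A minimal sketch of that logic, using invented dollar figures rather than the Army's actual data:

```python
# Hypothetical illustration of the tables' break-even logic: the payback
# year is the first year in which cumulative recurring savings exceed
# one-time implementation costs. All dollar figures below are invented.

def payback_year(one_time_cost: float, annual_savings: float) -> int:
    """Return the first year (1-indexed) in which cumulative savings
    exceed the one-time implementation cost."""
    cumulative = 0.0
    year = 0
    while cumulative <= one_time_cost:
        year += 1
        cumulative += annual_savings
    return year

# Hypothetical options: (one-time cost, annual recurring savings), $ millions.
# A single-site option with heavy construction costs can have higher recurring
# savings yet take longer to offset its implementation costs.
options = {
    "single-site": (30.0, 6.0),
    "dual-site": (12.0, 4.0),
}

for name, (cost, savings) in options.items():
    print(name, payback_year(cost, savings))
```

Under these invented figures, the single-site option breaks even in year 6 and the dual-site option in year 4, mirroring the report's observation that the Yuma single-site option takes longer to offset implementation costs despite its greater recurring savings.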
Long-Term Savings

Like the Army, we projected savings from the consolidation options over a 20-year period, following the approach used by DOD in its base realignment and closure process. The Army discounted long-term savings at a 2.75 percent rate—the same rate it used in conjunction with its 1995 base realignment and closure analysis. However, as noted in our report on the 1995 BRAC process, the Office of Management and Budget’s currently approved discount rate of 4.85 percent would have been more appropriate for the 1995 BRAC process. Table 3 shows the projected net present values of the savings for each option using the Army’s cost data and the 2.75 percent discount rate. Table 4 shows our adjustments to the Army’s data, including use of the 4.85 percent discount rate. As tables 3 and 4 show, the Fort Rucker/Yuma dual-site option offers the Army the greatest short-term savings, which the Army considers important in today’s constrained budget environment. The adjusted data show that both the Fort Rucker/Yuma dual-site and Yuma single-site options have long-term savings that are much greater than those for the Edwards/Yuma dual-site option. The 20-year cost savings for the Yuma single-site option are at least comparable to, and possibly greater than, the Fort Rucker/Yuma dual-site option. Under the least savings case shown, for those two options, there would be about a $1 million difference in projected long-term savings between the two options—a difference that could be eliminated with a reduction of about $100,000 in annual operating costs for the Yuma single-site option. The costs and savings from the Yuma single-site option are based on the premise that required military construction would be completed before the consolidation. Completing the military construction after the consolidation would result in increased operating costs and reduced savings. 
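The effect of the discount rate can be sketched directly: a higher rate shrinks the present value of future savings, which is why applying the 4.85 percent rate produces smaller 20-year figures than the 2.75 percent rate. The annual savings amount below is hypothetical, not drawn from the report's tables:

```python
# Sketch of a 20-year net present value comparison under the two discount
# rates the report discusses. The savings stream itself is hypothetical.

def npv_of_savings(annual_savings: float, rate: float, years: int = 20) -> float:
    """Present value of a constant annual savings stream, discounted
    at the end of each year."""
    return sum(annual_savings / (1 + rate) ** t for t in range(1, years + 1))

annual = 5.0  # hypothetical annual recurring savings, in millions of dollars
npv_army_rate = npv_of_savings(annual, 0.0275)  # Army's 2.75 percent rate
npv_omb_rate = npv_of_savings(annual, 0.0485)   # OMB-approved 4.85 percent rate

# The higher discount rate always yields the smaller present value.
assert npv_omb_rate < npv_army_rate
```

With these assumed figures, the 4.85 percent rate discounts the same 20-year savings stream to roughly five-sixths of its value at 2.75 percent, illustrating why the choice of rate matters for comparing long-term savings across options.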
Other Cost and Savings Issues

Neither we nor the Army included several factors in cost and savings calculations because they were not easily quantified and because no consensus could be reached on what those costs and savings should be. According to officials at Edwards, movement of the base’s testing operation to Fort Rucker could result in significant recurring costs to transport test aircraft and personnel to distant ranges, such as Yuma, to complete necessary testing operations. An Army aviation official at Edwards estimated these costs could be about $400,000 per year, based on prior tests conducted at Edwards. Another estimate from a Yuma official, based on an evaluation of future testing of the new Comanche aircraft, suggested that additional transportation costs could run as high as $1 million annually. Fort Rucker officials, while acknowledging that transportation costs could increase, believe that the actual costs would not be as high as projected. A number of factors made it difficult for us to identify the most likely costs. First, prior tests are not necessarily indicative of future testing requirements. Second, Army testers already use multiple sites around the United States for various tests—sites other than the three discussed in this report. Third, Fort Rucker officials indicated they would likely seek testing sites closer to Fort Rucker if the consolidation plan is enacted. Thus, while we believe that additional transportation costs are likely with the Fort Rucker/Yuma option, it is not clear what those costs would be. Officials at Fort Rucker noted that the installation has a contractor-operated mini-depot repair capability to maintain the large number of aircraft associated with its aviation school. Documentation showed that the aviation test center can use this capability, particularly the electronic equipment test facility, to achieve significant savings in time and dollars over the costs of repair at a regular depot facility. 
Center officials estimated 1-year savings of about $1.9 million through the use of this contract. Army testing officials at Yuma and Edwards agreed that this mini-depot does provide an advantage to aviation testing at Fort Rucker. However, our other reviews of depot operations have shown that the services have excess depot capacity, which increases customer costs. At the same time, to the extent that the mini-depot’s practices at Fort Rucker hold customer costs below those of a regular depot, the question arises as to why depot maintenance practices should not be modified more broadly so that such savings would not be limited to Fort Rucker. These variables make it unclear what maintenance savings should be attributed to any testing consolidations involving Fort Rucker. Officials at each of the locations identified additional benefits and synergism from being located with other activities at their respective locations. However, such benefits, while undoubtedly real, were more qualitative in nature and not easily quantified from a cost standpoint or had cost advantages insufficient to affect the relative savings associated with a particular consolidation option. Additionally, other issues such as air space, safety, and weather were raised by officials at selected locations to suggest the relative merits of one location over another. These also were more qualitative in nature and not easily quantified from a cost standpoint. While various Army officials and Army testing consolidation studies point to Yuma Proving Ground as providing the optimum testing environment for the Army, we found no indication that testing could not be conducted safely at the other locations. 
Excess Capacity in DOD Testing Infrastructure Signals Need for Consolidations

Various studies in recent years, including DOD’s 1995 base realignment and closure review, have concluded there is excess aviation test and evaluation capacity across DOD and have noted the need for reductions in keeping with overall defense downsizing. Likewise, Congress has urged DOD to downsize and consolidate testing activities. However, the services have been unable to agree on how best to achieve such consolidations. During the 1995 BRAC process, a cross-service review group, comprising representatives of each of the services and the Office of the Secretary of Defense, identified several alternatives for the services to consider as they evaluated their bases for potential closure or realignment. One alternative was to shift Army aviation testing from Fort Rucker and Edwards Air Force Base to Yuma. Another option, with greater excess capacity reduction potential across the services, was to consolidate the test and evaluation of air vehicles at a single DOD center at either the Navy’s Patuxent River, Maryland, testing facility or Edwards Air Force Base. Consolidation of Army aviation testing at one of these sites was contingent upon agreement by the Air Force and Navy for consolidation of their aviation testing. However, the services disagreed greatly over how to reduce their excess testing capacity, and little progress was made, particularly in the area of cross-servicing. Congress has also encouraged downsizing, consolidation, and restructuring of the services’ laboratories and test and evaluation infrastructure, including rotary wing aircraft. Section 277 of the National Defense Authorization Act for Fiscal Year 1996 (P.L. 
104-106) requires that the Secretary of Defense, acting through the Test and Evaluation Executive Agent Board of Directors, develop and report to congressional defense committees, by May 1, 1996, a plan to consolidate and restructure DOD’s laboratories and test and evaluation centers by the year 2005. Of more immediate concern to DOD was the Army Secretary’s June 1995 tentative decision to consolidate Army aviation testing at Fort Rucker/Yuma. The Director, Test Systems Engineering and Evaluation, in the Office of the Under Secretary of Defense for Acquisition and Technology, expressed concern that Fort Rucker was not part of DOD’s Major Range and Test Facility Base (MRTFB). He noted in a letter to the Test and Evaluation Executive Agent Board of Directors on September 12, 1995, that there had been a long-standing understanding within the DOD testing community that any consolidation of test and evaluation activities should be at an MRTFB facility unless there was a compelling reason otherwise. He also noted the principle of selecting courses of action that are optimum for DOD rather than for a single program or service. The Army, tasked with responding on behalf of the Board, noted that personnel and budget constraints required the Army to take immediate action to reduce costs in many areas; additionally, the Army noted that it was these economic circumstances, as well as the Army requirement to achieve short- and medium-term budgetary savings, that led to its decision. Several service officials we met with also questioned the selection of a non-MRTFB facility (Fort Rucker) in light of future directions of aviation testing. These officials indicated that advanced helicopter systems are increasingly employing integrated electronics and, as a result, it is important to test the electronics and airworthiness at the same time. 
Various officials also suggest that it is important to do testing of the aircraft configured with its weapon systems, operating the electronic equipment, and firing the weapons. They also said it is important to do integrated testing to avoid gaps in testing programs. ATCOM’s 1993 study of aviation testing noted that as weapons and electronic warfare equipment become a more integral part of the air vehicle, it is increasingly important that the whole system, not merely its parts, be tested. This suggests the importance of locating testing at an MRTFB facility.

Recommendation

There is a continuing need to reduce and consolidate excess infrastructure within DOD, including that which exists within the services’ testing community. Also, the Army has a compelling need to consolidate its aviation testing because of reductions in its workload and continuing reductions in authorized personnel. Consequently, we recommend that the Secretary of Defense, in conjunction with the Test and Evaluation Executive Agent Board of Directors, reexamine the Army’s aviation consolidation plan within the context of its congressionally mandated plan for consolidating laboratories and test and evaluation facilities. Such a reexamination should include a timely determination of whether DOD could reduce excess testing capacity and achieve greater long-term savings Defense-wide through consolidation of Army aviation testing on a cross-service basis and, if so, identification of the appropriate locations and an action plan for achieving such a consolidation.

Agency Comments and Our Evaluation

In official oral comments, DOD generally concurred with this report and agreed to examine the Army’s aviation consolidation plan within the context of its congressionally mandated plan for consolidating laboratories and test and evaluation facilities, due to Congress by May 1, 1996. 
However, DOD also agreed to the Army proceeding with its current aviation consolidation plan, but only to the extent that near-term savings can be realized, and holding in abeyance any actions such as construction or other investments that could be lost if far-term consolidation plans differ from the Army’s short-term actions. DOD’s agreement with the Army moving forward with its current consolidation plan raises questions about the extent to which the issue of cross-servicing will be dealt with in the near term. We continue to believe that a serious examination of the potential for cross-servicing in the test and evaluation arena is warranted. DOD also expressed the view that our adjustments to the Army’s cost and savings analysis, while not affecting the outcome of our review, did result in what it considered an inflated estimate of expected annual savings in our report. Our approach, following methodology employed in the BRAC process, made appropriate and consistent calculations of one-time and long-term costs and savings for each location option; in doing so, we considered costs and savings both to the Army as a whole as well as to the test and evaluation program. We believe that this is an appropriate approach to fully account for expected costs and savings. Our scope and methodology are discussed in appendix I. Unless you announce its contents earlier, we plan no further distribution of this report until 15 days after its issue date. At that time, we will send copies to the Chairmen, Senate Committee on Armed Services; Subcommittee on Defense, Senate Committee on Appropriations; House Committee on National Security; and Subcommittee on National Security, House Committee on Appropriations; the Director, Office of Management and Budget; and the Secretaries of Defense and the Army. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report were Barry W. 
Holman, Assistant Director; Raymond C. Cooksey, Evaluator-in-Charge; and David F. Combs, Senior Evaluator.

Scope and Methodology

We obtained and reviewed various studies completed by the Army’s Test and Evaluation Command (TECOM) and others pertaining to the consolidation of aviation test facilities. Discussions were held with pertinent officials at the Department of the Army headquarters; TECOM headquarters at Aberdeen Proving Ground, Maryland; and TECOM test sites at Yuma Proving Ground, Arizona; Edwards Air Force Base, California; and Fort Rucker, Alabama. We obtained and analyzed various data at each of these locations to assess the completeness and reasonableness of the data included in the Army’s consolidation studies and data used by the Secretary of the Army in making his June 1995 tentative decision to consolidate testing at two sites. We did not attempt to develop budget-quality data, but focused on the adequacy of data to provide relative comparisons among competing locations. Because we had concerns about the comparability of private sector wage data used by the Army in projecting aircraft maintenance costs, we obtained current Department of Labor wage rate data to provide another basis for comparing potential costs. In assessing projected costs and savings for each consolidation option, we also performed selected sensitivity analyses to determine how changes in some data elements would affect the relative costs and savings of each location. To broaden our perspective on aviation test and evaluation issues and future requirements, we held discussions with key testing officials in the Office of the Secretary of Defense, the Army’s Aviation and Troop Command, the Air Force Flight Test Center at Edwards Air Force Base, and the Naval Air Warfare Center at Patuxent River, Maryland. Additionally, we reviewed pertinent documentation and analyses from the 1995 base realignment and closure process. 
We conducted our work between August 1995 and January 1996 in accordance with generally accepted government auditing standards.

Adjustments to the Army’s Cost and Savings Data

We made adjustments to the Army’s costs and savings data to obtain a more complete picture of expected savings from consolidated testing activities. We factored in savings in two areas not fully reflected in the Army’s analysis. The first involved the fact that TECOM had claimed only the savings proportional to its direct funding. Approximately 40 percent of TECOM’s budget involves direct funding; the remainder is derived from customer billings. We, therefore, adjusted the savings upward to more fully account for total Army savings. The second area involved savings attributable to reductions in military personnel that would occur as a direct result of the consolidations. TECOM’s written organizational concept outlining plans for consolidation cited specific expected reductions in military personnel because of consolidation. It had not included these savings in its analysis; we added them in. These changes produced significant increases in projected annual recurring and long-term savings to the Army. We made some adjustments to the Army’s calculations of base operating support and real property maintenance services. Cost comparisons for this area had proven problematic for the Army, since the Aviation Technical Test Center was not billed for these services at Fort Rucker. Therefore, TECOM opted to develop average base operating and real property maintenance costs based on actual costs at Fort Rucker and Edwards Air Force Base and apply that average to all three locations. TECOM officials did not have actual cost data for Yuma. We used the Army’s data for Fort Rucker and Edwards to assess the impact on base operating costs for the various consolidation options. The effect was some decrease in projected savings from a consolidation at Edwards Air Force Base and an increase in savings at Fort Rucker. 
Because comparable base operating cost data were not readily available for Yuma, and assuming that actual base operating costs at Yuma would likely be somewhere between those at Fort Rucker and Edwards, we applied an average cost figure to base operating costs at Yuma. The effect on the Yuma option was negligible. We recognized a concern expressed by the Edwards community that actual Army/TECOM reimbursements to the Air Force for base operations were about $400,000 less than those included in the Army’s analysis. A counterpoint, according to TECOM officials, is that the Aviation Technical Test Center is not directly billed for any base operating support costs at Fort Rucker. Absent time for a more detailed assessment of base operating costs at each of the locations, we considered the Army’s methodology, with adjustments as noted above, to represent a reasonable approach for comparing such costs. Nevertheless, we conducted a sensitivity analysis, reducing base operating costs at Edwards by $400,000 to determine the impact on recurring savings at Fort Rucker, and found that the relative cost advantage of each competing location remained unchanged. In reviewing contracted aircraft maintenance cost estimates, we found broad differences in estimates of labor costs at the three locations. The Army’s most recent study had used a wage differential of 5.7 percent between Fort Rucker and Yuma, based on actual experience at the two locations. However, it used a wage difference of 19 percent between Fort Rucker and Edwards Air Force Base, based on federal wage grade tables. The study assumed the work, if moved to Edwards, would be contracted out. The most recent Department of Labor wage rate data for aircraft mechanics showed the differences between Fort Rucker and Yuma and between Fort Rucker and Edwards Air Force Base to be 28.2 percent and 25.8 percent, respectively. 
While Department of Labor wage rates provide a uniform basis for comparison, various Army officials have expressed concern that actual costs at the time a contract would be negotiated would be somewhat less than indicated by the Department of Labor data. For uniformity in comparing differences among the three locations, we chose to adjust the Army’s data to reflect current Department of Labor wage differences among the three locations. However, assuming that actual costs could likely fall somewhere between the two approaches, our adjusted data on savings show a range of savings to reflect each approach. The low end, with smaller recurring savings, is based on Department of Labor wage differentials. Our adjustments to the Army’s data affected various cost and savings data elements. For example, the aircraft maintenance adjustments had the effect of increasing projected annual operating costs at Yuma and Edwards relative to Fort Rucker and reducing projected long-term savings at those locations. Also, while Yuma, as a single-site option, had greater savings in personnel costs, Yuma’s aggregate savings were diminished by higher projected contract maintenance costs attributed to differences in area wage rates. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. 
To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the Secretary of the Army's tentative decision to move aviation testing activities now at Edwards Air Force Base, California, to Fort Rucker, Alabama, and retain Yuma Proving Ground. GAO found that: (1) the Army failed to fully account for savings from consolidating its aviation testing activities; (2) consolidation at Fort Rucker and Yuma Proving Ground will result in the greatest short-term and significant long-term savings; (3) single-site consolidation at Yuma will result in the greatest long-term savings and an optimum testing environment for future testing; (4) the Department of Defense (DOD) and the services have not reached consensus about how best to consolidate and downsize test activities; (5) excess aviation testing capacity within DOD signals that consolidation is necessary to reduce this excess; and (6) the Secretary of Defense will need stronger commitment and leadership to evaluate whether these options or other options will serve DOD best.
Background Wasteful year-end spending can occur when agencies rush to use funds at the end of the fiscal year. This is often an attempt to spend funds that would otherwise expire, meaning they would no longer be available for new obligations after the fiscal year ends. In its 1980 report, the Subcommittee recognized that higher fourth quarter obligations may not indicate a problem with wasteful spending. The Subcommittee noted that spending at year-end may be the result of legitimate, planned, and worthwhile spending intended by Congress. However, the Subcommittee found numerous examples in which agencies took shortcuts in the last few weeks of the fiscal year that led to questionable contracts. Hurry-up procurement practices resulted in the purchase of millions of dollars’ worth of goods and services for which there was no demonstrated current need. The Subcommittee found that to spend quickly, the government frequently paid inflated prices, incurred higher administrative costs for overtime, and awarded contracts that were not in the government’s best financial interest. At the time the Subcommittee issued its 1980 report, civilian and defense agencies operated under separate procurement systems with different authorities and regulations. Agencies were expected to use competition to the maximum extent practicable, but there was no statutory requirement for the justification and approval of sole-source contracts. Our prior work on year-end spending has shown that problems occurred in the past when budget execution was not monitored effectively. Periodically, Congress has asked that we review and report on agencies’ rates of obligations. A continuing theme of these earlier reports was the questionable quality of the data reported to Treasury and OMB. In our earlier work, we used data published in the quarterly Treasury Bulletin, which was aggregated by department, agency, and object classification, that is, by items of expense. 
The source of this information was Treasury’s Financial Management Service (FMS) Standard Form (SF) 225 - Report on Obligations. In December 1995, according to Treasury officials, the reporting requirement and the resulting data published in the Treasury Bulletin were eliminated to reduce the reporting burden on agencies. OMB continues to require that agencies report their quarterly obligations on the SF 133 - Report of Budget Execution (SF 133), approximately 20 days after the close of each calendar quarter. Unlike the SF 225, obligations are not shown by object classification. Agencies are also expected to reconcile their year-end SF 133 report with comparable data provided to the Department of the Treasury on the FMS 2108 - Year-End Closing Statement (FMS 2108) and the SF 224 - Statement of Transactions (SF 224). These reports show budget execution data for each appropriation or fund account established by Treasury for a specific period of availability, i.e., annual, multiyear, or without fiscal year limitation. Scope and Methodology To identify reforms in procurement and management practices, we reviewed major legislation enacted since the Subcommittee’s 1980 report was published. Reforms include the Competition in Contracting Act of 1984, the Government Performance and Results Act of 1993, the Federal Acquisition Streamlining Act of 1994, and the Clinger-Cohen Act of 1996. We also interviewed knowledgeable OMB and inspectors general (IG) staffs to ensure that we had a comprehensive view of these reforms, to identify additional administrative efforts, and to obtain multiple perspectives on whether improper year-end spending was a significant problem. To collect current examples of problems in federal contracting, we reviewed our work on federal contract management and IG semiannual reports dated from fiscal years 1995 through 1997 for 10 major departments and agencies. 
We looked for examples of problem procurements that paralleled concerns identified in the Subcommittee’s report. We were interested in reports that attributed a rush to obligate funds at year-end as a cause for improper contracting practices. We reviewed all IG semiannual reports and selected additional IG reports from fiscal years 1995 through 1997 for the Departments of Agriculture, Commerce, Defense, Energy, Health and Human Services, Housing and Urban Development, the Interior, and Transportation, and for the National Aeronautics and Space Administration (NASA) and General Services Administration (GSA). Additional details on agencies for which we have identified contract management as a high-risk area—Defense, Energy, the Environmental Protection Agency (EPA), and NASA—with corresponding examples from IG reports, are included in appendix I. For data on agencies’ obligation rates, we obtained an automated OMB report containing detailed budget execution information provided by agencies through Treasury’s Government On-Line Accounting Link System (GOALS). Using agency-reported SF 133 year-end obligation data, we calculated quarterly rates of spending for fiscal year 1997, and identified examples of incomplete reporting by agency and bureau. In those cases where fourth quarter cumulative data were missing, we included cumulative data from the most recent quarter. In a second analysis, we compared these data with budget formulation data published in the prior year column of the President’s Fiscal Year 1999 Budget. We did not independently verify the data that agencies provided to OMB. Our work was performed in Washington, D.C., from October 1997 through March 1998 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Director of the Office of Management and Budget or his designee. 
On July 14, 1998, the Assistant Director for Budget; the Chief, Budget Concepts Branch of Budget; and their staff provided us with comments, which are discussed in the “Agency Comments and Our Evaluation” section. Potential for Improper Year-End Spending Has Been Constrained Changes in the budget environment and procurement reforms have reduced the potential magnitude of problems with year-end spending. Tight fiscal controls coupled with requirements for full and open competition and advance planning make it less likely that year-end spending will lead to sole-source or unplanned procurements. This is not to suggest that improper year-end spending no longer occurs or that the procurement system cannot be improved further. We have identified contract management as a high-risk area for certain agencies, and IGs continue to find individual contracts that are poorly executed or monitored. Changes in the Budget Environment Affect Year-End Spending Fewer funds for agency operations, along with more funds made available for longer than 1 year, reduce the opportunity and the need for many agencies to spend quickly at year-end. Increasingly, federal government spending is made up of direct payments to individuals or grants to states not subject to year-end spending pressures. Correspondingly, funding for agency operating expenses, e.g., costs for federal personnel, equipment, supplies, printing, and contractual services, continues to decline as a share of total spending. As illustrated in figure 1, funding for agency operations has decreased from 48 percent of total gross obligations in fiscal year 1981 to 31 percent in fiscal year 1997. Deficit reduction legislation reinforced this trend by placing annual limitations on the one-third of federal spending that is controlled through the appropriations process—and that includes most government day-to-day operations. At the same time, appropriators have made funds available for more than 1 year. 
Today, approximately two-thirds of budget accounts on an annual appropriations cycle have some funds available for more than 1 year or available until spent without fiscal year limitation. Agencies have been able to extend some contracts across fiscal years even though their funding is appropriated annually. Recently, the National Defense Authorization Act for Fiscal Year 1998 broadened the authority of the Department of Defense (DOD) to obligate appropriated funds for severable service contracts that cross fiscal years if the contract periods do not exceed 1 year. As part of this provision, Congress has asked that we report on any abuses of the provision, including whether they have occurred in an attempt to circumvent year-end spending limitations. Comparable authority was given to civilian agencies in the Federal Acquisition Streamlining Act of 1994. Procurement Changes Address the Subcommittee’s Concerns OMB officials stated that in their view, improper or unnecessary contracts associated with the rush to spend funds at year-end are far less of a problem than they once were due to open competition requirements, improved agency procurement planning, and fewer available resources. The nine IG officials we contacted shared this view. Out of over 3,200 IG reports reviewed, only 1 report explicitly identified a relationship between poor contracting practices and the need to spend funds quickly at year-end. However, GAO and the IGs continue to find weaknesses in some agencies’ handling of contract management and with individual procurements. In our High-Risk Series, we identified a number of agencies with poor contract management practices, such as poor planning and inadequate oversight, that make them vulnerable to some of the same problems with wasteful year-end spending that were identified in the Subcommittee’s 1980 report. 
See appendix I for a summary of our findings and examples taken from IG reports for those agencies for which we identified contract management as a high-risk area. Despite problems with some agencies, the procurement system has undergone significant changes since the Subcommittee’s report and now substantially incorporates the Subcommittee’s recommendations. The Competition in Contracting Act of 1984 (CICA), for example, provided a more common procurement system for defense and civilian agencies, established a “full and open competition” standard more rigorous than the “maximum practicable competition” that preceded it, and included sole-source approval and procurement notice requirements in the procurement statutes. The conference report on CICA suggests that some of the act’s changes to the procurement system were intended to address the year-end spending concerns raised by the Subcommittee. The Defense Acquisition Workforce Improvement Act (DAWIA) addressed many of the Subcommittee’s acquisition personnel-related concerns at DOD by requiring improvements in the qualifications, training, and career development of the defense acquisition workforce. Two more recent acts, the Federal Acquisition Streamlining Act of 1994 (FASA) and the Clinger-Cohen Act of 1996 (CCA), responded to Subcommittee concerns regarding contract personnel performance incentives. Other CCA reforms required comparable qualification and training standards for civilian agencies. To illustrate the way in which changes to the procurement system have addressed the Subcommittee’s concerns, table 1 associates the Subcommittee’s recommendations with descriptions of statutory provisions that implement them in whole or in part. Procurement and management reforms continue to evolve and influence the issue of year-end spending. Two of the most significant procurement reforms were enacted within the last 4 years and other management reforms are in early phases of implementation. 
As a result, it is too early to assess their full impact or to determine what further refinements may be needed. FASA was intended to simplify the procurement system, and CCA added requirements for information technology capital planning as well as career development and performance incentives for non-DOD acquisition personnel. In addition, management reforms outside of the strictly procurement sphere have influenced the procurement process, particularly procurement planning. The strategic planning provisions of the Government Performance and Results Act of 1993 (Results Act) require integration of capital procurement, budget, and program planning. The National Defense Authorization Act for Fiscal Year 1998 requires expanded use of streamlined micropurchase procedures in DOD. Budget Execution Data Not Reliable Reliable quarterly obligation rates for fiscal year 1997 for the major departments and agencies, as well as the government as a whole, were not available because of incomplete reporting of budget execution data. Additionally, there were significant differences in the three sets of data that agencies reported for fiscal year 1997. Data are reported in (1) final budget execution reports to OMB (SF 133), (2) the prior year column of the President’s Fiscal Year 1999 Budget, and (3) Treasury’s Fiscal Year 1997 Annual Report. OMB told us that the OMB and FMS project to merge these separate year-end reporting requirements will resolve or greatly alleviate the differences in year-end reporting data. However, it is less likely to address problems with quarterly reporting or ensure adequate oversight of budget execution during the fiscal year. Agencies’ failure to report and reconcile budget execution information is another example of the broader financial management concerns we raised in our financial audit of the fiscal year 1997 Consolidated Financial Statements of the United States Government. 
Rates of Obligation Could Not Be Determined Because agencies did not report complete quarterly budget execution data, we could not determine whether agencies obligated at a higher rate in the fourth quarter than in previous quarters of fiscal year 1997. Our review of OMB-provided agency quarterly budget execution reports (SF 133) showed significant gaps in all major agencies as a result of nonreporting. Of the 1,054 Treasury accounts in major departments and agencies that we reviewed, 332, or 32 percent, showed no information in the first quarter. Although some programs may not incur obligations until later in the fiscal year, a similar comparison in the last quarter showed that 88, or 8 percent, of the accounts reported no cumulative obligations. Although OMB did not systematically follow up with nonreporting agencies during the year, it did publish a comparison of year-end differences in budget execution and formulation information for fiscal year 1997. In addition to the nonreporting we identified, OMB found 114 accounts—or 10 percent of the accounts published in the President’s Budget Appendix—that were expected to submit budget execution data on SF 133 submissions but did not. We found that three agencies—DOD, the Department of Energy, and the Department of Housing and Urban Development (HUD)—showed quarterly rates of obligations that were particularly misleading because nonreporting (1) was widespread, with Energy and HUD failing to report in two or more quarters for at least half of their total accounts, and (2) included accounts with significant resources. Year-End Budget Execution and Formulation Data Differed Significantly We found significant differences when we compared year-end budget execution obligation data with comparable data reported by agencies in formulating the President’s Fiscal Year 1999 Budget. 
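The two calculations described above can be sketched with hypothetical account data: SF 133 obligations are reported cumulatively, so quarterly rates are the increase over the prior quarter (with the most recent reported quarter carried forward where later data are missing, as noted in the methodology section), and year-end comparisons can be totaled either net, where over- and under-reporting offset, or in absolute terms, where they accumulate. The function names and figures here are illustrative, not drawn from the agency data.

```python
# Minimal sketch, with hypothetical figures, of two calculations described in
# this report: deriving quarterly obligation rates from cumulative SF 133 data,
# and totaling net versus absolute differences between year-end SF 133
# obligations and the actuals reported for the President's Budget.

def quarterly_rates(cumulative):
    """Convert cumulative quarterly obligations (None = not reported)
    into incremental per-quarter amounts, carrying the most recent
    reported cumulative figure forward over missing quarters."""
    filled, last = [], 0
    for value in cumulative:
        last = value if value is not None else last
        filled.append(last)
    return [filled[0]] + [filled[i] - filled[i - 1] for i in range(1, len(filled))]

def reporting_differences(accounts):
    """accounts: (sf133_obligations, presidents_budget_actuals) pairs.
    Returns (net, absolute) totals of actuals minus SF 133 amounts;
    in the absolute total, over- and under-reporting do not offset."""
    diffs = [actual - sf133 for sf133, actual in accounts]
    return sum(diffs), sum(abs(d) for d in diffs)

# Hypothetical account with no report in the third and fourth quarters:
# the second quarter's cumulative total is carried forward.
print(quarterly_rates([250, 600, None, None]))  # [250, 350, 0, 0]

# One account over-reports by 50, another under-reports by 120.
print(reporting_differences([(400, 350), (200, 320)]))  # (70, 170)
```

The gap between the two totals in the second calculation is why a net understatement can mask a much larger volume of gross misreporting across accounts.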
OMB Circular A-11 requires that agencies report consistent year-end data to Treasury for its Annual Report and to OMB for the final SF 133 - Report on Budget Execution and prior year information for the President’s Budget. We found that of the 14 major departments, 5 reported total fiscal year 1997 obligations that were at least 50 percent higher in the President’s Fiscal Year 1999 Budget than the amounts reported in their respective year-end SF 133s. Of the major departments and agencies, Education, HUD, and NASA each reported total obligations that were over 85 percent higher in the President’s Fiscal Year 1999 Budget, while only DOD, Energy, EPA, and GSA reported essentially the same information to OMB and Treasury. In its report, OMB stated that the absolute value—that is, the combined over-reporting and under-reporting of fiscal year 1997 obligations shown on agencies’ SF 133s compared with actual obligations reported in the President’s Fiscal Year 1999 Budget—was $324 billion, a reporting difference of 15 percent. OMB reached conclusions similar to ours, that (1) data in the actual-year column in the President’s budget request should agree with year-end budget execution data reported to OMB, but did not and (2) governmentwide, SF 133 data were understated when compared to data reported in the President’s Fiscal Year 1999 Budget. According to OMB, governmentwide obligations were understated by a net $152 billion in the final fiscal year 1997 SF 133 reports. A New Data System Is Unlikely to Resolve Quarterly Reporting Problems FACTS II is a new data collection system that according to OMB, will satisfy most of its and FMS’ year-end reporting requirements. Currently, agencies report accounting information, including the FMS 2108 - Year-End Closing Statement, through GOALS, Treasury’s automated reporting system. This system is also used to transmit agencies’ SF 133 reports to OMB, although Treasury does not verify the accuracy or completeness of this information. 
FACTS II will collect a single set of year-end data from agencies beginning in fiscal year 1999; OMB expects this to improve the link between budget execution data and prior year information in the President’s Budget. Merging separate Treasury and OMB reporting requirements should eliminate discrepancies between budget execution and formulation data for the prior fiscal year because FACTS II will be the only source for this information. However, there is nothing in this change that fosters compliance with quarterly reporting requirements or the oversight of the budget execution process during the fiscal year. Budget Execution Reporting Problems Reflect Broader Financial Management Concerns Agencies’ unreliable reporting and reconciliation of budget execution data mirrors problems with other financial information found in the first audit of the federal government’s consolidated financial statements. For example, we found that government agencies reported hundreds of billions of dollars of assets that were not adequately supported by financial records. Also, several major agencies were not effectively reconciling their fund balances with Treasury accounts. For example, there were billions of dollars of unresolved gross differences between agencies’ and Treasury’s records of cash disbursements as of the end of fiscal year 1997. The accuracy of the appropriation and fund account balances reported on FMS 2108 - Year-End Closing Statements and SF 224 - Statements of Transactions, which are used to prepare the Treasury’s Annual Report, depends on agencies properly reconciling differences reported by Treasury during the year. Each agency will need to consider these reporting and reconciliation problems in order to prepare its Statement of Budgetary Resources and Statement of Financing for its financial statements beginning in fiscal year 1998. 
Agencies whose financing is wholly or partially from budgetary resources will need to report in these statements on the availability and status of these funds for the reporting period. Since the Statement of Budgetary Resources is budget rather than accrual-based, Statement of Federal Financial Accounting Standards No. 7, Accounting for Revenue and Other Financing Sources and Concepts for Reconciling Budgetary and Financial Accounting, requires that agencies reconcile obligations and outlays reported on the SF 133 with other financial accounting information, which is then included in the agency’s audited financial statements. The Statement of Financing requires that agencies show the relationship between budgetary resources obligated for a federal program entity and its operations, and the net cost of operating that entity by reporting differences and reconciling proprietary and budgetary accounts. OMB has the lead responsibility, in consultation with the Chief Financial Officers Council and others, in developing the form and content of these statements and in ensuring that agencies comply with reporting requirements. Observations Since the Subcommittee’s 1980 report, substantial reforms in procurement planning and competition requirements have changed the environment, as has the declining share of federal funds available for agency operations. Agencies may still be tempted to quickly spend funds that will expire, but year-end spending is unlikely to present the same magnitude of problems and issues as before. Although agencies have the primary responsibility for ensuring that their budgets are executed and accounted for properly, our study revealed that the ability of Congress and OMB to oversee the rate and timing of federal spending across agencies is limited in the absence of complete and accurate reporting. 
In addition, it points to inadequate central oversight of the financial status of the federal government because of agencies’ widespread reporting noncompliance. Even at year-end, budget execution data reported to OMB and year-end accounting data provided to Treasury do not agree for many agencies. The joint OMB and Treasury proposal to merge year-end reporting requirements through a shared database will eliminate the potential for discrepancies between reports, but by itself does nothing to increase compliance with quarterly reporting requirements or oversight of budget execution during the year. OMB needs to reemphasize the existing OMB Circular A-34 requirement that agencies report budget execution information no later than 20 days after the close of the calendar quarter and investigate agency nonreporting or questionable reporting of quarterly and year-end data. OMB also needs to examine areas in which obligations vary significantly from planned or historical rates to ascertain the reasons for these differences and to monitor agencies’ implementation of their Statements of Budgetary Resources and Statements of Financing, which should provide additional insights. Recommendation To improve oversight of agencies’ execution of the budget, we recommend that the Office of Management and Budget reemphasize compliance with the OMB Circular A-34 requirement that agencies provide quarterly data no later than 20 days after the close of a calendar quarter, and examine quarterly reporting by agencies that varies significantly from planned or historical rates. We also recommend that the Office of Management and Budget continue its efforts to integrate budget and accounting reporting at year-end and report periodically on progress made. Agency Comments and Our Evaluation In oral comments, OMB stated that in the last 3 years it has taken several steps to improve the quality of budget execution data. 
OMB officials said that they have actively directed a Treasury contractor to build a new SF 133 data collection system that has been used since 1996 to allow OMB to access data directly. Using these data, OMB staff developed reports that present the data in different ways to assist analysis by OMB examiners and agency analysts. In addition, OMB has embarked on a training program and is continuing to provide extensive training within OMB and to the agencies on the value of SF 133 data. OMB’s increased attention to monitoring budget execution data is important and we support its effort to increase the quality and use of this information. OMB’s inclusion of crosswalks in recent budget formulation and execution circulars that show data relationships between year-end reports should be particularly helpful to agencies. Persistent attention, including follow-up by OMB examiners when agencies do not provide data, provide data late, or provide questionable data, should signal the need for agencies to take budget and financial management reporting and reconciliation requirements seriously. OMB officials also provided clarifying comments, which we have incorporated in the report where appropriate. We are sending copies of this report to other interested Members of Congress and the Director of the Office of Management and Budget. We will make copies available to others on request. Please call me at (202) 512-9573 if you or your staff have any questions. Major contributors to this report are listed in appendix II. High-Risk Contract Management Agencies In 1990, we began reporting on federal program areas that were at risk because of vulnerabilities to waste, fraud, abuse, and mismanagement. We periodically report on agencies’ progress in correcting deficiencies and on where additional actions need to be taken. Our most recent High-Risk Series, published in February 1997, includes high-risk contract management in certain civilian agencies and DOD. 
Since the problems associated with wasteful year-end spending—poor planning, insufficient competition, and inadequate contract oversight—can occur at any time during the fiscal year, we have included the following summary of our findings regarding high-risk agencies based on our work. We also include some related examples drawn from our reviews of IGs’ reports. We have identified contract management as a high-risk area at DOD, Energy, NASA, and EPA and noted long-standing problems with their contract payment and oversight functions. For example, as noted in our 1997 High-Risk Series, we found that in recent years, DOD experienced numerous problems in making accurate payments to defense contractors. We noted that while DOD had taken steps to address its payment problems, it should also (1) improve and simplify its contract payment system and (2) further strengthen its oversight of contractor cost-estimating systems. Doing so would enable DOD to achieve effective control over contract expenditures. The DOD IG also found examples of overpayment or unreasonable pricing. In fiscal year 1996, the DOD IG reported that overpayments of $43.6 million were made to a contractor because requests for progress payments had not been prepared properly. In another case, the IG found that various defense construction and supply centers had paid $15.8 million more than they should have on 63 procurements of spare parts. We also designated Energy’s contract management as high risk because its extensive reliance on contracting and history of inadequate oversight of contractors failed to protect the federal government’s financial interests. In our 1997 High-Risk Series, we reported that Energy had made progress in developing an extensive array of policies and procedures, such as publishing a new regulation adopting a standard of full and open competition for the award of its management and operating contracts. 
We concluded that the department would need to continually monitor the award of these contracts to maintain its momentum and priority in implementing contract reform. During 1997 and 1998, the Energy IG reported on problems with the performance-based incentives for fiscal years 1995 and 1996 at four sites. The problems ranged from incentive payments in excess of the cost of labor and materials for the work performed to the award of incentive fees for work that either was not completed or was done before the incentive program was established. In addition, the IG for Energy reported during 1997 on its assessment of the implementation of performance-based incentive contracts. In its report, Energy’s IG raised concerns about insufficient formal guidance for developing and administering performance incentives and the lack of criteria for measuring performance or allocating fees. EPA has had long-standing problems in controlling contractors’ charges, particularly in its Superfund program. In fact, we have repeatedly reported that EPA has not overseen its cost-reimbursable contracts to prevent contractors from overcharging the government. We also found that although EPA had recently strengthened its management and oversight of Superfund contractors, the agency remained too dependent upon contractors’ own cost proposals to establish the price of cost reimbursable work. Thus, we suggested that EPA could better estimate the costs of contractors’ work, use the estimates to negotiate reasonable costs, provide contractors with appropriate incentives to hold down their administrative expenses, and increase the timeliness of contract audits. Although NASA has improved its contract and procurement operations by placing greater emphasis on contract cost control and contractor performance, we and NASA’s IG continue to identify problems in NASA’s contract management and opportunities to improve procurement oversight. 
For example, NASA’s IG concluded that one NASA-negotiated contract included $22.7 million in financing, insurance interest, and termination liability insurance costs that are generally prohibited under Federal Acquisition Regulations (FAR). In 1997, we suggested that NASA identify its contract management problems early on so they could be evaluated, monitored, and corrected before becoming systemic. We also suggested that additional agencywide guidance could help NASA ensure more consistent and thorough coverage of the procurement cycle. While recent reforms have allowed agencies the option of making small purchases by credit card, a NASA IG survey report entitled NASA Procurement Initiatives, Credit Card Program found that NASA split a $168,000 computer procurement into 80 single purchases, enabling each purchase to fall below the Government Credit Card limit of $2,500. The IG concluded that NASA violated the FAR prohibition against splitting requirements. Similar problems were reported by IGs at Commerce, Energy, and Transportation. Only one of the IG reports we reviewed explicitly identified a relationship between poor contracting practices and the need to spend funds quickly. In its report, Interior’s IG detailed the results of its evaluation of the Bureau of Indian Affairs’ (BIA) road construction projects. The IG reported that some of BIA’s road projects were poorly designed and planned because BIA rushed to award contracts to avoid returning unspent funds to the Federal Highway Administration at the end of the fiscal year. The report concluded that BIA’s practices led to construction delays that increased costs by $3.3 million. Major Contributors to This Report Accounting and Information Management Division, Washington, D.C. San Francisco Field Office Office of the General Counsel Related GAO Products U.S. Government Financial Statements: Results of GAO’s Fiscal Year 1997 Audit (GAO/T-AIMD-98-128, April 1, 1998). 
Executive Guide: Leading Practices in Capital Decision-Making (Exposure Draft) (GAO/AIMD-98-110, April 1998).
Financial Audit: 1997 Consolidated Financial Statements of the United States Government (GAO/AIMD-98-127, March 31, 1998).
Defense Acquisition: Improved Program Outcomes Are Possible (GAO/T-NSIAD-98-123, March 18, 1998).
Best Practices: DOD Can Help Suppliers Contribute More to Weapon System Programs (GAO/NSIAD-98-87, March 17, 1998).
Defense Management: Challenges Facing DOD in Implementing Defense Reform Initiatives (GAO/T-NSIAD/AIMD-98-122, March 13, 1998).
Acquisition Reform: Implementation of Key Aspects of the Federal Acquisition Streamlining Act of 1994 (GAO/NSIAD-98-81, March 9, 1998).
Best Practices: Successful Application to Weapon Acquisition Requires Changes in DOD’s Environment (GAO/NSIAD-98-56, February 24, 1998).
Financial Audit: Reconciliation of Fund Balances with Treasury (GAO/AIMD-97-104R, June 24, 1997).
Budget Issues: Budgeting for Federal Capital (GAO/AIMD-97-5, November 12, 1996).
Information Technology Investment: Agencies Can Improve Performance, Reduce Costs, and Minimize Risks (GAO/AIMD-96-64, September 30, 1996).
Budget and Financial Management: Progress and Agenda for the Future (GAO/T-AIMD-96-80, April 23, 1996).
Pursuant to a congressional request, GAO reviewed the: (1) actions taken to correct problems with federal yearend spending practices and the award of government contracts; and (2) quarterly obligation data for selected departments and agencies to determine if fourth quarter obligations were higher than obligations in earlier quarters of the fiscal year (FY). GAO noted that: (1) changes in the budget environment and procurement reforms have affected the opportunity and need to obligate funds quickly at yearend; (2) agencies spend far less today than they did in 1980 on providing goods and services directly, as payments to individual beneficiaries and grants to state and local governments have increased; (3) this trend, combined with limits on discretionary spending, has significantly changed the budget environment for most agencies; (4) at the same time, Congress has made funds available for longer periods for many agencies, which reduces the pressure to spend funds at the end of each year; (5) in addition, systemic procurement reforms addressed most of the issues raised in the Subcommittee on Oversight of Government Management, Senate Committee on Governmental Affairs' report although problems persist in certain agencies and with some procurements; (6) GAO's work and that of others indicates that today, there are more safeguards against unplanned yearend spending and, in most discretionary programs, fewer resources available for low-priority purchases than in 1980; (7) despite these changes, it is difficult to assess the patterns of spending during the year because reported quarterly budget execution data are not reliable; (8) without complete and timely information for oversight, the Office of Management and Budget (OMB) and other decisionmakers do not have an accurate assessment of the financial status of federal programs during the year; (9) even at yearend, there are significant differences in three comparable sets of data that agencies report to OMB and the 
Department of the Treasury; (10) although OMB officials stated that a new system they have built jointly with Treasury to collect yearend data starting in FY 1999 should resolve or greatly alleviate the differences in yearend budget data, more work is needed to assure compliance with the requirement for quarterly data; and (11) agencies' failure to report and reconcile budget execution information mirrors broader financial management problems found in GAO's financial audit of the FY 1997 Consolidated Financial Statements of the United States government.
Background Over the course of the last quarter century, the epidemic has spread to every region of the country. HIV and AIDS cases have been reported in all states, the District of Columbia, and U.S. territories, but the impact of the epidemic varies by region and within states. The South is estimated to have the highest cumulative number of diagnosed AIDS cases, people living with AIDS, and deaths from AIDS. In 2003, 7 of the 10 states with the highest estimated rates of individuals living with HIV were located in the South. The CARE Act was enacted in 1990 to respond to the needs of individuals and families living with HIV or AIDS and to direct federal funding to areas disproportionately affected by the epidemic. Titles I and II of the act provide base funding to affected EMAs and states based on the proportion of each jurisdiction’s caseload of AIDS cases. These titles also establish other types of grants to provide supplemental funding. For example, Title II includes Severe Need grants for states with demonstrated need for supplemental funding to support their ADAPs. Title II also includes funding for emerging communities that are affected by AIDS but do not have the 2,000 AIDS cases reported in the most recent 5 calendar years that would make them eligible for Title I funding as EMAs. In order to address the impact of the disease on racial and ethnic minorities, Minority AIDS Initiative grants are distributed through both Title I and Title II to EMAs and states. Metropolitan areas heavily affected by HIV or AIDS have always been recognized within the structure of the CARE Act. We previously found that, with combined funding under Title I and Title II, states with EMAs receive more funding per AIDS case than states without EMAs. To adjust for this situation, the 1996 reauthorization instituted a two-part formula for Title II base funding that takes into account the number of AIDS cases that reside within a state but outside of any EMA’s jurisdiction.
Under this distribution formula, 80 percent of the Title II base grant is based upon a state’s proportion of all AIDS cases, and 20 percent of the allocation is based on the number of AIDS cases within that state’s borders but outside of EMAs. A second provision included in 1996 protected the eligibility of EMAs. The 1996 CARE Act amendments provided that once a jurisdiction is designated an EMA, that jurisdiction is “grandfathered” so it will always receive some amount of funding under Title I even if its reported number of AIDS cases drops below the threshold for eligibility. Hold-harmless provisions and the grandfather clause were maintained in the 2000 reauthorization of the CARE Act. Table 1 describes selected CARE Act formula grants for Titles I and II. The 2000 reauthorization specified that CARE Act Title I and Title II funding formulas should use HIV case counts as early as fiscal year 2005 if such data were available and deemed “sufficiently accurate and reliable” by the Secretary of Health and Human Services (HHS). The 2000 reauthorization also required that HIV data be used no later than the beginning of fiscal year 2007. In June 2004 the Secretary of HHS determined that HIV data were not yet ready to be used for the purposes of allocating formula funding under Title I and Title II of the CARE Act. The Secretary cited a 2004 Institute of Medicine (IOM) report, which identified several limitations in the ability of states to provide adequate and reliable HIV case counts for use in CARE Act formula allocations. CARE Act Funding Provisions Result in Disproportionate Funding Some CARE Act provisions have led to jurisdictions receiving different amounts of funding per AIDS case. The counting of AIDS cases within EMAs once to determine Title I funding and once again to determine Title II funding results in states with EMAs receiving more funding per AIDS case than states without an EMA.
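As a rough illustration, the two-part Title II base formula described above can be sketched in code. The function name and all dollar and case figures below are hypothetical, not actual CARE Act appropriations or caseloads.

```python
# Illustrative sketch of the two-part Title II base formula: 80 percent of
# the pool is allocated by a state's share of all AIDS cases, and 20 percent
# by its share of cases outside any EMA. All figures are hypothetical.

def title_ii_base_share(pool, total_cases, total_non_ema_cases,
                        state_cases, state_non_ema_cases):
    """Return a state's Title II base allocation under the 80/20 split."""
    part_80 = 0.80 * pool * state_cases / total_cases
    part_20 = 0.20 * pool * state_non_ema_cases / total_non_ema_cases
    return part_80 + part_20

# A hypothetical state holding 10 percent of all AIDS cases but only
# 2 percent of non-EMA cases (most of its cases sit inside EMAs, which
# also draw Title I funds on the same cases):
grant = title_ii_base_share(1_000_000_000, 400_000, 100_000,
                            40_000, 2_000)
print(round(grant))  # the state's share of the hypothetical pool
```

Because the same EMA cases also drive Title I base grants, a state like this one is funded twice on most of its caseload, which is the double-counting effect the report describes.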
In addition, Emerging Communities grants are awarded to eligible communities that are separated into two tiers based on each community’s AIDS cases reported in the most recent 5 calendar years. Because one half of the total Emerging Communities grant award is allocated to each tier regardless of the total number of reported AIDS cases in each tier, a disproportionate amount of funding per case was distributed among the grantees in fiscal year 2004. Counting AIDS Cases within EMAs Twice Results in Unequal Funding per Case Across States States with EMAs receive more funding per AIDS case than jurisdictions without EMAs because cases within EMAs are counted twice. The number of AIDS cases used to allocate CARE Act Title I base grants for EMAs is also used in the allocation of 80 percent of Title II base grants for states. The remaining 20 percent is based on the number of AIDS cases in each state outside of any EMA. This 80/20 split was established by the CARE Act’s 1996 amendments to address the fact that states with EMAs received more funding per case than states without EMAs. However, even with the 80/20 split, states with EMAs still receive more funding per AIDS case. States without an EMA receive no funding under the Title I distribution, and thus, when total Title I and Title II CARE Act funds are considered, states with EMAs receive more funding per AIDS case. Appendix I shows the combined fiscal year 2004 funding for all Title I and Title II funding received by each state. Table 2 illustrates the effect of counting EMA cases twice by comparing the relationship between the percentage of a state’s AIDS cases that are within an EMA’s jurisdiction and the amount of funding a state receives per AIDS case. Table 2 shows that as the percentage of a state’s AIDS cases within EMAs increases, the total Title I and II funding per AIDS case also increases for the state. For example, states with no AIDS cases in EMAs received on average $3,592 per AIDS case. 
States with 75 percent or more of their cases in EMAs received on average $4,955 per AIDS case, or 38 percent more funding than states with no EMA. If the total Title I and Title II funding had been distributed equally per AIDS case among all grantees, each state would have received $4,782 per AIDS case. The impact of counting EMA cases twice is that states with similar numbers of AIDS cases can receive different levels of combined Title I and Title II funding. For example, for fiscal year 2004 funding, Connecticut had 5,363 AIDS cases while South Carolina had 5,563 AIDS cases. However, Connecticut had two EMAs that accounted for 91.3 percent of its cases while South Carolina had none. Connecticut received $26,797,308 ($4,997 per AIDS case) in combined Title I and Title II funding while South Carolina, with 200 more cases, received $20,705,328 ($3,722 per AIDS case). Connecticut received 29 percent more funding than South Carolina, a difference of $6,091,980, or $1,275 per AIDS case. The Tiered Allocation of Title II Funds for Emerging Communities Results in Funding Disparities Among States The two-tiered division of Emerging Communities grants results in disparities in funding per case among states. In addition to the base grants for states, Title II provides a minimum of $10 million in supplemental grants to states for communities with populations greater than 50,000 that have a certain number of AIDS cases in the last 5 calendar years. The funding is equally split so that half the funding is divided among the first tier of communities with 500 to 999 reported cases in the most recent 5 calendar years while the other half is divided among a second tier of communities with 1,000 to 1,999 reported cases in that period. The funding is then allocated within each tier by the proportion of reported cases in the most recent 5 calendar years in each community. 
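The tiered split just described can be sketched as follows. The helper function and the per-community case counts are hypothetical, though the tier totals and the $10 million pool match the fiscal year 2004 figures reported in the text.

```python
# Illustrative sketch of the two-tier Emerging Communities allocation: half
# of the pool goes to each tier, and each community is then paid in
# proportion to its reported AIDS cases within its tier.

def tier_awards(half_pool, community_cases):
    """Split half of the pool among one tier's communities by case share."""
    total = sum(community_cases)
    return [half_pool * c / total for c in community_cases]

# Hypothetical per-community counts summing to the reported tier-two total
# of 4,754 cases, sharing half of the $10 million pool.
tier2_cases = [1_000, 1_200, 1_254, 1_300]
awards = tier_awards(5_000_000, tier2_cases)
per_case = [round(a / c) for a, c in zip(awards, tier2_cases)]
print(per_case)  # every tier-two community receives about $1,052 per case
```

Because each tier receives a fixed half of the pool regardless of how many cases it contains, the per-case amount differs sharply between tiers whenever the tier totals differ, which is the disparity quantified in the next passage.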
In fiscal year 2004, the two-tiered structure of Emerging Communities funding led to large differences in funding per case because the total number of AIDS cases in each tier was not equal. Twenty-nine communities qualified for Emerging Communities grants in fiscal year 2004. Four of these communities had between 1,000 and 1,999 reported cases and 25 communities had between 500 and 999 cases. This meant that 4 communities with a total of 4,754 reported cases split $5 million while 25 communities with a total of 15,994 cases split the remaining $5 million. This resulted in the 4 communities receiving $1,052 per reported case while the other 25 received $313 per reported case. These 4 communities received 236 percent more funding per case than the other 25. If the total $10 million Emerging Communities funding had been distributed equally per case among the communities, each would have received $482 per reported case. Table 3 lists the 29 emerging communities along with their AIDS case counts and funding. Hold-Harmless Provisions and Grandfather Clause Benefit Certain Grantees Titles I and II of the CARE Act both contain provisions that benefit certain grantees by protecting their funding levels. Title I has a hold-harmless provision that guarantees that the Title I base grant allocated to an EMA will be at least as large as a legislated percentage of a previous year’s funding. The Title I hold-harmless provision has primarily benefited one EMA. Title I also contains a grandfather clause that has resulted in a large number of EMAs maintaining funding despite no longer meeting the eligibility criteria. One hold-harmless provision for Title II ensures that the total of Title II and ADAP base grants awarded to a state will be at least as large as the total of these grants it received the previous year. 
This provision has had little impact thus far, but it has the potential to reduce the amount of funding to states with severe need in ADAPs because it is funded out of amounts reserved for that purpose. The hold-harmless provision and the grandfather clause in Title I and the hold-harmless provisions in Title II protect grantees from decreases in funding from one year to the next, but they also make it more difficult to shift funding in response to geographic movement of the disease. Title I Hold-Harmless Provision Has Primarily Benefited One EMA In fiscal year 2004, the Title I hold-harmless provision primarily benefited the San Francisco EMA. The hold-harmless provision guarantees each EMA a specified percentage, as legislated by the CARE Act, of the base grant it received in a previous year regardless of how much a grantee’s caseload may have decreased in the current year. An EMA’s base funding is determined according to its proportion of AIDS cases. If an EMA qualifies for hold-harmless funding, that amount is added to the base funding and distributed together as the base grant. The San Francisco EMA received $7,358,239 in hold-harmless funding, or 91.6 percent of the hold-harmless funding that was distributed. The second largest beneficiary was Kansas City, which received $134,485, or 1.7 percent of the hold-harmless funding. Table 4 lists the fiscal year 2004 hold-harmless beneficiaries. The funding impact of the hold-harmless provision varies among the EMAs that benefit but it can be substantial. In order to place hold-harmless funding in perspective, it is helpful to consider how much of an EMA’s Title I base grant was made up of hold-harmless funding. EMAs that did not receive hold-harmless funding received approximately $1,221 in base grant funding per AIDS case. Fiscal year 2004 base grant funding per AIDS case in EMAs that received hold-harmless funding ranged from $1,223 (Newark) to $2,241 (San Francisco).
Thus, San Francisco received $1,020 more in base grant funding per AIDS case than did EMAs that did not receive hold-harmless funding. This hold-harmless funding represents approximately 46 percent of San Francisco’s base grant. Because of its hold-harmless funding, San Francisco, which had 7,216 AIDS cases in fiscal year 2004, received a base grant equivalent to what an EMA with approximately 13,245 AIDS cases (84 percent more) would have received based on the proportion of cases. Kansas City, the second largest hold-harmless grantee, received about what an EMA with 9 percent more AIDS cases would have received. The San Francisco EMA’s 2004 hold-harmless funding was linked to cumulative AIDS cases used to determine fiscal year 1995 funding. In fiscal year 2004 San Francisco was guaranteed to receive 89 percent of its fiscal year 2000 Title I base grant, but San Francisco’s 2000 allocation was also held harmless under the 1996 CARE Act reauthorization. Under the 1996 reauthorization, EMAs were guaranteed 95 percent of their 1995 base grant in fiscal year 2000. San Francisco was the only EMA to qualify for hold-harmless funding in 2000 because it was the only EMA that would have received less than 95 percent of its fiscal year 1995 base grant. This means that in fiscal year 2004 San Francisco was guaranteed approximately 85 percent of its fiscal year 1995 base grant of $19,126,679. Prior to the 1996 reauthorization, funding was distributed among EMAs on the basis of the cumulative count of diagnosed AIDS cases (that is, all cases reported in an EMA both living and deceased since the beginning of the epidemic in 1981). Because the application of the Title I hold-harmless provision for San Francisco dates back to the 1996 reauthorization, San Francisco’s Title I base grant is determined in part by the number of cumulative cases in the San Francisco EMA as of 1995.
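The chained guarantee described above reduces to simple arithmetic: 89 percent of a floor that was itself 95 percent of the fiscal year 1995 grant. A minimal sketch, using the $19,126,679 base grant cited in the text:

```python
# Sketch of the chained hold-harmless guarantee. San Francisco's FY2000
# base grant was floored at 95 percent of its FY1995 grant (1996
# reauthorization), and its FY2004 grant at 89 percent of FY2000 (2000
# reauthorization), so the FY2004 floor is roughly 85 percent of FY1995.

fy1995_base = 19_126_679
fy2000_floor = 0.95 * fy1995_base   # 1996 reauthorization guarantee
fy2004_floor = 0.89 * fy2000_floor  # 2000 reauthorization guarantee

print(fy2004_floor / fy1995_base)   # about 0.8455, i.e., roughly 85 percent
print(round(fy2004_floor))          # the dollar floor implied by the chain
```

This is why the report can tie San Francisco's 2004 grant back to its cumulative 1995 caseload: the 2004 floor is a fixed multiple of the 1995 award.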
Grandfathering Maintains Eligibility for EMAs That No Longer Meet Certain Eligibility Criteria More than one half of the EMAs received Title I funding in fiscal year 2004 even though they were below Title I eligibility thresholds. These EMAs’ eligibility was protected under a CARE Act grandfather clause. Under a grandfather clause established by the 1996 amendments to the CARE Act, once a metropolitan area’s eligibility is established, the area remains eligible for Title I funding even if the number of reported cases in the most recent 5 calendar years drops below the statutory threshold. We found that in fiscal year 2004, 29 of the 51 EMAs did not meet the eligibility thresholds, but their Title I funding was protected by a grandfather clause (see table 5). The number of reported AIDS cases in the most recent 5 calendar years in the 29 EMAs ranged from 223 to 1,941. Title I funding awarded to these 29 EMAs was about $116 million, or approximately 20 percent of the total Title I funding. As discussed earlier, some metropolitan areas are designated as emerging communities because their caseloads are not large enough to make them eligible for Title I funding as EMAs. However, some emerging communities had more reported AIDS cases in the last 5 years than some of the EMAs that have been grandfathered. For example, for fiscal year 2004 Memphis, a designated emerging community, had 1,588 reported AIDS cases during the most recent 5 calendar years, which is more than the number of cases reported in 26 EMAs. This results in variability in funding per case caused by grandfathering EMAs. Title II Hold-Harmless Funding Could Diminish ADAP Severe Need Grants in the Future A Title II hold-harmless provision could diminish ADAP Severe Need grant amounts in the future because the provision and the grants are funded from the same set-aside of funds. 
If larger amounts are needed to fund the hold-harmless provision in the future, the Severe Need grant states could get less than the grant amounts they would otherwise receive. Fiscal year 2004 was the first time that any states triggered this Title II hold-harmless provision, which was established by the 2000 amendments. Severe Need grants are funded by setting aside 3 percent of the total CARE Act Title II funding for ADAPs. The Title II hold-harmless provision, also funded by the 3 percent set-aside for Severe Need grants, guarantees that the total of Title II and ADAP base grants made to a state will be at least as large as the grants made the previous year. In fiscal year 2004 eight states became eligible for this hold-harmless funding. To provide these jurisdictions with hold-harmless funding, HRSA officials told us they used funds from the 3 percent set-aside for Severe Need grants. In 2004, the 3 percent set-aside for Severe Need grants was $22.5 million. Of these funds, $1.6 million, or 7 percent, was used to provide this Title II hold-harmless protection. (See table 6.) The remaining $20.8 million, or 93 percent of the set-aside amount, was distributed in Severe Need grants. The potential exists for this Title II hold-harmless provision to diminish the size of Severe Need grants in the future if larger amounts are needed to fund the hold-harmless protections. The total amount of Severe Need grant funds available in fiscal year 2004 to distribute among the eligible states was less than it would have been without the hold-harmless deduction. In fiscal year 2004, not all 25 of the states eligible for Severe Need grants made the required match to receive the grant. Consequently, the Severe Need grants received by each participating state were no smaller than the grants they would have received had all eligible states made the match.
In future years, if all of the eligible states make the match, and if there are also states that qualify to receive hold-harmless funds, the Severe Need grant states would get less than the amounts they would have otherwise received. Funding Impact of Using HIV Case Counts Would Depend on the Adequacy of HIV Reporting Systems and the Number of Reported HIV Cases If HIV case counts had been used with AIDS case counts in allocating Title II base funding, about half of the states would have received increased funding and the other half would have received less funding. Under the 2000 CARE Act reauthorization, HIV case counts are required to be included in CARE Act funding formulas no later than fiscal year 2007. While all states have established HIV case reporting systems, there are currently characteristics of these systems that limit the use of HIV case counts in the distribution of CARE Act funds. In order to gauge the funding impact of using the data as they currently exist, we developed two theoretical approaches for doing so. Using these two approaches, we found that some fiscal year 2004 Title II base funding would have shifted to southern states if HIV case counts had been used with AIDS case counts in the distribution of funds. We also found that funding would tend to shift to jurisdictions with older HIV reporting systems, regardless of their location. Changes in funding due to the inclusion of HIV cases would be largely offset, at least initially, if the funding formulas retained hold- harmless and minimum grant provisions. Current HIV Case Reporting Systems Have Limitations for Providing Case Counts for Funding Allocations In its 2004 report, IOM identified several limitations in the ability of states to provide HIV case counts for use in CARE Act funding allocations. Among these limitations, IOM found that the maturity of HIV case reporting systems varies widely across states. 
The earliest HIV reporting systems were established in Colorado, Minnesota, and Wisconsin in 1985, while five jurisdictions have implemented their systems since 2003. Case reporting systems need time to become fully mature and operational, and it takes time to make practitioners aware of the requirement to report new HIV cases and the methods for doing so. Existing cases also need to be reported and entered into the system. States with newer systems may not have collected and entered data on existing cases, and, consequently, may underreport the number of HIV cases in the state. Underreporting of HIV cases could result in jurisdictions receiving less funding than they would be entitled to based on the actual number of HIV and AIDS cases. IOM also found that differences in how states report HIV case counts to CDC could preclude their use in the distribution of CARE Act funds. Some state HIV case reporting systems are name-based while others are code-based. Currently, CDC will only accept name-based case counts. Therefore, state-reported HIV cases that use codes rather than names would not be counted in allocating CARE Act funds, if HIV case counts were used in funding formulas. Twelve states, the District of Columbia, and Philadelphia, PA, have some form of a code-based system rather than a name-based system. CDC does not accept the code-based data principally because methods have not been developed to make certain that a code-reported HIV case is only being counted once across all reporting jurisdictions. Table 7 shows whether state HIV case counts are accepted by CDC and the year in which each state established its HIV reporting system. The Use of HIV Case Counts in Funding Formulas Would Change the Distribution of CARE Act Funds While we are aware of some of the limitations of HIV data, we used two approaches to examine the potential impact of using HIV cases in addition to AIDS cases on fiscal year 2004 Title II base grant distributions.
We conducted this analysis in light of the CARE Act requirement that HIV case counts be used for the distribution of Title I and Title II formula grants no later than fiscal year 2007. Some CARE Act fiscal year 2004 funding would have shifted if HIV and AIDS case counts had been used to allocate the funds. Our analyses indicate that at most 14 percent of CARE Act Title II base funding would have shifted, with southern states being the primary beneficiaries. Changes could have resulted from the number of reported HIV cases and AIDS cases in each jurisdiction or differences in state HIV case reporting systems. However, many of the funding changes in our model would have been negated if we had applied hold-harmless and minimum grant provisions. Methodological Approaches Used We used two approaches to examine the impact of using HIV cases in addition to AIDS cases on funding for Title II base grants in the 50 states, the District of Columbia, and Puerto Rico. We chose Title II base grants to illustrate the effect of using HIV case counts in funding formulas. Under the first approach, we used HIV case counts in addition to AIDS case counts for the 36 jurisdictions from which CDC accepted HIV data. We then supplemented these data with only the AIDS case counts CDC received from the other jurisdictions because CDC does not accept their HIV data. Consequently, for some states and metropolitan areas we used HIV and AIDS case counts, but for others we used only AIDS case counts. This approach reflects the data that would be used if funding allocations were based on the HIV and AIDS case counts currently received by CDC. Under the second approach, we used the same HIV and AIDS case counts for the 36 jurisdictions as our first approach, but supplemented these data with the HIV case counts collected by the other 15 states and the District of Columbia from which CDC did not accept HIV data. We obtained these HIV case counts directly from these jurisdictions. 
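The allocation step common to both approaches can be sketched as a simple proportional split: each jurisdiction's share of the base pool equals its share of whatever cases are counted for it under the approach (HIV plus AIDS where CDC accepts the HIV data, AIDS only elsewhere). The jurisdiction names, case counts, and pool size below are hypothetical.

```python
# Minimal sketch of the proportional reallocation used in both approaches:
# a jurisdiction's grant is the pool times its share of counted cases.
# All names and numbers are hypothetical.

def allocate(pool, counted_cases):
    """counted_cases: jurisdiction -> cases counted under the approach."""
    total = sum(counted_cases.values())
    return {j: pool * c / total for j, c in counted_cases.items()}

# A name-based state contributes HIV plus AIDS cases; a code-based state,
# whose HIV counts CDC does not accept, contributes AIDS cases only.
cases = {"name_based_state": 3_000 + 2_000,   # HIV + AIDS
         "code_based_state": 0 + 4_000}       # AIDS only
grants = allocate(9_000_000, cases)
print(grants)  # the code-based state's share reflects only its AIDS cases
```

The sketch makes the first approach's bias visible: a jurisdiction whose HIV counts are excluded competes for the pool with only part of its caseload.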
For both approaches, we calculated the percentage of cases in each jurisdiction and estimated the fiscal year 2004 Title II base grant that each would have received. Our initial analyses assume that funding was distributed equally per AIDS case and that there were no hold-harmless or minimum grant provisions. We then estimated the impact of the hold-harmless and minimum grant provisions. Although there are limitations associated with each of the approaches, they indicate the general impact of using HIV and AIDS cases to distribute all CARE Act formula funding. Impact on Title II Base Grants Both approaches indicated that there would be some shifting of funds if HIV and AIDS case counts had been used to allocate CARE Act Title II base grants, with southern jurisdictions generally being among the areas that would have received increased funding. Under the first approach — using HIV and AIDS cases from 36 jurisdictions and only AIDS cases from 16 jurisdictions — about 14 percent or $38.9 million of Title II base grants would have shifted among grantees. Twenty-seven grantees would have received additional funding in their Title II base grants if HIV and AIDS cases had been used to allocate funding instead of just AIDS cases. Of the 27 that would have received more funding, 12 were in the South. Jurisdictions outside the South that would have received more funding include Colorado, New Jersey, and Ohio. All 3 would each have received more than $2 million in additional funding. Funding increases would have ranged from less than $50,000 in Iowa to almost $5 million in North Carolina, or from less than 5 to almost 100 percent. Twenty-five grantees would have received less funding. California, Georgia, and Illinois would have received the largest decreases in Title II base grants. Decreases would have ranged from about $100,000 in Idaho and Wyoming to almost $12 million in California.
Percentage decreases would have ranged from less than 5 percent in New York to almost 80 percent in Montana. The second approach — including the code-based HIV counts — yields a smaller shift in funding. Under this approach, approximately 10 percent or $28.4 million of fiscal year 2004 Title II base grants would have shifted. Of the 26 grantees that would have received additional funding, 11 are in the South. Funding increases for these 26 grantees would have ranged from less than $50,000 in Maine to about $4 million in North Carolina, or from 5 percent in Washington to 80 percent in Colorado. Among the states benefiting from this funding approach, Maryland, North Carolina, and Virginia would each have received increases of more than $2 million. Twenty-six grantees would have received less funding. California, New York, and Georgia would have received the largest decreases. Decreases would have ranged from less than $50,000 in Iowa to $5 million in California. Percentage decreases would have ranged from less than 5 percent in Florida, Illinois, New Mexico, and Utah to 65 percent in North Dakota. Appendix II shows the results of these analyses for each state. Differences in Case Reporting Systems Would Affect Distributions One explanation for the changes in funding allocations when HIV and AIDS cases are used instead of only AIDS cases is the maturity of state HIV case reporting systems. We found that those states that would benefit from the use of HIV cases tend to be those with the oldest HIV case reporting systems. Those states with the oldest reporting systems include 11 southern states whose HIV reporting systems were implemented prior to 1995. As shown in table 8, states with long histories of collecting HIV case counts tend to have many more HIV cases compared with their number of AIDS cases than do states with less mature reporting systems.
This is likely because states with newer systems do not have reports on many cases of HIV diagnosed before their reporting systems were established. This can be illustrated by comparing Wisconsin and Delaware, 2 states with similar numbers of AIDS cases. Wisconsin began reporting HIV cases in 1985 while Delaware began in 2001. As of June 2003, Delaware’s 909 reported HIV cases were about 40 percent fewer than its 1,518 reported AIDS cases. In Wisconsin, there were about 50 percent more reported HIV cases than AIDS cases: 2,287 HIV cases compared with 1,507 AIDS cases. This variability could be reduced as Delaware identifies more preexisting HIV cases. However, the variability between HIV cases and AIDS cases would remain if there was a difference in the actual number of HIV cases. Under either approach, jurisdictions that would receive increased funding allocations because of the use of HIV and AIDS case counts might do so because other jurisdictions did not yet have an accurate measure of HIV case counts. The larger the proportion of HIV cases within the total number of HIV and AIDS cases in a jurisdiction, the more a jurisdiction would benefit from the use of HIV cases in funding allocations. However, this increased funding could simply be the effect of a state’s older reporting system, and not necessarily due to actual differences in the number of HIV cases. IOM has reported that it could take from 18 months to several years after the implementation of an HIV reporting system before there would be valid estimates of the number of people living with HIV. However, table 8 suggests that it could take even longer to get accurate case counts. The data in table 8 suggest that as an HIV case reporting system matures, it will record a higher ratio of HIV cases to AIDS cases. One state official we spoke with said that it could take 5 to 6 years before a reporting system’s HIV case counts were complete.
Changes in Funding Would Be Limited Initially if Certain Formula Provisions Were Maintained Changes in funding caused by shifting to HIV and AIDS cases would be negated, at least initially, if the current hold-harmless or minimum grant provisions were maintained. Consider a state that received $2 million in its Title II CARE Act base grant award based on its AIDS case count. In the following year, the formula is changed so that HIV and AIDS cases are used to determine funding allocations, and the state is then entitled to only $1 million. However, a hold-harmless provision guarantees the state 98 percent of what it received the previous year. The state would therefore receive 98 percent of its $2 million allocation, or $1.96 million, largely offsetting the reduction in funding due to the shift to HIV and AIDS cases. Minimum award amounts could also affect the impact of using HIV and AIDS counts. If a jurisdiction qualified for $100,000 in formula funding using HIV and AIDS case counts, but the minimum award was $500,000, the jurisdiction would not receive less funding because of the change. Under our first approach, 5 percent of Title II base grants would shift among grantees if the hold-harmless and minimum grant provisions were maintained, compared with 14 percent if they were not. Under our second approach, 4 percent would shift instead of 10 percent. California, which would have had large reductions under both approaches if the hold-harmless provision were not maintained, would have had no change in funding under either approach if the current hold-harmless provisions were maintained. Appendix III shows the results of these analyses for each state.
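The interaction of the two floors in the examples above can be sketched as a single calculation. This is an illustration only, assuming the 98 percent hold-harmless rate and $500,000 minimum from the text; the function name and parameter defaults are ours, not the actual CARE Act formula:

```python
def title_ii_award(formula_amount, prior_year_award,
                   hold_harmless_pct=98, minimum_award=500_000):
    """Apply a hold-harmless floor (a percentage of the prior-year award)
    and a minimum-award floor to a formula-derived grant amount.
    Illustrative sketch only; integer dollars throughout."""
    hold_harmless_floor = prior_year_award * hold_harmless_pct // 100
    return max(formula_amount, hold_harmless_floor, minimum_award)

# State entitled to $1 million under the new formula after receiving
# $2 million the prior year: the hold-harmless floor dominates.
print(title_ii_award(1_000_000, 2_000_000))  # 1960000

# Jurisdiction qualifying for only $100,000 under the formula:
# the minimum-award floor dominates.
print(title_ii_award(100_000, 400_000))      # 500000
```

Note that some states have a $200,000 rather than $500,000 minimum, so the `minimum_award` default would vary by jurisdiction.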
State ADAP Eligibility Criteria and Funding Sources Vary Widely Among state ADAP programs, there is wide variation in the eligibility criteria used to determine who is covered for ADAP services and in the funding sources available beyond each state’s Title II ADAP base grant. States have flexibility in determining their ADAP program eligibility standards, including the income eligibility ceilings for ADAP clients, caps on spending per client, and the HIV and AIDS drugs included in their formulary. As a result, an individual eligible for ADAP services in one state may not be eligible in another. There is also wide variability in the additional funding sources that ADAPs may receive to help fund their programs. Beyond each state’s Title II ADAP base grant for providing HIV and AIDS medications and related services, additional ADAP funding sources may include Title II Severe Need grants, non-federal transfers of Title II state or Title I EMA funds, state contributions, and other funding sources. States with waiting lists for ADAP services do not fit any particular pattern of eligibility criteria and funding sources. Eligibility Criteria Contribute to Coverage Differences Among States States set different eligibility criteria for their ADAP programs, so a person with HIV or AIDS at a certain income level and needing medication assistance may be an eligible ADAP client in one state, but not in another. Eligibility also varies among state Medicaid programs, which may provide HIV and AIDS services and drug assistance. The interaction between these two programs can affect which clients are eligible for ADAP services, and many individuals seeking ADAP coverage may not be aware that they are eligible for drug assistance through Medicaid. One eligibility requirement where there is considerable variation among state ADAPs is the client income ceiling. 
The income ceilings among 52 state ADAPs for fiscal year 2004 ranged from the most restrictive at 125 percent of the federal poverty level, or $11,638, in North Carolina to the most generous at 556 percent, or $51,764, in Massachusetts. Eleven states had eligibility ceilings at 200 percent or less of the poverty level. Another eligibility criterion where there is wide variation among state ADAPs is the number of HIV and AIDS drugs covered under a state program’s drug formulary. The number of drugs included in ADAP formularies in fiscal year 2004 varied widely from Colorado with 20 drugs to four state ADAPs—Massachusetts, New Hampshire, New Jersey, and Washington—with open drug formularies. Thirty-nine ADAPs had 100 or fewer drugs, including 15 with fewer than 50 drugs on their formularies. The CARE Act allows states to purchase health insurance to cover HIV and AIDS drugs for their clients. HRSA requires an ADAP to demonstrate that the insurance includes coverage for drugs comparable to those on the state’s ADAP formulary. Determining whether an individual is eligible for state ADAP or state Medicaid services is important because the ADAPs serve as the individual’s HIV and AIDS drug assistance program of last resort. Medicaid programs provide HIV and AIDS health care services, including medications, to eligible disabled individuals with low incomes. If an individual is eligible for a state’s Medicaid drug assistance, the state ADAP should not provide the same services under its program. Twenty-three ADAPs reported requiring clients to have been denied Medicaid eligibility before the ADAP will cover them. 
To ensure that a prospective or current ADAP client is not eligible to be served by Medicaid, 42 of the 52 state ADAPs reported in ADAP grant year 2004 that they used a case manager review process to monitor an ADAP client's Medicaid eligibility, and 40 of the 52 ADAPs also reported using computer access to eligibility determinations to verify a client's Medicaid and ADAP eligibility. Because it is important to ensure continuing therapy for HIV and AIDS clients once they begin taking medications, states may limit the number of ADAP clients they serve to prevent a budget shortfall. This could result in eligible clients being placed on an ADAP waiting list. States also use a variety of ADAP eligibility restrictions to limit the number of clients they serve. Of the 52 state ADAPs, 36 reported eligibility restrictions for ADAP grant year 2004, and 20 of the 36 used more than one. The restrictions used most often were (1) an annual cap on individual incomes, by 20 ADAPs; (2) a limitation on an individual's assets, by 16 ADAPs; (3) capping ADAP enrollment, by 7 ADAPs; (4) sliding-scale copayments paid by individuals, by 7 ADAPs; and (5) capping the amount expended per client for all HIV and AIDS drugs, by 6 ADAPs. Appendix IV provides a state-by-state summary of the reported restrictions. A Large Percentage of ADAPs' Funds Is Received from Sources Other than the ADAP Base Grant In addition to their Title II ADAP base grants, 46 of the 52 state ADAPs received funding from other sources for their programs in fiscal year 2004. There were five sources of additional funding across these 46 state ADAPs: (1) $20.8 million in Title II Severe Need grants (including $4.5 million in state match funds), (2) $26.9 million from Title II state funding transfers, (3) $10.9 million from Title I EMA funding transfers, (4) $194.8 million in state contributions, and (5) $169.3 million in other funds.
When the additional funding source totals are compared among states as a percentage of the ADAP's CARE Act base grant, and as an amount per AIDS case, there is a significant range among the states. Appendix V provides a state-by-state summary of additional ADAP funding and the base grant and per AIDS case comparisons. Among the state ADAPs that received funding from sources other than their Title II base grant award: Sixteen of the 25 states eligible for ADAP Severe Need grants received grant amounts ranging from about $37,000 in Montana to about $6 million in Texas; states eligible for these grants must agree to match 25 percent of the funds. Eighteen ADAPs reported receiving transfers from their states' Title II base grants, ranging from about $65,000 in Maryland to $12.2 million in California. Nine of the 24 states with EMAs reported receiving Title I fund transfers from their EMAs for their ADAPs, ranging from more than $65,000 for Nevada to about $6 million for New York. Thirty-five ADAPs reported receiving state contributions, ranging from about $8,000 in Ohio to about $64 million in California. Thirty-two ADAPs reported other funding sources, ranging from about $7,000 in Montana to $64.5 million in New York; other funding sources include additional funds from drug rebates and HRSA-approved carryover of ADAP CARE Act funds from one year to the next. Among states with additional funding sources, there is a significant range in amounts per AIDS case and percentages of the ADAP base grants. The highest amount of additional funding received per AIDS case was $3,604, or 171 percent of the base grant, in Idaho; the lowest was $61 per AIDS case, or 3 percent of the base grant, in the District of Columbia. ADAPs in six states did not receive any additional funding: Iowa, New Hampshire, New Mexico, Tennessee, Utah, and Wyoming.
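The two comparison measures used above can be sketched as follows. This is a minimal illustration; the function name is ours, and the example figures are hypothetical rather than any particular state's actual amounts:

```python
def funding_comparisons(additional_funds, base_grant, aids_cases):
    """Express a state ADAP's additional funding two ways:
    per living AIDS case, and as a percentage of its Title II
    ADAP base grant. Illustrative sketch only."""
    per_case = additional_funds / aids_cases
    pct_of_base = 100 * additional_funds / base_grant
    return per_case, pct_of_base

# Hypothetical state: $1.2 million in additional funds,
# an $800,000 base grant, and 500 living AIDS cases.
per_case, pct = funding_comparisons(1_200_000, 800_000, 500)
print(per_case)  # 2400.0 dollars per AIDS case
print(pct)       # 150.0 percent of the base grant
```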
Eligibility Criteria and Funding Sources Also Vary Among States with Waiting Lists During fiscal years 2002 through 2004, some states kept people eligible for ADAP services on waiting lists, and the set of states with ADAP waiting lists remained relatively static over those years. Sixteen of the 52 states, or about one-third, had ADAP waiting lists for at least 1 month during these 3 years. Seven of the 16 states had ADAP waiting lists in all 3 years. (See table 9.) The funding sources and eligibility criteria for states with waiting lists have varied just as considerably as for states without waiting lists, and there is no clear pattern between a state's funding sources or eligibility criteria and the existence of a waiting list. While 33 states that received additional funds did not have an ADAP waiting list in 2004, 13 of the 14 states with waiting lists also received additional funding beyond their ADAP base grant. For example, for Title II Severe Need grants: Eight of the 16 states that received Severe Need grants had waiting lists, and three of the 9 eligible states that did not apply for Severe Need grants in 2004 (Alaska, Iowa, and South Dakota) also had ADAP waiting lists. Title I EMA transfers: One state ADAP of the nine that received a Title I transfer, Colorado, had an ADAP waiting list. Title II state transfers: Eight of the 18 ADAPs receiving Title II transfers had waiting lists. State funds: Nine of the 35 ADAPs that received state funds had waiting lists. Other funding: Of the 32 ADAPs reporting other funding sources, 10 had ADAP waiting lists. Of the 14 states with ADAP waiting lists, 5 were among the top 10 for additional funding per AIDS case received: Idaho (1), South Dakota (2), Oregon (3), North Carolina (7), and Colorado (8).
The remaining 9 states with waiting lists and their per AIDS case ranks were Montana (12), Alabama (18), Nebraska (23), Indiana (24), West Virginia (28), Kentucky (33), Arkansas (34), Alaska (42), and Iowa, which received no additional funds. There also appears to be no clear pattern between eligibility criteria, such as a low income eligibility ceiling or a limited drug formulary, and a waiting list of clients that a state ADAP deems eligible but is unable to serve. For example: Client income eligibility levels: North Carolina, with the most restrictive level at 125 percent of the poverty level, had a waiting list, while Massachusetts, with the most generous level at 556 percent, had no waiting list. Eligibility restrictions: Among the seven ADAPs that capped their ADAP enrollment, six had waiting lists; five ADAPs that capped the amount they expend per client for all HIV and AIDS drugs included two states with waiting lists. Drug formularies: Among the 39 ADAPs with 100 or fewer drugs on their formularies, 13 had waiting lists. When eligible clients are on state ADAP waiting lists, there are limited medication assistance options available to help them until they can be served by the ADAP. HRSA officials told us that case managers, who are not ADAP employees, are to assist ADAP-eligible clients in accessing options to act as stopgaps until clients can be provided ADAP services. Among the options are pharmaceutical manufacturers' patient assistance programs that provide free or cost-reduced drugs and non-ADAP pharmacy assistance programs provided by some EMAs using their Title I funds. Concluding Observations The services provided under the CARE Act have filled important gaps in communities throughout the country, but as Congress reviews this act, we believe it is important to understand how variable this funding can be. Today I have highlighted a few of the issues that are relevant to this review.
For each of these issues, we found that the provisions of the CARE Act have affected the extent to which funds are distributed in proportion to the incidence of HIV and AIDS. It is clear that the level of funding available per case varies considerably depending upon where an individual lives. The double counting of cases in EMAs, the tiered allocation of funds to Emerging Communities, the hold-harmless provisions, and the grandfathering of EMAs have all resulted in considerably more funding going to some communities than to others with equivalent numbers of cases. The inclusion of HIV cases in the funding formulas, while improving the basis for funding allocations by reflecting cases that have not progressed to AIDS, would also result in variable funding depending upon the type and maturity of the reporting system used in each state. In addition, the flexibility given to states to shift funds, establish eligibility criteria, place limits on the medications covered, and cap enrollment has resulted in great variability in ADAP services depending upon where an individual lives. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other members of the subcommittee may have at this time. Contact and Acknowledgments For future contacts regarding this testimony, please call Marcia Crosse at (202) 512-7118. Other individuals who made key contributions include Robert Copeland, Louise Duhamel, Cathy Hamann, James McClyde, Opal Winebrenner, and Craig Winslow. Appendix I: Combined CARE Act Title I and Title II Funding by State, Fiscal Year 2004 [Table omitted: combined Title I and Title II awards, and total Title I and Title II awards per AIDS case, by state. Table notes: a state so marked received a Title II base award of $500,000 or $200,000, the minimum it could receive based on the number of AIDS cases in the state.]
Appendix II: Estimated Funding Changes Using HIV and AIDS Cases without Hold-Harmless and Minimum Grant Provisions [Table omitted: change in Title II base funding if CDC-accepted HIV case counts and AIDS case counts were used to distribute funding, and change in Title II base funding if HIV case counts from all states and AIDS case counts were used to distribute funding. Table notes: a state so marked received a Title II base award of $500,000 or $200,000, the minimum it could receive based on the number of AIDS cases in the state.] Appendix III: Estimated Funding Changes Using HIV and AIDS Cases with Hold-Harmless and Minimum Grant Provisions [Table omitted: the same funding-change columns as in appendix II, calculated with the hold-harmless and minimum grant provisions maintained. Table notes: a state so marked received a Title II base award of $500,000 or $200,000, the minimum it could receive based on the number of AIDS cases in the state.] Appendix IV: ADAP Program Eligibility Restrictions Reported by 52 ADAPs, ADAP Grant Year 2004 [Table omitted.] Appendix V: Additional ADAP Funding and its Percentage of the CARE Act Title II ADAP Base Grants and per AIDS Case by State [Table omitted. Table notes: a state so marked was not eligible for a grant; a state so marked did not have an EMA.] Related GAO Products Ryan White CARE Act: Title I Funding for San Francisco. GAO/HEHS-00-189R. Washington, D.C.: August 24, 2000. Ryan White CARE Act: Opportunities to Enhance Funding Equity. GAO/T-HEHS-00-150. Washington, D.C.: July 11, 2000. HIV/AIDS: Use of Ryan White CARE Act and Other Assistance Grant Funds. GAO/HEHS-00-54. Washington, D.C.: March 1, 2000. HIV/AIDS Drugs: Funding Implications of New Combination Therapies for Federal and State Programs. GAO/HEHS-99-2. Washington, D.C.: October 14, 1998. Revising Ryan White Funding Formulas. GAO/HEHS-96-116R. Washington, D.C.: March 26, 1996. Ryan White CARE Act of 1990: Opportunities to Enhance Funding Equity. GAO/HEHS-96-26. Washington, D.C.: November 13, 1995. Ryan White CARE Act: Access to Services by Minorities, Women, and Substance Abusers. GAO/T-HEHS-95-112. Washington, D.C.: July 17, 1995. Ryan White CARE Act of 1990: Opportunities Are Available to Improve Funding Equity. GAO/T-HEHS-95-126. Washington, D.C.: April 5, 1995. Followup on Ryan White Testimony. GAO/HEHS-95-119R. Washington, D.C.: March 31, 1995. Ryan White CARE Act of 1990: Opportunities Are Available to Improve Funding Equity. GAO/T-HEHS-95-91. Washington, D.C.: February 22, 1995. Ryan White Funding Formulas. GAO/HEHS-95-79R. Washington, D.C.: February 14, 1995. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Ryan White Comprehensive AIDS Resources Emergency Act (CARE Act) was enacted in 1990 to respond to the needs of individuals and families living with the Human Immunodeficiency Virus (HIV) or Acquired Immunodeficiency Syndrome (AIDS). In fiscal year 2004, over $2 billion in funding was provided through the CARE Act, the majority of which was distributed through Title I grants to eligible metropolitan areas (EMA) and Title II grants to states, the District of Columbia, and territories. Titles I and II use formulas to distribute grants according to a jurisdiction's reported count of AIDS cases. Title II includes grants for state-administered AIDS Drug Assistance Programs (ADAP), which provide medications to HIV-infected individuals. GAO was asked to discuss the distribution of funding under the CARE Act. This testimony presents preliminary findings on (1) the impact of CARE Act provisions that distribute funds based upon the number of AIDS cases in metropolitan areas, (2) the impact of CARE Act provisions that limit annual funding decreases, (3) the potential shifts in funding among grantees if HIV case counts were incorporated with the AIDS cases that are currently used in funding formulas, and (4) the variation in eligibility criteria and funding sources among state ADAPs. Under the CARE Act, GAO's preliminary findings show that the amount of funding per AIDS case varied among states and metropolitan areas in fiscal year 2004. Some CARE Act provisions that distribute funds based on the AIDS case count within metropolitan areas result in differing amounts of funding per case. In particular, when a state or territory has an EMA within its borders, the cases within that EMA are counted twice during the distribution of CARE Act funds--once to determine the EMA's funding under Title I, and once again to determine a state's Title II grant. 
The hold-harmless provisions under Titles I and II guarantee a certain percentage of a previous year's funding amount, thus sustaining the funding levels of CARE Act grantees based upon previous years' measurements of AIDS cases. Title I's hold-harmless provision for EMAs has primarily benefited the San Francisco EMA, which received over 90 percent of the fiscal year 2004 Title I hold-harmless funding. San Francisco alone continues to have deceased cases factored into its allocation, because it is the only EMA with hold-harmless funding that dates back to the mid-1990s, when formula funding was based on the cumulative count of diagnosed AIDS cases. If HIV case counts had been incorporated with AIDS cases in allocating Title II funding to the states in fiscal year 2004, about half of the states would have received an increase in funding and half would have received less. Many of the states receiving increased funding would have been in the South, a region that includes 7 of the 10 states with the highest estimated rates of individuals living with HIV. However, wide variation in the maturity of states' HIV reporting systems could limit the adequacy of their HIV case counts for the distribution of CARE Act funding. Among state ADAPs, there is wide variation in the criteria used to determine who is eligible for ADAP medications and services, and in the additional funding received beyond the Title II grant for each state ADAP. States have flexibility to determine what drugs they will cover for their ADAP clients and what income level will entitle a person to eligibility, among other criteria, and the resulting variation can contribute to client coverage differences among state ADAPs. There is similar variation in additional funding sources and eligibility criteria among states that have established waiting lists for eligible clients.
The Centers for Disease Control and Prevention and the Health Resources and Services Administration provided comments on the facts contained in this testimony and GAO made changes as appropriate.
Background The Coast Guard, which became part of the Department of Homeland Security on March 1, 2003, has a wide variety of both security and nonsecurity missions. (See table 1.) The Coast Guard's equipment includes 141 cutters, approximately 1,400 small patrol and rescue boats, and about 200 aircraft. Coast Guard services are provided in a variety of locations, including ports, coastal areas, the open sea, and other waterways such as the Great Lakes and the Mississippi River. The Coast Guard's installations range from small boat stations providing search and rescue and other services to marine safety offices that coordinate security and other activities in the nation's largest ports. As an organization that is also part of the armed services, the Coast Guard has both military and civilian positions. At the end of fiscal year 2002, the agency had over 42,000 full-time positions—about 36,000 military and about 6,600 civilian. The Coast Guard also has about 7,200 reservists who support the national military strategy and provide additional operational support and surge capacity during emergencies, such as natural disasters. In addition, about 36,000 volunteer auxiliary personnel assist in a wide range of activities, from search and rescue to boating safety education. Overall, measured in inflation-adjusted fiscal year 2003 dollars, the Coast Guard's budget grew by about 41 percent between fiscal years 1993 and 2003. However, nearly all of this growth occurred in the second half of the period: during fiscal years 1993-1998, after taking inflation into account, the budget remained essentially flat. (See fig. 1.) Significant increases have occurred since fiscal year 1998. The events of September 11th caused the Coast Guard to direct its efforts increasingly into maritime homeland security activities, highlighted by the Coast Guard's establishment of a new program area: Ports, Waterways, and Coastal Security (coastal security).
Prior to September 11th, activities related to this area represented less than 10 percent of the Coast Guard's operating budget, according to Coast Guard officials. In the fiscal year 2004 request, coastal security represents about one-quarter of the Coast Guard's operating budget. Other mission areas, most notably drug interdiction, have declined substantially as a percentage of the operating budget. Security Emphasis Continues to Affect Levels of Effort in Some Missions The emphasis the Coast Guard placed on security after September 11th has had varying effects on its levels of effort across its missions, as measured by the extent to which multiple-mission resources (cutters, other boats, and aircraft) are used for a particular mission. The most current available data show that some security-related missions, such as migrant interdiction and coastal security, have grown significantly since September 11th. Other missions, such as search and rescue and aids to navigation, remained at essentially the same levels as they were before September 11th. However, the level of effort for other missions, most notably the interdiction of illegal drugs and fisheries enforcement, is substantially below pre-September 11th levels. Missions with Increased Levels of Resources Missions such as ports, waterways, and coastal security and migrant interdiction have experienced increased levels of effort. Coastal security has seen the most dramatic increase from pre-September 11th levels. (See fig. 2.) For example, its resource hours rose from 2,400 during the first quarter of fiscal year 1999 to a peak of 91,000 during the first quarter of fiscal year 2002 (immediately after September 11, 2001), and most recently stood at nearly 37,000 for the first quarter of fiscal year 2003. In figure 2, as well as the other resource hour figures that follow, we have added a line developed using linear regression to show the general trend for the period.
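The trend lines in those figures are simple ordinary-least-squares fits to the quarterly resource-hour series. The following is a minimal sketch of such a fit; the function name is ours, and the input values are hypothetical quarters loosely anchored to the coastal security figures cited above (the intermediate quarters are invented for illustration):

```python
def linear_trend(hours):
    """Ordinary least-squares trend line through a series of
    quarterly resource-hour observations, with x = 0, 1, 2, ...
    Returns (slope, intercept)."""
    n = len(hours)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(hours) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, hours))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical quarterly resource hours for one mission area:
slope, intercept = linear_trend([2_400, 15_000, 91_000, 52_000, 37_000])
print(slope)  # 10620.0 -> a positive slope, i.e., a generally rising trend
```

As the text cautions, such a line summarizes the trend to date and should not be read as a prediction of future resource hours.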
It is important to note that while such lines depict the trend in resource hours to date, they should not be taken as a prediction of future values. Other activity indicators, such as sea marshal boardings, also demonstrate an increased emphasis in this area. Before September 11th, such boardings were not done, but as of the first quarter of 2003 there have been over 550 such boardings. Similarly, vessel operational control actions have risen by 85 percent since the fourth quarter of fiscal year 2001. Given the emphasis on homeland security, it is not surprising that efforts to interdict illegal immigrants have also increased. For example, during the first quarter of 2003, the level of effort in this area was 28 percent higher than it was for the comparable period in 1998. Missions with a Steady State of Resources Some of the Coast Guard’s traditional missions, such as aids to navigation and search and rescue, have been the least affected by the increased emphasis on security. While resource hours for both of these missions have declined somewhat since the first quarter of fiscal year 1998, the overall pattern of resource use over the past 5 years has remained consistent. Although search and rescue boats and buoy tenders were used to perform homeland security functions immediately after September 11th, their doing so did not materially affect the Coast Guard’s ability to carry out its search and rescue or aids to navigation missions. Search and rescue boats were initially redeployed for harbor patrols after the terrorist attacks, but the impact on the mission was minimal because the deployments occurred during the off-season with respect to recreational boating. Similarly, some boats that normally serve as buoy tenders—an aids to navigation function—were used for security purposes instead, but they were among the first to be returned to their former missions. 
For the first quarter of fiscal year 2003, the number of resource hours spent on these missions was very close to the number spent during the comparable quarter of fiscal year 1998. Performance measurement data further demonstrate the relatively minimal impact on these missions resulting from the Coast Guard's emphasis on homeland security. For example, for search and rescue, the Coast Guard was within about half a percentage point of meeting its target for saving mariners in distress in 2002 (84.4 percent actual, 85 percent goal). Likewise, data show that with respect to its aids to navigation mission, in 2002 the Coast Guard was about 1 percent from its goal of navigational aid availability (98.4 percent actual, 99.7 percent goal). Missions with a Decline in Resource Hours A number of missions have experienced declines in resource hours from pre-September 11th levels, including drug interdiction, fisheries enforcement (domestic and foreign), marine environmental protection, and marine safety. In particular, drug enforcement and fisheries enforcement have experienced significant declines. Compared with the first quarter of fiscal year 1998, resource hours for the first quarter of fiscal year 2003 represent declines of 60 percent for drug interdiction and 38 percent for fisheries enforcement. (See fig. 4.) In fact, resource hours for these areas were declining even before the events of September 11th, and while they briefly rebounded in early 2002, they have since continued to decline. Coast Guard officials said the recent decline in both drug enforcement and fisheries enforcement can be attributed to the heightened security around July 4, 2002, and the anniversary of the September 11th terrorist attacks, as well as the deployment of resources for military operations. They said the decline will likely not be reversed during the second quarter of 2003 because of the diversion of Coast Guard cutters to the Middle East and the heightened security alert that occurred in February 2003.
The reduction in resource hours over the last several years in drug enforcement is particularly telling. In the first quarter of fiscal year 1998, the Coast Guard was expending nearly 34,000 resource hours on drug enforcement; by the first quarter of fiscal year 2003, that figure had declined to almost 14,000 hours, a reduction of roughly 60 percent. Also, both the number of boardings to identify illegal drugs and the amount of illegal drugs seized have declined since the first quarter of fiscal year 2000. The Coast Guard's goal of reducing the flow of illegal drugs, based on the seizure rate for cocaine, has not been met since 1999. During our conversations with Coast Guard officials, they explained that the Office of National Drug Control Policy (ONDCP) set this performance goal in 1997, and although they recognize they are obligated to meet these goals, they believe the goals should be revised. Our review of the Coast Guard's activity levels in domestic fishing shows that U.S. fishing vessel boardings and significant violations identified have both declined since 2000. The Coast Guard interdicted only 19 percent as many foreign vessels as it did in 2000. The reduced level of effort dedicated to these two missions is likely linked to the Coast Guard's inability to meet its performance goals in these two areas. For instance, in 2002 the Coast Guard did not meet its goal of detecting foreign fishing vessel incursions, and while there is no target for domestic fishing violations, there were fewer boardings and fewer violations in 2002 than in 2000. Recently, the Coast Guard Commandant stated that the Coast Guard intends to return law enforcement missions (drug interdiction, migrant interdiction, and fisheries enforcement) to 93 percent of pre-September 11th levels by the end of 2003 and 95 percent by the end of 2004.
However, in the environment of heightened security and the continued deployment of resources to the Middle East, these goals will likely not be achieved, especially for drug interdiction and fisheries enforcement, which are currently far below previous activity levels.

Fiscal Year 2004 Budget Request Will Not Substantially Alter Current Levels of Effort

The Coast Guard’s budget request for fiscal year 2004 does not contain initiatives or proposals that would substantially alter the current distribution of levels of effort among mission areas. The request for $6.8 billion represents an increase of about $592 million, or about 9.6 percent in nominal dollars, over the enacted budget for fiscal year 2003. The majority of the increase covers pay increases for current or retired employees or continues certain programs already under way, such as upgrades to information technology. About $168.5 million of the increase would fund new initiatives, most of which relate either to homeland security or to search and rescue. Another $20.8 million of the increase is for the capital acquisitions request, which totals $797 million. The capital acquisition request focuses mainly on two projects—the Deepwater Project for replacing or upgrading cutters, patrol boats, and aircraft, and the congressionally mandated modernization of the maritime distress and response system.

Operating Expenses Would Increase by $440 Million

About $440 million of the $592 million requested increase is for operating expenses for the Coast Guard’s mission areas, a 10 percent increase over operating expenses in the enacted budget for fiscal year 2003. The requested increase is made up of the following:

- Pay increases and military personnel entitlements: $162.5 million.
- Funding of continuing programs and technical adjustments: $81 million. (These are multiyear programs that the Coast Guard began in previous years. Examples include continuing development of information technology projects and operating new shore facilities started with funds from previous budgets. Technical adjustments provide for the annualization of expenditures that received only partial-year funding in the prior fiscal year.)
- Reserve training: $28 million.
- New initiatives: $168.5 million. (These initiatives are described in more detail below.)

New Initiatives Relate Primarily to Search and Rescue and Homeland Security

The Coast Guard’s budget request includes three new initiatives—one for search and rescue and two for homeland security. (See table 2.) As such, these initiatives do not represent substantial shifts in current levels of effort among missions. The search and rescue initiative is part of a multiyear effort to address shortcomings in search and rescue stations and command centers. In September 2001, the Department of Transportation Office of the Inspector General reported that readiness at search and rescue stations was deteriorating. For example, staff shortages at most stations required crews to work an average of 84 hours per week, well above the standard (68 hours) established to limit fatigue and stress among personnel. The initiative seeks to provide appropriate staffing and training to meet the standards of a 12-hour watch and a 68-hour work week. The Congress appropriated $14.5 million in fiscal year 2002 and $21.7 million in fiscal year 2003 for this initiative. The amount requested for fiscal year 2004 ($26.3 million) would pay for an additional 390 full-time search and rescue station personnel and for 28 additional instructors at the Coast Guard’s motor lifeboat and boatswain’s mate schools. Coast Guard officials said the two initiatives designed mainly for homeland security purposes would help the Coast Guard in other mission areas as well.
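The operating-expense breakdown above can be cross-checked with simple arithmetic. The sketch below is illustrative only; all figures are the approximate amounts cited in the text, in millions of dollars. It confirms that the four components sum to the requested $440 million increase:

```python
# Components of the requested FY2004 operating-expense increase,
# in millions of dollars, as cited in the text.
increase_components = {
    "pay increases and military personnel entitlements": 162.5,
    "continuing programs and technical adjustments": 81.0,
    "reserve training": 28.0,
    "new initiatives": 168.5,
}

total_operating_increase = sum(increase_components.values())  # 440.0

# The rest of the $592 million total increase falls outside operating
# expenses (for example, the $20.8 million capital-acquisition increase).
remainder = 592.0 - total_operating_increase  # 152.0
```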
For example, the information-sharing effort under maritime domain awareness is designed to improve communications between cutters and land stations. It also pays for equipping cutters with the universal automated identification system, which allows the Coast Guard to monitor traffic in its vicinity, including each vessel’s name, cargo, and speed. These capabilities are important not only for homeland security missions but also for law enforcement and search and rescue, according to Coast Guard officials. Likewise, the units being added as part of the homeland security operations initiative will focus primarily on security issues but will also serve other missions, according to Coast Guard officials. For example, the new stations that would be established in Washington and Boston would be involved in search and rescue, law enforcement, and marine environmental protection.

Capital Acquisition Budget Focuses on Two Main Projects

The capital acquisition budget request for fiscal year 2004 is $797 million, an increase of $20.8 million in nominal dollars over fiscal year 2003. The majority of the request would fund two projects—the Integrated Deepwater System and the Coast Guard’s maritime distress and response system, called Rescue 21. Other acquisitions include new response boats to replace 41-foot utility boats, which serve multiple missions; more coastal patrol boats; and a replacement icebreaker for the Great Lakes. (See table 3.) At $500 million, the Deepwater Project accounts for about 63 percent of the amount requested for capital acquisitions. This project is a long-term (20- to 30-year) integrated approach to upgrading cutters, patrol boats, and aircraft as well as providing better links between air, shore, and surface assets.
When the system is fully operational, it will make the Coast Guard more effective in all of its missions, particularly law enforcement, where deepwater cutters and aircraft are key to carrying out critical functions such as drug and migrant interdiction and fisheries enforcement. Rescue 21, the second major program, provides for the modernization of the command, control, and communication infrastructure of the national distress and response system. The current system suffers from aging equipment, limited spare parts, and limited interoperability with other agencies. Of particular concern to the Coast Guard and the maritime community are the current system’s coverage gaps, which can result in missed maritime distress calls. The Congress has mandated that this system be completed by the end of fiscal year 2006. The $134 million request for fiscal year 2004 would keep the project on schedule, according to Coast Guard officials.

Significant Challenges Raise Concerns About Coast Guard’s Ability to Accomplish Its Diverse Missions

Despite the billion-dollar (19 percent) budget increase it has received over the past 2 years, the Coast Guard faces fundamental challenges in attempting to accomplish everything that has come to be expected of it. We have already described how the Coast Guard has not been able, in its current environment, to both assimilate its new homeland security responsibilities and restore other missions, such as enforcement of laws and treaties, to levels that are more reflective of past years. The fiscal year 2004 budget request does not provide substantial new funding to change these capabilities, except for homeland security and search and rescue. In addition, several other challenges further threaten the Coast Guard’s ability to balance these many missions. The first is directly tied to funding for the Deepwater Project.
The project has already experienced delays in delivery of key assets and could face additional delays if future funding falls behind what the Coast Guard had planned. Such delays could also seriously jeopardize the Coast Guard’s ability to carry out a number of security and nonsecurity missions. Similarly, for the foreseeable future, the Coast Guard must absorb a variety of new mandated homeland security tasks by taking resources from existing activities. To the extent that these responsibilities consume resources that would normally go elsewhere, other missions will be affected. Finally, in its new environment, the Coast Guard faces the constant possibility that terror alerts, terrorist attacks, or military actions would require it to shift additional resources to homeland security missions. Such challenges raise serious concerns about the Coast Guard’s ability to be “all things to all people” to the degree that the Coast Guard, the Congress, and the public desire. In past work, we have pointed to several steps that the Coast Guard needs to take in such an environment. These include continuing to address opportunities for operational efficiency, especially through more partnering; developing a comprehensive strategy for balancing resource use across all of its missions; and developing a framework for monitoring levels of effort and measuring performance in achieving mission goals. The Coast Guard has begun some work in these areas; however, addressing these challenges is likely to be a longer-term endeavor, and the success of the outcome is not clear.

Continued Funding Shortfalls Could Delay the Deepwater Project and Adversely Affect the Coast Guard’s Mission Capabilities

Under current funding plans, the Coast Guard faces significant potential delays and cost increases in its $17 billion Integrated Deepwater Project. This project is designed to modernize the Coast Guard’s entire fleet of cutters, patrol boats, and aircraft over a 20-year period.
Given the way the Coast Guard elected to carry out this project, its success is heavily dependent on receiving full funding every year. So far, that funding has not materialized as planned. Delays in the project, which have already occurred, could jeopardize the Coast Guard’s future ability to effectively and efficiently carry out its missions, and its law enforcement activities—that is, drug and migrant interdiction and fisheries enforcement—would likely be affected the most, since they involve extensive use of deepwater cutters and aircraft. Under the project’s contracting approach, the responsibility for Deepwater’s success lies with a single systems integrator and its contractors for a period of 20 years or more. Under this approach, the Coast Guard has started on a course that would be expensive to alter. It is based on having a steady, predictable funding stream of $500 million in 1998 dollars over the next 2 to 3 decades. Already, the funding provided for the project is less than the amount the Coast Guard planned for. The fiscal year 2002 appropriation for the project was about $28 million below the planned level, and the fiscal year 2003 appropriated level was about $90 million below the planning estimate. Even the President’s fiscal year 2004 budget request for the Coast Guard is not consistent with the Coast Guard’s deepwater funding plan. If the requested amount of $500 million for fiscal year 2004 is appropriated, this would represent another shortfall of $83 million, making the cumulative shortfall about $202 million in the project’s first 3 years, according to Coast Guard data. If appropriations hold steady at $500 million (in nominal dollars) through fiscal year 2008, the Coast Guard estimates that the cumulative shortfall will reach $626 million. The shortfalls in the last 2 fiscal years (2002 and 2003) and their potential persistence could have serious consequences.
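The cumulative shortfall cited above follows directly from adding the yearly funding gaps. A minimal check, using the rounded figures in the text (the Coast Guard’s “about $202 million” reflects unrounded yearly amounts):

```python
# Approximate Deepwater funding shortfalls by fiscal year, in millions
# of dollars, as cited in the text. The FY2004 figure assumes the
# requested $500 million is appropriated.
shortfalls_by_fy = {2002: 28.0, 2003: 90.0, 2004: 83.0}

cumulative_shortfall = sum(shortfalls_by_fy.values())  # 201.0
# Close to the "about $202 million" the Coast Guard reports; the small
# difference comes from rounding of the individual yearly amounts.
```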
The main impact is that it would take longer and cost more in the long run to fully implement the deepwater system. For example, due to funding shortfalls experienced to date, the Coast Guard has delayed the introduction of the Maritime Patrol Aircraft by 19 months and slowed the conversion and upgrade program for the 110-foot Patrol Boats. According to the Coast Guard, if the agency continues to receive funding at levels less than planned, new asset introductions—and the associated retirement of costly, less capable Coast Guard legacy assets—will continue to be deferred. The cost of these delays will be exacerbated by the accompanying need to invest additional funds in maintaining current assets beyond their planned retirement date because of the delayed introduction of replacement capabilities and assets, according to the Coast Guard. For example, delaying the Maritime Patrol Aircraft will likely require some level of incremental investment to continue safe operation of the current HU-25 jet aircraft. Similarly, a significant delay in the scheduled replacement for the 270-foot Medium Endurance Cutter fleet could require an unplanned and expensive renovation for this fleet. System performance—and the Coast Guard’s capability to effectively carry out its mission responsibilities—would also likely be impacted if funding does not keep pace with planning estimates. For example, Coast Guard officials told us that conversions and upgrades for the 110-foot Patrol Boat would extend its operating hours from about 1,800 to 2,500 per year. Once accomplished, this would extend the time these boats could devote to both security and nonsecurity missions. Given the funding levels for the project, these conversions and upgrades have been slowed. 
Coast Guard officials also said that with significant, continuing funding shortfalls delaying new asset introductions, at some point the Coast Guard would be forced to retire some cutters and aircraft—even as demand for those assets continues to grow. For example, in 2002, two major cutters and several aircraft were decommissioned ahead of schedule due to their deteriorated condition and high maintenance costs.

Some New Homeland Security Duties Are Not Fully Factored into the Coast Guard’s Distribution of Resources

A second challenge is that the Coast Guard has been tasked with a myriad of new homeland security requirements since the fiscal year 2004 budget request was formulated and will have to meet many of these requirements by pulling resources from other activities. Under the Maritime Transportation Security Act (MTSA), signed into law in November 2002, the Coast Guard must accomplish a number of security-related tasks within a matter of months and sustain them over the long term. MTSA requires the Coast Guard to be the lead agency in conducting security assessments, developing plans, and enforcing specific security measures for ports, vessels, and facilities. In the near term, the Coast Guard must prepare detailed vulnerability assessments of vessels and facilities it identifies to be at high risk of terrorist attack. It must also prepare a National Maritime Transportation Security Plan that assigns duties among federal departments and agencies and specifies coordination with state and local officials—an activity that will require substantial work by Coast Guard officials at the port level. The Coast Guard must also establish plans for responding to security incidents, including notifying and coordinating with local, state, and federal authorities. Because the fiscal year 2004 budget request was prepared before MTSA was enacted, it does not specifically devote funding to most of these port security responsibilities.
Coast Guard officials said that they will have to absorb costs related to developing, reviewing, and approving plans, including the costs of training staff to monitor compliance, within their general budget. Coast Guard officials expect that the fiscal year 2005 budget request will contain funding to address all MTSA requirements; in the meantime, officials said that the Coast Guard would have to perform most of its new port security duties without additional appropriation, and that the funds for these duties would come from its current operations budget. The costs of these new responsibilities, as well as the extent to which they will affect resources for other missions, are not known.

External Uncertainties Place Additional Strain on Resources

Security alerts, as well as actions needed in the event of an actual terrorist attack, can also affect the extent to which the Coast Guard can devote resources to missions not directly related to homeland security. Coast Guard officials told us that in the days around September 11, 2002, when the Office of Homeland Security raised the national threat level from “elevated” to “high” risk, the Coast Guard reassigned cutters and patrol boats in response. In February 2003, when the Office of Homeland Security again raised the national threat level to “high risk,” the Coast Guard repositioned some of its assets involved in offshore law enforcement missions, using aircraft patrols in place of some cutters that were redeployed to respond to security-related needs elsewhere. While these responses testify to the tremendous flexibility of a multi-mission agency, they also highlight what we found in our analysis of activity-level trends—when the Coast Guard responds to immediate security needs, fewer resources are available for other missions.
The Coast Guard’s involvement in the military buildup for Operation Enduring Freedom in the Middle East further illustrates how such contingencies can affect the availability of resources for other missions. As part of the buildup, the Coast Guard has deployed eight 110-foot boats, two high-endurance cutters, four port security units, and one buoy tender to the Persian Gulf. These resources have come from seven different Coast Guard Districts. For example, officials from the First District told us they sent four 110-foot patrol boats and three crews to the Middle East. These boats are multi-mission assets used for fisheries and law enforcement, search and rescue, and homeland security operations. In their absence, officials reported, the First District is more flexibly using other boats previously devoted to other tasks. For instance, buoy tenders have taken on some search and rescue functions, and buoy tenders and harbor tug/icebreakers are escorting high-interest vessels. Officials told us that these assets do not have capabilities equivalent to the patrol boats but have been able to perform the assigned mission responsibilities to date.

Several Types of Actions Needed to Address Challenges

In previous work, we have examined some of the implications of the Coast Guard’s new operating environment on the agency’s ability to fulfill its various missions. This work, like our testimony today, has pointed to the difficulty the Coast Guard faces in devoting additional resources to nonsecurity missions, despite the additional funding and personnel the agency has received. In particular, we have suggested that the following actions need to be taken as a more candid acknowledgement of the difficulty involved: Opportunities for increased operational efficiency need to be explored. Over the past decade, we and other outside organizations, along with the Coast Guard itself, have studied Coast Guard operations to determine where greater efficiencies might be found.
These studies have produced a number of recommendations, such as shifting some responsibilities to other agencies. One particular area that has come to the forefront since September 11th is the Coast Guard’s potential ability to partner with other port stakeholders to help accomplish various security and nonsecurity activities involved in port operations. Some effective partnerships have been established, but the overall effort has been affected by variations in local stakeholder networks and limited information-sharing among ports. A comprehensive blueprint is needed for setting and assessing levels of effort and mission performance. One important effort that has received relatively little attention, in the understandable need to first put increased homeland security responsibilities in place, is the development of a plan that proactively addresses how the Coast Guard should manage its various missions in light of its new operating reality. The Coast Guard’s adjustment to its new post-September 11th environment is still largely in process, and sorting out how traditional missions will be fully carried out alongside new security responsibilities will likely take several years. But it is important to complete this plan and address in it key elements and issues so that it is both comprehensive and useful to decision makers who must make difficult policy and budget choices. Without such a blueprint, the Coast Guard also runs the risk of continuing to communicate that it will try to be “all things to all people” when, in fact, it has little chance of actually being able to do so. The Coast Guard has acknowledged the need to pursue such a planning effort, and the Congress has directed it to do so. 
Coast Guard officials told us that as part of the agency’s transition to the Department of Homeland Security, they are updating the agency’s strategic plan, including plans to distribute all resources in a way that can sustain a return to previous levels of effort for traditional missions. In addition, the Congress placed a requirement in MTSA for the Coast Guard to submit a report identifying mission targets, and steps to achieve them, for all Coast Guard missions for fiscal years 2003-2005. However, this mandate is not specific about the elements that the Coast Guard should address in the report. To be meaningful, this mandate should be addressed with thoroughness and rigor and in a manner consistent with our recent recommendations—it requires a comprehensive blueprint that embodies the key steps and critical practices of performance management. Specifically, in our November 2002 report on the progress made by the Coast Guard in restoring activity levels for its key missions, we recommended an approach consisting of a long-term strategy outlining how the Coast Guard sees its resources—cutters, boats, aircraft, and personnel—being distributed across its various missions; a time frame for achieving this desired balance; and reports with sufficient information to keep the Congress apprised not only of how resources were being used but also of what was being accomplished. The Coast Guard agreed that a comprehensive strategy was needed and believes it is beginning the process of developing one. Table 4 provides greater explanation of what this approach or blueprint would entail. The events of recent months heighten the need for such an approach. During this time, the budgetary outlook has continued to worsen, emphasizing the need to look carefully at the results being produced by the nation’s large investment in homeland security.
The Coast Guard must be fully accountable for investments in its homeland security missions and able to demonstrate what these security expenditures are buying and their value to the nation. At the same time, recent events also demonstrate the extent to which highly unpredictable homeland security events, such as heightened security alerts, continue to influence the amount of resources available for performing other missions. The Coast Guard needs a plan that will help the agency, the Congress, and the public understand and effectively deal with trade-offs and their potential impacts in such circumstances. Madam Chair, this concludes my testimony today. I would be pleased to respond to any questions that you or Members of the Subcommittee may have at this time.

Contacts and Acknowledgments

For information about this testimony, please contact JayEtta Z. Hecker, Director, Physical Infrastructure, at (202) 512-2834, or heckerj@gao.gov, or Margaret T. Wrightson, at (415) 904-2200, or wrightsonm@gao.gov. Individuals making key contributions to this testimony include Steven Calvo, Christopher M. Jones, Sharon Silas, Stan Stenersen, Eric Wenner, and Randall Williamson.
The September 11th attacks decidedly changed the Coast Guard's priorities and markedly increased its scope of activities. Homeland security, a long-standing but relatively small part of the Coast Guard's duties, took center stage. Still, the Coast Guard remains responsible for many other missions important to the nation's interests, such as helping stem the flow of drugs and illegal migration, protecting important fishing grounds, and responding to marine pollution. For the past several years, the Coast Guard has received substantial increases in its budget to accommodate its increased responsibilities. GAO was asked to review the Coast Guard's most recent levels of effort on its various missions and compare them to past levels, analyze the implications of the proposed 2004 budget for these levels of effort, and discuss the challenges the Coast Guard faces in balancing and maximizing the effectiveness of all its missions. The most recent levels of effort for the Coast Guard's various missions show clearly the dramatic shifts that have occurred among its missions since the September 11th attacks. Predictably, levels of effort related to homeland security remain at much higher levels than before September 11th. Levels of effort for two major nonsecurity missions--search and rescue and aids to navigation--are now relatively consistent with historical levels. By contrast, several other missions--most notably fisheries enforcement and drug interdiction--dropped sharply after September 11th and remain substantially below historical levels. Although the Coast Guard has stated that its aim is to increase efforts in the missions that have declined, continued homeland security and defense demands make it unlikely that the agency, in the short run, can deliver on this goal. The 2004 budget request contains little that would appear to substantially alter the existing levels of effort among missions.
The initiatives in the proposed budget relate mainly to enhancing homeland security and search and rescue missions. Although the 2004 budget request represents a sizeable increase in funding (9.6 percent), the Coast Guard still faces fundamental challenges in meeting its new security-related responsibilities while rebuilding its capacity to accomplish other missions that have declined. Given the likely constraints on the federal budget in future years, it is important for the Coast Guard to identify the likely level of effort for each of its missions; lay out a plan for achieving these levels; and tie these levels to measurable outputs and goals, so that the agency and the Congress can better decide how limited dollars should be spent.
Rationale for Creating DFAS

Before 1991, the military services maintained separate finance and accounting operations that were duplicative and inefficient. DFAS was created to standardize DOD finance and accounting policies, procedures, and systems. Military services and defense agencies generally use operations and maintenance appropriations to pay for DFAS services. Before fiscal year 1991, the military services and defense agencies each had their own financial management structure, consisting of a headquarters comptroller organization; finance and accounting centers; and accounting, finance, and disbursing offices at military bases. Each service and agency developed its own processes and systems that were geared to its particular mission. In many instances, the military services and defense agencies interpreted governmentwide and DOD-level finance and accounting policies differently. According to DOD, these variances sometimes resulted in managers being provided conflicting information. Over the years, as greater emphasis was placed on joint operations, financial management system incompatibility and lack of standardization (even within a military service) became more apparent. For example, there was only one pay schedule for military personnel, yet DOD maintained and operated dozens of different pay systems. These types of conditions produced business practices that were complex, slow, and error prone. According to DOD officials, no matter how skilled the people operating them, DOD’s financial management systems and processes were inherently handicapped in their efficiency and effectiveness. Furthermore, DOD officials stated that there was an inherent inefficiency in having multiple organizations perform virtually identical functions.
Given these problems; changes in the economic, political, and management environments; and advances in technology, DOD officials became convinced they needed to improve the economy and efficiency of their finance and accounting operations. After assessing how finance and accounting activities were performed, DOD determined that consolidating these activities offered a number of potential advantages, including:

- increasing DOD-wide oversight;
- improving consistency in the application of accounting principles, policies, procedures, systems, and standards throughout DOD;
- eliminating the costs of maintaining and operating multiple financial operations and systems;
- improving decision making by providing DOD managers with more timely, meaningful, and accurate financial information; and
- accelerating the implementation of standard DOD-wide financial systems.

The establishment of DFAS in January 1991 was the first step taken by DOD toward fundamentally reforming finance and accounting operations. DFAS was formed by consolidating, into a single agency under DOD’s Comptroller, the large finance and accounting centers that belonged to the military services and the Defense Logistics Agency. Recognizing that additional economies and efficiencies could be achieved, the Deputy Secretary of Defense, in December 1991, directed DFAS to assume control of existing finance and accounting operations and personnel at the command and installation levels within the military services. By 1994, DFAS had assumed responsibility for many of the finance and accounting activities at 332 offices (in the continental United States, Alaska, Hawaii, Guam, Puerto Rico, and Panama) and had announced plans to consolidate these activities at a limited number of DFAS locations. To focus DOD management’s attention on managing the cost of finance and accounting activities, DFAS was designated a Defense Business Operations Fund (DBOF) business area in fiscal year 1992.
The concept of DBOF is to promote total cost visibility by charging customers (primarily the military services and defense agencies) for the full cost of providing goods and services. By doing this, DOD hoped that all levels of management would focus their attention on the total costs of carrying out certain critical DOD business operations. DOD anticipated that this would encourage managers to become more conscious of operating costs and make fundamental improvements in how DOD conducts business. In fulfilling DBOF’s concept, DFAS sets the prices it charges the military services and defense agencies and bills them to cover the full cost of its operations. The military services and defense agencies pay for these services primarily with funds from their operations and maintenance appropriations. The 1997 Defense Authorization Act required DOD to conduct a comprehensive study of DBOF and present an improvement plan to the Congress for approval. Pending the results of this study, DOD’s Comptroller, on December 11, 1996, dissolved DBOF and created four working capital funds: (1) Army Working Capital Fund, (2) Navy Working Capital Fund, (3) Air Force Working Capital Fund, and (4) Defense-wide Working Capital Fund. DFAS is part of the Defense-wide Working Capital Fund. The four working capital funds will continue to operate under the revolving fund concept—using the same policies, procedures, and systems as they did under DBOF—and charge customers the full costs of providing goods and services to them.

Changes in DOD’s Finance and Accounting Infrastructure

Over the past few years, DOD’s finance and accounting organization and management structure has undergone major changes. For example, DFAS and the military services now share the finance and accounting responsibilities that previously belonged to the military services.
Most significantly, however, DFAS has developed a new concept of operations that involves performing most of its finance and accounting operations at consolidated sites rather than at local bases and installations. This has allowed it to reduce the number of locations and personnel needed to perform these operations and to begin standardizing its accounting systems and processes. This section describes the current organizational structure of DOD’s finance and accounting activities and the status of various changes with respect to finance and accounting locations, personnel, budgets, and systems.

DFAS and the Military Services Share Finance and Accounting Responsibilities

DFAS and the military services are jointly responsible for carrying out DOD finance and accounting activities. DFAS negotiated a division of responsibility with each military service. Finance and accounting operations are performed by two chains of command within DOD. On one side is DFAS, which reports to the Under Secretary of Defense Comptroller/Chief Financial Officer within the Office of the Secretary of Defense. On the other side are the military services, each headed by its own secretary. Each service secretary has an assistant secretary for financial management who directs and manages financial management activities consistent with policies prescribed by the Chief Financial Officer and the service’s implementing directives. As shown in figure 1, the Under Secretary has no direct line of authority to any of the financial management staff within the military services, defense agencies, and DOD field activities. Those staff report through their own organizational structure to their respective unit heads. The Under Secretary and the unit heads report to the Secretary of Defense.
The Under Secretary, however, does issue policies, instructions, regulations, and procedures relating to financial management matters and the production of financial statements, which are binding on all DOD activities. The National Defense Authorization Act for Fiscal Year 1994 designated the Comptroller as DOD’s Chief Financial Officer. Specific duties of the Comptroller/Chief Financial Officer, as specified in the Chief Financial Officers Act, include directing, managing, and providing policy guidance and oversight of agency financial management personnel, activities, and operations; developing and maintaining integrated accounting and financial management systems; monitoring the financial execution of the agency budgets in relation to actual expenditures and preparing and submitting timely performance reports; and overseeing the recruitment, selection, and training of personnel to carry out agency financial management functions. As mentioned, each service secretary has an assistant secretary for financial management who reports to the service secretary and directs and manages financial management activities consistent with policies prescribed by the Chief Financial Officer and the service’s implementing directives. The assistant secretary for financial management position in each service was established in the National Defense Authorization Act for Fiscal Year 1989. The act delineated many of the responsibilities of the office, including managing financial management activities and operations; directing the preparation of budget estimates; approving asset management systems, including systems for cash management, credit management, and debt collection; and accounting for property and inventory. Because of potentially overlapping responsibilities, DFAS met several times with the military services’ financial managers and their staffs during 1994 to reach agreement on their respective finance and accounting roles.
These meetings resulted in “responsibility matrices” that identify the specific activities that will be performed by DFAS and each military service. According to DFAS, the responsibility matrix agreements were driven, to a large extent, by the number of finance and accounting personnel each service had transferred to DFAS. Prior to the negotiations in 1994, for example, the Army had transferred about 75 percent of its finance and accounting people to DFAS. According to Army officials, it kept only a small contingent of managerial accountants at each installation and major command location to interpret accounting reports provided by DFAS to the installation or major command and provide advice to the commander on proper stewardship of public funds. As a result, DFAS and the Army agreed that DFAS would perform just about all of the Army’s financial activities. On the other hand, Air Force and Navy officials stated that they transferred smaller percentages of their staffs (50 and 29 percent, respectively). They took this approach to maintain control of activities they felt were essential to providing service to their military personnel and families, such as computing travel pay or helping uniformed personnel solve pay-related problems. Travel payment, a finance function, is an example where DFAS provides different levels of service to its military customers. In this case, authorization, computation, disbursement, and accounting are performed by either the military services or DFAS. Table 3 identifies the responsible party for each of these steps. DFAS Is Consolidating Its Activities DFAS assumed control over the military services' finance centers and some of the activities at 332 military installations. DFAS is currently consolidating all its activities into 5 centers and not more than 21 operating locations. The military services continue to perform their remaining activities at most of the 332 installations. 
When DFAS was established, it opened a headquarters office in Arlington, Virginia, and assumed management control over the six large finance centers that belonged to the military services and defense agencies. One of these centers was subsequently closed, but the others continue to support the military service or defense agency they supported prior to the formation of DFAS. According to the Director of DFAS, this was done primarily to ensure that support levels to the military services and defense agencies remained at an acceptable level. DFAS also assumed control over many of the people and functions at 332 small finance and accounting offices around the world. To improve operational efficiencies and reduce costs, DFAS has focused a great deal of attention on consolidating the personnel and workload at a small number of locations. In May 1994, for example, the Deputy Secretary of Defense announced plans to move the DFAS workload and many of the people at these 332 locations to either the existing 5 centers or 20 new operating locations. As of September 1996, DFAS had closed 230 (or about 70 percent) of the small accounting offices and opened 17 operating locations. Figure 2 shows the number of finance and accounting offices that DFAS plans to close through fiscal year 1998, when the consolidation is now expected to be completed. Three of the planned operating locations—Lexington, Kentucky; Newark, Ohio; and Rantoul, Illinois—have not been formally scheduled for opening at this time. The fourth planned operating location, at Memphis, Tennessee, will be under the cognizance of the U.S. Army Corps of Engineers until the Corps completes its consolidation of finance and accounting operations around fiscal year 1999. At that time, the Corps will transfer the activity to DFAS. Except for Honolulu, Hawaii; Norfolk, Virginia; Orlando, Florida; and San Antonio, Texas, each operating location provides services to a single military service.
Honolulu serves all of the military services; Norfolk serves Navy and Army customers; and both Orlando and San Antonio serve Army and Air Force customers. In addition, Charleston, South Carolina; Pensacola, Florida; and Omaha, Nebraska, provide civilian pay service to all military services and defense agencies. Figure 3 shows the locations of the 5 centers and 21 existing or planned operating locations as of September 30, 1996. The primary customer (military service or defense agency) of each center is shown in parentheses in the figure. As discussed in the previous section, each of the military services retained certain functions (e.g., managerial accounting, travel claim computation, and customer service) in order to support local commanders and customers. To do this, the services have maintained some staff at most of the 332 installation-level finance offices. Although there are interfaces and exchanges of information between the staff at these offices and DFAS, organizationally they are not part of the DOD Comptroller’s or DFAS’ communities. Rather, they report to and receive budgetary support from the base or installation commander. Civilian and military personnel at these activities are paid from operations and maintenance and military personnel appropriations, respectively. Number of People Performing Finance and Accounting Activities Is Not Tracked DOD estimated it had 46,000 people performing finance and accounting activities in 1994 and estimates it has 40,800 performing them today. About 28,000 people were transferred into DFAS, leaving the military services with 18,000 people. DFAS currently has 23,500 employees. The military services do not track the number of finance and accounting personnel they employ, but estimate there are about 17,300.
In May 1994, when the Deputy Secretary of Defense announced plans to consolidate finance and accounting operations, he said that the number of people performing these activities should drop from about 46,000 to 23,000 by 1999. As of September 1996, DOD estimates show that there were about 40,800 people performing finance and accounting activities—about 5,200 fewer than estimated in 1994. However, there is some uncertainty about these numbers, primarily because the military services do not centrally budget for or manage finance and accounting operations. As a DBOF entity that is now part of the new Defense-wide Working Capital Fund, DFAS tracks the number of personnel it employs so that it can accurately charge its customers for the full cost of operations. Therefore, it generally knows how many people it inherited from the military services and its current on-board strength. DFAS officials told us, for example, that by 1994 DFAS had assumed control of 28,000 personnel—about 10,000 at the 5 large finance centers and about 18,000 at the 332 small, installation-level finance and accounting offices. As of September 1996, this workforce had been reduced to 23,500, and DFAS plans to eliminate another 3,500 positions by the year 2000. According to DOD, most of these reductions are (or will be) made possible by economies of scale achieved by closing the 332 small finance and accounting offices and consolidating activities at the 5 centers and 21 operating locations. Finance and accounting personnel and activities in the military services, however, are budgeted for and controlled at the installation level. Consequently, service representatives said there were no specific plans to centrally assess or reduce the size of their finance and accounting network. For this reason, they were also uncertain of the number of people that remained after DFAS assumed control of resources in 1994 or that are currently on board.
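The personnel figures reported by DOD can be reconciled with a few lines of arithmetic. The sketch below uses only the numbers cited in this report; the variable names are illustrative and do not come from any DOD system:

```python
# Reconciliation of DOD's reported finance and accounting personnel
# figures. All figures are from the report; names are illustrative.

ESTIMATE_1994 = 46_000        # total estimated from the 1992 data call
TRANSFERRED_TO_DFAS = 28_000  # 10,000 at the 5 centers + 18,000 at the 332 offices
dfas_1996 = 23_500            # DFAS on-board strength, September 1996
services_1996 = 17_300        # military services' estimate, September 1996

left_with_services_1994 = ESTIMATE_1994 - TRANSFERRED_TO_DFAS
total_1996 = dfas_1996 + services_1996

print(left_with_services_1994)                  # 18000 remained with the services
print(total_1996)                               # 40800 performing the work in 1996
print(ESTIMATE_1994 - total_1996)               # 5200 fewer than the 1994 estimate
print(left_with_services_1994 - services_1996)  # 700 fewer in the services' network
```

Each printed value matches a figure stated in the report, which suggests the estimates are internally consistent even though the services do not formally track these personnel.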
According to DOD, however, there should have been about 18,000 finance and accounting personnel left with the military services in 1994. In 1992, DFAS and the military services issued a data call to all installation-level finance offices, and in 1994, estimated that the total number of people in DOD’s network was about 46,000. On the basis of this estimate, DFAS assumed control of 28,000 people, leaving about 18,000 people in the military services. To determine the number of people in the current military service network, the services (at our request) either issued another data call to their installations or prepared an estimate based on other available information. They reported to us that, as of September 30, 1996, approximately 17,300 people were performing finance and accounting activities in the military services. On the basis of a comparison of the original data call and the current estimate, about 700 fewer people are performing finance and accounting activities now than DOD officials believe were doing so when DFAS completed its transfer process in 1994. Figure 4 shows the number of finance and accounting personnel reported to us by DFAS and the military services as of September 30, 1996. This includes 589 personnel in the Marine Corps. Budget to Perform Finance and Accounting Activities Exceeds $2 Billion The total budget for DOD finance and accounting activities is unknown but exceeds $2 billion. DFAS' 1996 budget was $1.64 billion. The military services estimate their personnel costs for fiscal year 1996 at $598 million. The vast majority of the funds come from operations and maintenance appropriations. Information that was provided by DFAS and the military services indicates that DOD budgeted at least $2 billion in fiscal year 1996 to support finance and accounting activities. This estimate includes all DFAS costs plus estimated personnel costs in the military services. 
Because military service finance and accounting activities are budgeted at local installations and bases in various appropriation accounts, the military services were unable to estimate other finance and accounting-related costs such as training, equipment, supplies, and overhead. As part of the new Defense-wide Working Capital Fund, DFAS does not receive an appropriation. Instead, it bills customers, primarily the military services, for the cost of operations. These bills include charges for direct labor costs related to the performance of finance and accounting functions; indirect costs, such as systems support and depreciation expenses; and overhead costs, such as management support and electricity bills. The bills may also include additional charges or reductions to make up for prior year losses or gains. The military services use their operations and maintenance appropriations to pay the bills. Figure 5 shows DFAS’ financial operations budget from fiscal years 1991 through 1996 and the projected budget for fiscal years 1997 through 2000—the numbers are in constant 1996 dollars. As shown in figure 5, DFAS’ budget for finance and accounting increased from $339 million (in 1996 dollars) in fiscal year 1991 to about $1.64 billion in fiscal year 1996, primarily as a result of an increase in its scope of operations. In fiscal year 1991, for example, DFAS was in operation for only 9 months and was only supporting the finance centers. In fiscal year 1992, DFAS became a DBOF entity and began to identify and charge the military services for the full cost of its operations. For example, system support (e.g., computer hardware and software) costs that had been part of the Defense Information Systems Agency budget in the past were included in the DFAS budget. In fiscal year 1993, DFAS began to assume control of the 332 installation-level finance and accounting offices, and in 1994, DFAS began renovating buildings at the new operating locations. 
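The full-cost billing arrangement described above can be illustrated with a short sketch. The cost elements (direct labor, indirect costs, overhead, and a prior-year adjustment) come from the report; the function, its name, and the dollar amounts are hypothetical:

```python
# A minimal sketch of how a working-capital-fund bill might be composed
# from the cost elements the report describes. The structure is from the
# report; the dollar figures below are invented for illustration.

def customer_bill(direct_labor, indirect, overhead, prior_year_adjustment=0.0):
    """Full-cost bill to a military service customer.

    A positive adjustment recovers a prior-year loss; a negative one
    returns a prior-year gain.
    """
    return direct_labor + indirect + overhead + prior_year_adjustment

# Example: a hypothetical quarterly bill (in dollars)
bill = customer_bill(
    direct_labor=1_200_000,         # finance and accounting labor
    indirect=350_000,               # systems support, depreciation
    overhead=150_000,               # management support, utilities
    prior_year_adjustment=-25_000,  # returning part of a prior-year gain
)
print(bill)  # 1675000
```

The military services would then pay such a bill from their operations and maintenance appropriations, as the report notes.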
Between fiscal years 1996 and 2000, DFAS estimates its budget will decrease by about 10 percent—from $1.64 billion in fiscal year 1996 to $1.47 billion in 2000 in constant 1996 dollars. According to DFAS officials, the decrease reflects a leveling off of depreciation expenses associated with capital expenditures (such as new computer systems), a drop in workload as DOD continues to downsize its military force structure, and the completion of personnel and workload consolidations from the small finance and accounting offices to DFAS centers and operating locations. The military services’ finance and accounting activities are funded through annual operation and maintenance appropriations. Because these appropriations are allocated to many different budget categories at the installation level, military service officials were not able to estimate the total amount budgeted to support their finance and accounting activities. On the basis of the estimated number of personnel that are currently performing finance and accounting activities, the services estimated that for fiscal year 1996 they budgeted about $598 million in personnel costs. Figure 6 shows the personnel costs each of the military services estimated it incurred during fiscal year 1996. DFAS Is Reducing the Number of Finance and Accounting Systems DFAS is responsible for reducing the number of finance and accounting systems used throughout DOD. Since 1991, the number of DOD's reported finance and accounting systems has been reduced from 324 to 217. The military services continue to operate hundreds of feeder systems for which DFAS has no responsibility. As part of its mission, DFAS is responsible for standardizing the finance and accounting systems used throughout DOD. When it was established, for example, DFAS reported that it inherited 127 finance and 197 accounting systems that were in use throughout DOD. 
In general, DOD defines finance systems as those used to process payments to DOD personnel, retirees, annuitants, and contractors, and accounting systems as those relied on to track appropriations and record operating and capital expenses. In accordance with DOD Financial Management Regulations (DOD 7000.14-R, Volume 1), however, DFAS does not recognize several hundred “feeder systems”—systems used to initially record financial data, such as logistics, inventory, and personnel systems—as finance and accounting systems or include them in its inventory. Yet these feeder systems, which are under the control and operations of the military services and defense agencies, are the source of much of the information that is needed to adequately account for DOD’s assets and operations. DFAS embarked on what it calls a migration system strategy to reduce the number of DFAS finance and accounting systems. Under this strategy, which is depicted in figure 7, DFAS plans to gradually reduce the number of systems used in each functional area (e.g., civilian payroll, military payroll, and accounting) until it eventually arrives at a single system that would be used DOD-wide for each finance and accounting area. While the completion of this strategy varies by system and functional area, DFAS estimates that about 49 percent of its current systems (107 of 217) will be eliminated by 2000. This migration strategy typically involves (1) selecting one of the legacy systems from each service, (2) implementing the system servicewide, (3) selecting the best interim migratory system to be DOD’s standard migratory system, and (4) enhancing the migratory system until it meets all DOD requirements. As shown in table 4, DFAS has reduced the reported number of finance systems from 127 to 67 (a 47-percent reduction) and accounting systems from 197 to 150 (a 24-percent reduction).
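The reported system reductions can be verified with simple arithmetic. This sketch assumes the report's whole-percent rounding convention; the helper function is illustrative:

```python
# Check of the system-reduction figures cited in the report.
# Rounding follows the report's whole-percent convention.

def pct_reduction(start, end):
    """Percent reduction from start to end, rounded to a whole percent."""
    return round(100 * (start - end) / start)

finance_start, finance_1996, finance_2000 = 127, 67, 43
acct_start, acct_1996, acct_2000 = 197, 150, 67

print(pct_reduction(finance_start, finance_1996))  # 47 percent (127 to 67)
print(pct_reduction(acct_start, acct_1996))        # 24 percent (197 to 150)

current = finance_1996 + acct_1996                 # 217 systems as of 1996
planned = finance_2000 + acct_2000                 # 110 systems planned by 2000
print(current - planned)                           # 107 systems to be eliminated
print(pct_reduction(current, planned))             # 49 percent of current systems
```

All four results match the percentages and counts stated in the report and its table 4.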
By the year 2000, DFAS estimates that the number of systems will be further reduced to 110—43 finance and 67 accounting systems. Table 4 also shows the number of finance and accounting locations where these systems were used as of September 30, 1996. On the basis of the information presented in table 4, DFAS has been successful in reducing the number of systems in several areas, particularly those where the military services had already consolidated activities at a small number of locations. When DFAS was formed, for example, each of the military services was already operating standard retiree and annuitant pay systems at its respective finance centers. After evaluating the relative capabilities of these systems, DFAS selected the Navy’s retiree pay system and the Air Force’s annuitant pay system as DOD-wide migratory systems. DFAS subsequently integrated these two systems into one system and pays all retirees from the Cleveland center and all annuitants from the Denver center. DOD Finance and Accounting Activities DFAS and the military services account for monies from four primary sources. Finance and accounting operations are divided into nine functional areas. DOD’s $240-billion appropriation for fiscal year 1996 was used to pay about 6 million people and about 17 million invoices charged to nearly 12 million contracts. The appropriation also supported the operation of 13 DBOF (now working capital fund) business areas such as depot maintenance, commissaries, distribution depots, and DFAS. In addition, in fiscal year 1996, DOD received about $10 billion through its foreign military sales programs and about $12 billion through the operation of base activities such as child care facilities, golf courses, and the Armed Forces Exchanges. To process financial transactions and account for the receipt and expenditure of funds, DFAS and military services’ finance and accounting operations are generally divided into nine functional activities. 
Table 5 lists these activities, the reported number of DFAS personnel involved in the activity, and the reported total cost for DFAS to process the transactions in fiscal year 1996. The military services were unable to provide us with comparable information. A more detailed description of the sources and uses of DOD funds and the finance and accounting responsibilities of DFAS and the military services is presented in appendix I. Agency Comments We requested comments on a draft of this report from the Secretary of Defense. On January 15, 1997, officials from the Office of the Under Secretary of Defense Comptroller/Chief Financial Officer and representatives of DFAS, the Air Force, the Army, and the Navy met with us to discuss the report. In general, DOD officials agreed with our description of DOD’s finance and accounting structure and organization. They provided us with some suggested changes, which we have incorporated in our final report where appropriate. We performed our review from July 1996 through January 1997 in accordance with generally accepted government auditing standards. Appendix II contains a description of our scope and methodology. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate and House Committees on Appropriations; Senate Committee on Armed Services; House Committee on National Security; Senate Committee on Governmental Affairs; House Committee on Government Reform and Oversight; the Director, Office of Management and Budget; the Secretary of Defense; and other interested parties. We will make copies available to others on request. If you or your staff have any questions concerning this report, please contact either James E. Hatcher on (513) 258-7959 or Geoffrey B. Frank on (202) 512-9518. Major contributors to this report are listed in appendix III. 
Finance and Accounting in the Department of Defense This appendix provides an overview of the Department of Defense’s (DOD) finance and accounting operations. Accounting in the Department of Defense DOD has focused its accounting operations primarily on monitoring and controlling the obligation and expenditure of budgetary resources. As discussed in the following sections, DOD carries out these accounting operations for four types of funds—general, working capital, nonappropriated, and security assistance. With the enactment of the Chief Financial Officers (CFO) Act of 1990, the Congress called for audited agency financial statements that would more fully disclose a federal entity’s financial position and results of operations beginning with fiscal year 1996. Such statements are intended to provide for (1) better information for more informed decisions on allocation of budgetary resources and (2) an annual assessment of an agency’s financial performance, including the effectiveness of its execution of its stewardship responsibilities. DOD officials have forthrightly acknowledged that serious financial management problems severely hamper their ability to effectively carry out the full range of accounting and financial reporting responsibilities called for in the CFO Act. DOD has struggled to put in place the financial management operations and controls required to produce the information it needs to ensure adequate accountability and to support decision making. For example, few of DOD’s accounting systems are now integrated with its finance systems or with other systems or databases relied on to carry out its accounting and financial reporting responsibilities. Consequently, DOD prepares required financial reports to account for an estimated 80 percent of its physical assets based on management systems that were not intended for such accounting and financial reporting.
The absence of a fully integrated general ledger-controlled system necessitates DOD’s reliance on labor-intensive, error-prone processes to ascertain whether all required items are accounted for and reported. Largely as a result of the CFO Act and other recent legislative initiatives directed at increasing financial management discipline throughout the federal government, DOD has recently begun efforts to broaden the focus of and to bring greater discipline to its accounting operations. DOD’s Chief Financial Officer stated that the CFO Act “has contributed to the recognition and understanding of the scope and depth of the financial management problems that DOD faces and has defined a standard by which the Department can measure its progress.” DOD has characterized its blueprint for financial management reform as the most comprehensive reform of financial management systems and practices in its history. In its efforts to improve its accounting activities, DOD is guided by a set of comprehensive standards that were developed by the Federal Accounting Standards Advisory Board. This Board, which was established in October 1990 by the Comptroller General of the United States, the Director of the Office of Management and Budget, and the Secretary of the Treasury, recommends accounting standards after considering the financial and budgetary information needs of the Congress, executive agencies, and other users and comments from the public. The Office of Management and Budget, Treasury, and GAO then decide whether to adopt the recommended standards; if they do, the standards are published by the Office of Management and Budget and GAO and become effective. Recently, a set of comprehensive accounting standards was approved by the three agencies. The new accounting standards and accompanying reporting concepts are central to effectively meeting the financial management improvement goals of the CFO Act of 1990, as amended.
Also, improved financial information is necessary to support the strategic planning and performance measurement requirements of the Government Performance and Results Act of 1993. DOD Accounting Focuses on Four Types of Funds DOD accounting personnel are responsible for accounting for funds received through congressional appropriations, the sale of goods and services by working capital fund businesses, revenue generated through nonappropriated fund activities, and the sales of military systems and equipment to foreign governments or international organizations. Figure I.1 shows the types of funds and the sources and uses of the funds. General Funds General funds, the largest category of funds the Defense Finance and Accounting Service (DFAS) must account for, involve monies provided to DOD through congressional appropriations for military personnel; operation and maintenance; military construction; procurement; and research, development, test and evaluation. The Congress appropriated over $240 billion to DOD for fiscal year 1996. Because some of these appropriations involve multiyear funds, DFAS accounted for $338.5 billion in obligated and unobligated balances in general funds monies during fiscal year 1996. Working Capital Funds As of September 30, 1996, DFAS was required to account for $74.6 billion in obligated and unobligated balances generated by 13 working capital fund (formerly DBOF) business areas. These business areas include such activities as depot maintenance, commissaries, distribution depots, and DFAS. In general, these business activities are intended to operate by selling goods and services to the military services and defense agencies at the cost incurred in providing the good or service. Many of the services provided through these business areas, such as the overhaul of ships, tanks, and aircraft, are essential to maintaining the military readiness of our country’s weapon systems.
Working capital fund customers pay for the goods and services, primarily, with operations and maintenance funds appropriated by the Congress. Nonappropriated Funds DOD’s nonappropriated funds result primarily from the sale of goods and services to DOD military personnel, their dependents, and other qualified persons. Nonappropriated fund activities are divided into two major types—morale, welfare, and recreation activities and the Armed Forces Exchanges. In fiscal year 1995, DOD reported morale, welfare, and recreation activities and Armed Forces Exchanges revenues of $2.5 billion and $9.4 billion, respectively (according to a DOD official, 1996 revenues are expected to be about the same). DFAS, however, has accounting responsibility for only a limited portion of the nonappropriated activities. In fiscal year 1996, DFAS accounted for about $500 million in nonappropriated funds. Morale, welfare, and recreation activities are essentially small businesses such as libraries, gyms, golf courses, child care centers, and officers’ clubs that operate at numerous military installations worldwide. Armed Forces Exchanges are located on military installations worldwide and operate similarly to commercial retail outlets. The exchanges offer a variety of goods and services from military uniforms to fast food. DFAS has accounting responsibility only for a portion of the Army morale, welfare, and recreation workload. The Air Force, the Navy, and the Marine Corps account for these activities through their own nonappropriated fund organizations that are not part of the military service finance and accounting offices. The Armed Forces Exchanges are not included in DFAS’ or the military services’ finance and accounting office workload. Security Assistance Funds DOD also has responsibility for security assistance funds used for congressionally approved sales of military weapon systems and equipment to foreign governments. 
In some cases, funds accounted for in the security assistance program are received from foreign governments. In addition, the Congress appropriates funds that countries can use as loans or grants to make these purchases. In fiscal year 1996, DOD reported that the security assistance program generated almost $10 billion in new sales. Because many foreign military sales involve procurements over a number of years, in total, DFAS accounted for about $28 billion in obligated and unobligated balances in security assistance funds in fiscal year 1996. Finance Activities in DOD DOD’s finance activities generally involve paying the salaries of its employees, paying retirees and annuitants, reimbursing its employees for travel-related expenses, paying contractors and vendors for goods and services, and collecting debts owed to DOD. This section describes DFAS’ and the military services’ involvement in each of these activities. Civilian and Military Payroll This activity includes 28 locations and 21 foreign national civilian pay systems. Currently, DFAS pays the salaries of 826,000 civilians and about 3 million military personnel. In order for DFAS to pay DOD personnel, it receives information from three sources—military and civilian personnel offices, customer service representatives, and field finance offices or timekeepers within the employee’s unit. Figure I.2 shows an overview of the process by which DFAS obtains information to disburse and account for salary payments made to all DOD employees. The civilian and military pay processes begin with the military service’s personnel office establishing a record in its personnel system for a new hire or recruit by entering personal data such as name, address, and salary. Since the majority of the military services’ personnel systems are not integrated with the payroll systems DFAS uses, entitlement data are sent to DFAS payroll systems through an electronic interface.
This interface allows DFAS to establish a pay account for the civilian or military employee. Throughout a person’s employment with DOD, timekeepers, who are usually administrative support personnel or supervisors in a military unit or office, or field finance office staff, submit time and attendance information directly to DFAS. This information is used by DFAS to compute the amount each employee should be paid. After payments are made, the payroll system transmits disbursement information to DFAS accounting units where accounting records are updated and management and budgetary reports are distributed to DOD and external agencies. DFAS also receives information that affects civilian and military pay from customer service representatives. DFAS and the military services’ finance personnel share the responsibility of providing customer service to civilian employees and military members. Customer service duties include input of employee-initiated transactions such as bonds, tax withholdings, and address changes; resolving pay-related problems; and responding to inquiries on all aspects of the payment process, such as pay computation and the recording and balancing of annual and sick leave. Retiree/Annuitant Payroll DFAS assumed retiree and annuitant pay responsibilities from the military services upon its establishment in 1991. In fiscal year 1996, DFAS processed payments to about 2 million retirees and annuitants. Figure I.3 provides an overview of the retiree and annuitant payroll process, identifying duties specific to DFAS and the military services. The military services’ personnel offices process the paperwork required for establishing a retiree pay account.
This information is sent electronically to the DFAS Cleveland center where personnel in retired pay operations verify that the retiree’s account has been deleted from the military pay systems (to avoid dual payments to the retiree); compute the retiree’s pay; disburse payment to the retiree; and forward pay information to a DFAS accounting unit that updates accounting records and distributes management and budgetary reports. Upon receipt of a death notice, retired pay operations personnel in Cleveland will suspend or terminate the retirement pay account and electronically transfer the case to the Denver center. Denver personnel in the annuity pay office maintain the annuitant’s pay account, issue survivor annuity payments, provide customer service support, and update accounting records. These personnel also annually verify the annuitant’s eligibility status. Factors that affect entitlement eligibility include, but are not limited to, changes in Social Security benefits, remarriage, and age of children. Travel Payments The travel payment process for both DOD civilian and military employees can be broken down into three stages—travel authorization, actual travel, and travel settlement. Military service finance personnel are involved in the travel authorization process and, in some cases, the travel settlement process. DFAS performs the majority of the responsibilities in the travel settlement step in which the traveler is reimbursed. Annually, DFAS processes about 2.1 million travel settlements. Figure I.4 provides an overview of the travel payment process, distinguishing between activities performed by DFAS and the military services. The travel pay process begins when a DOD employee or supervisor identifies a need for travel. The employee prepares and submits a travel request and cost estimate to the appropriate superior for approval.
The administrative support staff within the organization reviews the approved request, obligates funds, and issues a travel order. The administrative support staff includes personnel who have authority to input obligations into the record and may, for example, be personnel in the finance, resource management, or budget offices. At this time, the employee makes travel arrangements and may receive a travel advance through the use of an official government travel card or, when no other means is available, from the appropriate disbursement office. Upon completion of travel, the employee submits a travel voucher to his/her supervisor for reimbursement of expenses, attaching supporting documentation such as receipts. Once the supervisor approves the claim, it is sent to either a DFAS travel pay office or the military service’s finance office where the traveler’s entitlement is computed and an audit is conducted. After entitlement is computed, DFAS or the appropriate military disbursement office makes payment, and DFAS updates the accounting records to reflect the disbursement. Contractor, Vendor, and Transportation Payments DOD finance and accounting personnel are also responsible for making payments to contractors for goods and services such as the production of weapon systems, the purchase of computer equipment, and the shipment of freight and personal property. DFAS has the primary responsibility for processing the transactions, paying the contractor or vendor, and accounting for the disbursement of funds. Military service finance personnel are involved to the extent that they verify that funds are available for use and they enter information into accounting systems to show that funds have been committed or obligated for various goods and services. In fiscal year 1996, DFAS employees made payments on approximately 17 million invoices submitted by contractors and vendors. 
As shown in figure I.5, while variations exist, the process of acquiring goods and services starts outside of the finance and accounting community, usually with a program manager issuing a request for the procurement of an item or the shipment of freight. Once a requirement for a good or service has been identified, personnel from a military service finance office are contacted to ensure that funds are available for use. If funds are available, the finance personnel set up a commitment on their accounting system. If the supply office has the needed item, it is issued to the requestor. If it is not available through a supply office, the contracting office awards a contract for high-dollar value items or the military service finance office establishes a purchase order for lower value items. For the movement of freight and personal property, DOD either provides the service using its own resources or generates a government bill of lading for the service. Once a supply item is ordered or service has been contracted for, the vendor delivers or performs the service and sends an invoice to the appropriate DFAS office for payment. A receiving report is sent by the requestor to the same office to show that the delivery was received. Personnel at each DFAS location are responsible for matching contract, invoice, and receiving report information prior to making a payment to a contractor/vendor. After a payment is made, accounting personnel at the operating locations are responsible for activities such as matching payment information against obligations and providing status of funds information to the military services. Debt Management Federal law requires that all government agencies pursue collection action against individuals or contractors that owe the government money. Within DOD, these debts can result from a wide variety of transactions such as defaulted loans (education or small business) or various overpayments of pay and benefits. 
If an individual is employed by DOD or receiving any compensation payment, the military service finance offices attempt to collect the money or process an offset against the individual’s pay account. If the individual is no longer employed by DOD or is not receiving any compensation payment, it is considered an out-of-service debt and DFAS personnel are responsible for collecting the debt. DFAS is also responsible for collecting all debts owed by contractors. As of September 30, 1996, about 319,000 military and civilian debtors owed DOD $464 million and approximately 2,500 contractors owed DOD about $3.5 billion. DFAS personnel closed about 116,000 cases as of the end of fiscal year 1996 during which time they collected approximately $238 million. The military services perform debt management activities at each of their installations. However, we were unable to obtain information related to the number of cases that were processed during fiscal year 1996. Figure I.6 provides an overview of the process used by DOD to collect debts. Upon the initial identification of a debt, many military installation-level organizations, such as a hospital, attempt to collect the debt. If the debt is determined to be uncollectible and is owed by a contractor or someone no longer working for DOD, it is sent to a DFAS center for collection. DFAS is required to send three letters—30 days apart—to debtors in an attempt to collect the money. Then, if the money has not been collected, it can be turned over to a private agency for collection or to the Internal Revenue Service for a potential tax refund offset. The debt may also be sent to the Department of Justice for legal action if research shows the debtor has the ability to pay. If DFAS determines that an individual debtor is employed by another federal agency, it can obtain payment for the outstanding debt through payroll deductions. 
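The demand-letter schedule described above (three letters sent 30 days apart before a debt may be referred for outside collection) can be modeled with a short sketch. This is purely illustrative: the function names and the referral test are invented for the example; only the three-letter, 30-day rule comes from the report.

```python
from datetime import date, timedelta

# Illustrative constants taken from the collection process described above.
LETTER_INTERVAL = timedelta(days=30)
REQUIRED_LETTERS = 3

def demand_letter_schedule(first_letter: date) -> list[date]:
    """Return the dates of the three required demand letters, 30 days apart."""
    return [first_letter + i * LETTER_INTERVAL for i in range(REQUIRED_LETTERS)]

def eligible_for_referral(first_letter: date, today: date, collected: bool) -> bool:
    """A debt may be referred for outside collection (private agency, IRS
    offset, or Justice) only after all three letters have gone out and the
    debt remains uncollected."""
    last_letter = demand_letter_schedule(first_letter)[-1]
    return not collected and today > last_letter
```

For example, a first letter dated January 1 implies follow-up letters on January 31 and around March 1, and referral becomes possible only after the third letter if the debt is still outstanding.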
At any time during the process, the debt can be collected in full, compromised to a lesser amount with the remainder written off, or written off in total if the debt falls below established dollar thresholds. DFAS updates its accounting records to reflect any of these events and reports the information back to the military services. If any debt is collected, it is refunded to the military service that incurred the debt or deposited into the Treasury Miscellaneous Receipts Account. Objective, Scope, and Methodology The Subcommittee on Defense, Senate Committee on Appropriations, asked us to provide an overview of DOD finance and accounting activities. We focused our work on describing how DOD is organized to perform finance and accounting, the size of the finance and accounting infrastructure, and the various activities that are performed by DFAS and the military services. To determine how DOD is organized to perform finance and accounting activities, we reviewed documents that discussed the rationale for centralizing accounting activities within DFAS and DFAS and military service finance and accounting organizational charts. We also discussed the organizational structure with officials at DFAS Headquarters and the military services’ Office of the Assistant Secretary for Financial Management. To determine the current size of DOD’s finance and accounting infrastructure, we obtained and reviewed budget, personnel, workload, and cost figures provided by DFAS. The military services did not have comparable information readily available. Therefore, officials from the Army’s and the Marine Corps’ financial management offices sent out a data call to their respective installations to obtain information on the number of personnel currently performing finance and accounting activities. The Air Force updated personnel figures obtained from DOD’s central personnel database. 
The Navy updated its personnel figures using a variety of Navy reports and DOD’s central personnel database. From these numbers, each of the services estimated the amount of money it spends on personnel costs to perform finance and accounting activities. Given our overall assignment objectives and the descriptive nature of our report, we did not verify the data provided to us by either DFAS or the military services. For purposes of this report, we did not obtain information from defense agencies related to how many personnel are currently performing finance and accounting activities. This decision was based on the lack of a single focal point within DOD that could provide us with the needed information from approximately 24 defense agencies and the small number of personnel involved with defense agency finance and accounting activities prior to the establishment of DFAS in 1991. To determine the type of activities DOD finance and accounting personnel are responsible for performing, we reviewed DOD’s Chief Financial Officer Financial Management 5-Year Plan, the DFAS Customer Service Plan, the responsibility matrices negotiated by DFAS with each of the military services, and work flow descriptions for each finance and accounting activity. To supplement information included in formal reports, we interviewed headquarters and field officials at the following locations: DFAS headquarters in Arlington, Virginia; DFAS centers in Cleveland, Ohio; Columbus, Ohio; Denver, Colorado; and Indianapolis, Indiana; the Army’s and the Navy’s Office of the Assistant Secretary for Financial Management in Arlington, Virginia; the Air Force’s Secretary of the Air Force (Financial Management and Plans) in Arlington, Virginia; and the Marine Corps’ Office of the Deputy Chief of Staff for Program and Resources in Arlington, Virginia. Major Contributors to This Report National Security and International Affairs Division, Washington, D.C. 
Accounting and Information Management Division, Washington, D.C. Related GAO Products Contractor Pay Financial Management: DOD Needs to Lower the Disbursement Prevalidation Threshold (GAO/AIMD-96-82, June 11, 1996). DOD Procurement: Millions in Contract Payment Errors Not Detected and Resolved Promptly (GAO/NSIAD-96-8, Oct. 6, 1995). Financial Management: Status of Defense Efforts to Correct Disbursement Problems (GAO/AIMD-95-7, Oct. 5, 1994). DOD Procurement: Overpayments and Underpayments at Selected Contractors Show Major Problem (GAO/NSIAD-94-245, Aug. 5, 1994). DOD Procurement: Millions in Overpayments Returned by DOD Contractors (GAO/NSIAD-94-106, Mar. 14, 1994). Financial Management: Navy Records Contain Billions of Dollars in Unmatched Disbursements (GAO/AFMD-93-21, June 9, 1993). Financial Management: Air Force Systems Command Is Unaware of Status of Negative Unliquidated Obligations (GAO/AFMD-91-42, Aug. 29, 1991). Defense Business Operations Fund Defense Business Operations Fund: DOD Is Experiencing Difficulty in Managing the Fund’s Cash (GAO/AIMD-96-54, Apr. 10, 1996). Defense Business Operations Fund: Management Issues Challenge Fund Implementation (GAO/AIMD-95-79, Mar. 1, 1995). Defense Business Operations Fund: Improved Pricing Practices and Financial Reports Are Needed to Set Accurate Prices (GAO/AIMD-94-132, June 22, 1994). Financial Management: DOD’s Efforts to Improve Operations of the Defense Business Operations Fund (GAO/T-AIMD/NSIAD-94-146, Mar. 24, 1994). Financial Management: Status of the Defense Business Operations Fund (GAO/AIMD-94-80, Mar. 9, 1994). Financial Management: Opportunities to Strengthen Management of the Defense Business Operations Fund (GAO/T-AFMD-93-6, June 16, 1993). Financial Management: Defense Business Operations Fund Implementation Status (GAO/T-AFMD-92-8, Apr. 30, 1992). Defense’s Planned Implementation of the $77 Billion Defense Business Operations Fund (GAO/T-AFMD-91-5, Apr. 30, 1991). 
Financial Management Financial Management: DOD Inventory of Financial Management Systems Is Incomplete (GAO/AIMD-97-29, Jan. 31, 1997). DOD Accounting Systems: Efforts to Improve System for Navy Need Overall Structure (GAO/AIMD-96-99, Sept. 30, 1996). Navy Financial Management: Improved Management of Operating Materials and Supplies Could Yield Significant Savings (GAO/AIMD-96-94, Aug. 16, 1996). CFO Act Financial Audits: Navy Plant Property Accounting and Reporting Is Unreliable (GAO/AIMD-96-65, July 8, 1996). CFO Act Financial Audits: Increased Attention Must Be Given to Preparing Navy’s Financial Reports (GAO/AIMD-96-7, Mar. 27, 1996). Financial Management: Challenges Facing DOD in Meeting the Goals of the Chief Financial Officers Act (GAO/T-AIMD-96-1, Nov. 14, 1995). Financial Management: Challenges Confronting DOD’s Reform Initiatives (GAO/T-AIMD-95-146, May 23, 1995). Financial Management: Challenges Confronting DOD’s Reform Initiatives (GAO/T-AIMD-95-143, May 16, 1995). Financial Management: Control Weaknesses Increase Risk of Improper Navy Civilian Payroll Payments (GAO/AIMD-95-73, May 8, 1995). Financial Management: Financial Control and System Weaknesses Continue to Waste DOD Resources and Undermine Operations (GAO/T-AIMD/NSIAD-94-154, Apr. 12, 1994). Financial Management: Strong Leadership Needed to Improve Army’s Financial Accountability (GAO/AIMD-94-12, Dec. 22, 1993). Financial Management: Army Real Property Accounting and Reporting Weaknesses Impede Management Decision-Making (GAO/AIMD-94-9, Nov. 2, 1993). Financial Management: Defense’s System for Army Military Payroll Is Unreliable (GAO/AIMD-93-32, Sept. 30, 1993). Financial Management: DOD Has Not Responded Effectively to Serious, Long-Standing Problems (GAO/T-AIMD-93-1, July 1, 1993). Financial Audit: Examination of the Army’s Financial Statements for Fiscal Years 1992 and 1991 (GAO/AIMD-93-1, June 30, 1993). 
Financial Audit: Examination of the Army’s Financial Statements for Fiscal Year 1991 (GAO/AFMD-92-83, Aug. 7, 1992). Financial Management: Immediate Actions Needed to Improve Army Financial Operations and Controls (GAO/AFMD-92-82, Aug. 7, 1992). Financial Audit: Aggressive Actions Needed for Air Force to Meet Objectives of the CFO Act (GAO/AFMD-92-12, Feb. 19, 1992). Financial Audit: Status of Air Force Actions to Correct Deficiencies in Financial Management Systems (GAO/AFMD-91-55, May 16, 1991). Financial Audit: Financial Reporting and Internal Controls at the Air Logistics Centers (GAO/AFMD-91-34, Apr. 5, 1991). Financial Audit: Air Force’s Base-Level Financial Systems Do Not Provide Reliable Information (GAO/AFMD-91-26, Jan. 31, 1991). Financial Audit: Financial Reporting and Internal Controls at the Air Force Systems Command (GAO/AFMD-91-22, Jan. 23, 1991). Locations Performing Finance and Accounting Activities DOD Infrastructure: DOD Is Opening Unneeded Finance and Accounting Offices (GAO/NSIAD-96-113, Apr. 24, 1996). DOD Infrastructure: DOD’s Planned Finance and Accounting Structure Is Not Well Justified (GAO/NSIAD-95-127, Sept. 18, 1995). Military Bases: Analysis of DOD’s 1995 Process and Recommendations for Closure and Realignment (GAO/NSIAD-95-133, Apr. 14, 1995). Defense Infrastructure: Enhancing Performance Through Better Business Practices (GAO/T-NSIAD/AIMD-95-126, Mar. 23, 1995). Military Bases: Analysis of DOD’s Recommendations and Selection Process for Closures and Realignments (GAO/NSIAD-93-173, Apr. 15, 1993).
Pursuant to a congressional request, GAO provided information on the Department of Defense's (DOD) management of its financial operations, focusing on: (1) DOD's rationale for creating the Defense Finance and Accounting Service (DFAS); (2) the current size of the DOD finance and accounting infrastructure; and (3) the various finance and accounting activities performed by DOD personnel. GAO reported that: (1) Before fiscal year 1991, the military services and defense agencies independently managed their finance and accounting operations. Because these decentralized operations were highly inefficient and failed to produce reliable information for decision makers, DOD created DFAS to consolidate, standardize, and integrate finance and accounting operations. DFAS inherited 26,000 finance and accounting personnel but about 18,000 personnel remained with the military services to perform managerial accounting and customer service activities at local installations and bases. (2) By the end of fiscal year 1998, DFAS expects that all 332 installation-related finance and accounting offices will be closed and their operations transferred to 5 large centers and no more than 21 new operating locations. This consolidation will help DFAS, between fiscal years 1996 and 2000, reduce its budget from $1.64 billion to about $1.47 billion (in constant 1996 dollars); its personnel from 23,500 to about 20,000; and the number of finance and accounting systems from 217 to about 110. The military services reported that they still have close to 17,000 personnel in their finance and accounting network and are not planning any specific reductions. (3) DOD's finance and accounting activities are generally divided into 9 functional areas (accounting, payroll, contract payments, etc.). Improving these areas is an enormous task, involving the replacement of many antiquated systems and processes. 
The task is even more difficult considering the volume of transactions that must continue to be processed while improvements are being made. Annually, for example, DOD disburses around $260 billion on 17 million invoices, 6 million payroll accounts, and 2 million travel vouchers.
Background Regulatory agencies have authority and responsibility for developing and issuing regulations. The basic process by which all federal agencies develop and issue regulations is spelled out in the APA. This act establishes procedures and broadly applicable federal requirements for informal rulemaking, also known as notice and comment rulemaking. Among other things, the APA generally requires agencies to publish a notice of proposed rulemaking in the Federal Register. After giving interested persons an opportunity to comment on the proposed rule by providing “written data, views, or arguments,” the agency may then publish the final rule. In addition to the requirements under the APA, an agency may also need to comply with requirements imposed by other statutes. The APA has been in place for more than 60 years, but most other statutory requirements on rulemaking have been imposed more recently. OMB is responsible for the coordinated review of agency rulemaking to ensure that regulations are consistent with applicable law, the President’s priorities, and the principles set forth in executive orders, and that decisions made by one agency do not conflict with the policies or actions taken or planned by another agency. OMB also provides guidance to agencies. Some form of centralized review of rules by the Executive Office of the President has existed for over 30 years. OIRA was created within OMB by the PRA and given substantive responsibilities for reviewing and approving agencies’ information collection requests. Since 1981, various executive orders also gave OIRA substantive regulatory review responsibilities. OIRA’s current regulatory review responsibilities are detailed in Executive Order 12866 related to regulatory planning and review. 
The order states that OIRA is to be the “repository of expertise concerning regulatory issues.” Under Executive Order 12866, OIRA reviews agencies’ significant regulatory actions and is generally required to complete its review within 90 days after an agency formally submits a draft regulation. Each agency provides OIRA a list of its planned regulatory actions, indicating those that the agency believes are significant. After receipt of this list, the Administrator of OIRA may also notify the agency that OIRA has determined that a planned regulation is a significant regulatory action within the meaning of the Executive Order. The order defines significant regulatory actions as those that are likely to result in a rule that may 1. have an annual effect on the economy of $100 million or more or adversely affect in a material way the economy, a sector of the economy, productivity, competition, jobs, the environment, public health or safety, or State, local, or tribal governments or communities; 2. create a serious inconsistency or otherwise interfere with an action taken or planned by another agency; 3. materially alter the budgetary impact of entitlements, grants, user fees, or loan programs or the rights and obligations of recipients thereof; or 4. raise novel legal or policy issues arising out of legal mandates, the President’s priorities, or the principles set forth in Executive Order 12866. The order further directs executive branch agencies to conduct a regulatory analysis for economically significant regulations (generally those rules that have an annual effect on the economy of $100 million or more). OIRA historical data show that since 1994 (the first full calendar year that Executive Order 12866 was in effect), approximately 15 percent of the rules that OIRA reviewed were economically significant. For other significant rules, the order requires agencies to provide an assessment of the potential costs and benefits of the rule. 
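The Executive Order 12866 screening logic described above can be summarized in a short sketch. This is an illustrative simplification, not an official test: the field names are invented, and the first criterion is collapsed to the $100 million threshold (the order's full text also covers material adverse effects on the economy, jobs, health, and governments).

```python
from dataclasses import dataclass

# $100 million annual effect on the economy (E.O. 12866 criterion 1, simplified).
ECONOMICALLY_SIGNIFICANT_THRESHOLD = 100_000_000

@dataclass
class DraftRule:
    annual_economic_effect: float       # estimated annual effect, in dollars
    interferes_with_other_agency: bool  # criterion 2
    alters_entitlements: bool           # criterion 3
    raises_novel_issues: bool           # criterion 4

def is_significant(rule: DraftRule) -> bool:
    """A rule is a 'significant regulatory action' if it meets any of the
    four criteria, triggering formal OIRA review."""
    return (
        rule.annual_economic_effect >= ECONOMICALLY_SIGNIFICANT_THRESHOLD
        or rule.interferes_with_other_agency
        or rule.alters_entitlements
        or rule.raises_novel_issues
    )

def is_economically_significant(rule: DraftRule) -> bool:
    """Economically significant rules (roughly the $100 million test) require
    a full regulatory analysis, not just a cost-benefit assessment."""
    return rule.annual_economic_effect >= ECONOMICALLY_SIGNIFICANT_THRESHOLD
```

Note how the two tiers nest: every economically significant rule is significant, but a rule with a small dollar impact can still be significant if it raises novel legal or policy issues.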
The executive order also contains several transparency provisions that require both OIRA and agencies to disclose certain information about the OIRA review process. For example, the order requires agencies to publicly identify substantive changes made to the draft at OIRA’s suggestion, and it requires OIRA to disclose information about communications between OIRA and persons not employed by the executive branch pertinent to rules under OIRA’s review. The transparency requirements are discussed in more detail later in this report. In addition to the responsibilities that OIRA exercises, OMB also provides guidance to agencies on regulatory requirements. In 2003, for example, OMB issued revised analytical guidelines for agencies to use in assessing the regulatory impact of economically significant regulations in OMB Circular No. A-4. In issuing the guidelines, OMB cited several significant changes from its previous economic guidance. They included placing a greater emphasis on cost-effectiveness analysis, using formal probability analysis to assess uncertainty for rules with more than a billion-dollar annual impact on the economy, and conducting a more systematic evaluation of qualitative as well as quantified benefits and costs. In addition, OMB’s guidelines recommend that agencies estimate net benefits using a range of discount rates instead of a single discount rate. In 2004, OMB issued its Final Information Quality Bulletin for Peer Review, which established governmentwide guidance for conducting peer reviews of government science documents. The bulletin directs agencies, including independent regulatory agencies, to subject “influential scientific information,” such as original data and formal analytic models used in regulatory impact assessments, to an appropriate level of peer review. New administrations generally reexamine the rulemaking process and OMB’s role in the process. 
Most recently, in a memorandum of January 30, 2009, President Obama directed the Director of OMB, in consultation with representatives of regulatory agencies, to produce within 100 days a set of recommendations for a new executive order on federal regulatory review. The memorandum stated that the recommendations should offer suggestions for, among other things, the relationship between OIRA and the agencies; provide guidance on disclosure and transparency; encourage public participation in agency regulatory processes; offer suggestions on the role of cost-benefit analysis; address the role of distributional considerations, fairness, and concern for the interests of future generations; identify methods of ensuring that regulatory review does not produce undue delay; clarify the role of the behavioral sciences in formulating regulatory policy; and identify the best tools for achieving public goals through the regulatory process. Agencies Have Limited Data on the Time and Resources Used to Address Regulatory Requirements in Their Rulemaking Processes All agencies’ rulemaking processes share three basic steps or phases: initiation of rulemaking actions, development of proposed rules, and development of final rules. Built into agencies’ rulemaking processes are opportunities for internal and external deliberations and reviews. Figure 1 provides an overview of these regulatory development steps. During initiation, agency officials identify issues that may potentially result in a rulemaking. Potential rulemakings may result from statutory requirements or issues identified through external sources (for example, public hearings or petitions from the regulated community) or internal sources (for example, management agendas). During this phase, agencies gather information that would allow them to determine whether a rulemaking is needed and to identify potential regulatory options. 
At this time, the agencies will also identify the resources needed for the rulemaking and may draft concept documents to present to agency management that summarize the issues, present the regulatory options, and identify needed resources. Agency officials reported that this initial work on a rule is of indeterminate length and sometimes constitutes a major portion of the process. While the point at which a rulemaking officially commences may vary by agency and rule, as a general matter, rulemaking begins only after management receives, reviews, and approves the concept document. At the latest, according to OIRA, the rulemaking will officially commence when agency officials assign a Regulation Identifier Number (RIN) for the proposed rule. The second phase of the rulemaking process starts when an agency begins developing the proposed rule. During this phase, an agency will draft the rule, including the preamble (which is the portion of the rule that informs the public of the supporting reasons and purpose of the final rule) and the rule language. The agency will also begin to address analytical and procedural requirements in this phase. Agency officials pointed out that these initial analyses form the basis for other analyses completed later in the process, including those prepared to address statutory and executive order requirements. Agency officials stated that development of a rule is a coordinated effort, with economists, lawyers, and policy and subject matter experts contributing to individual rulemakings. Also built into this phase are opportunities for internal and external deliberations and reviews, including official management approval. OIRA may be involved informally at any point during the process. For each rule identified by the agency as, or determined by the Administrator of OIRA to be, a significant regulatory action, the agency submits the rule to OIRA for formal review— including the coordination of interagency review. 
After OIRA completes its review and the agency incorporates resulting changes, the agency publishes the proposed rule in the Federal Register for public comments. In the third phase of the process, the development of the final rule, the agency repeats, as needed, the steps used during the development of the proposed rule. Once the comment period closes, the agency responds to the comments either by modifying the rule to incorporate the comments or by addressing the comments in the final rule. This phase also includes opportunities for internal and external review. Again, if the agency determines that the rule is significant or at OIRA’s request, the agency submits the rule to OIRA for review before publication of the final rule. In the event that OIRA’s review results in a change to the final rule, the agency will revise the rule before publishing it in the Federal Register. Officials noted that addressing analytical and procedural requirements is faster with a draft of the proposed rule in hand, but analyses may need to be modified if public comments change the rule substantially. The final rule as published in the Federal Register includes the date that the rule becomes effective. While the agencies share these basic process steps, there are inter- and intra-agency variations in the management of the rulemaking process. For example, EPA’s OW generally designates one rulemaking team member as the point of contact through the development of the rule. Officials at FDA stated that their point of contact may change during the course of the rulemaking based on the rule’s development phase; as the office working on the rule within the agency changes so does the point of contact. Agencies also have differing numbers of required internal reviews, and they may complete tasks within a phase in a different sequence. 
Agencies Identified Milestones for Regulatory Development Officials we met with described agency-specific processes for regulatory development and addressing the procedural and analytical requirements that applied to their respective rulemakings. These included identification of “milestones,” significant events or stages in the agency-specific process. Officials also identified some milestones common to the rulemaking process that apply across the federal government, such as publishing the proposed and final rule. In addition, officials at the agencies we spoke with identified the following agency-specific internal milestones in their regulatory development process. DOT officials at FAA and NHTSA identified common milestones, including development of a draft concept, management reviews within administrations, review by the Secretary’s Office, and external review. EPA identified 14 milestones for nonroutine rulemakings from initiation to publication of the proposed rule, including assigning a working group, development and approval of an Analytic Blueprint, and management reviews. After the proposed rule is published, EPA tracks an additional 4 to 5 milestones to develop the final rule. FDA officials emphasized that regulatory development is similar throughout FDA. CFSAN and CDER used milestones such as assigning a working group, drafting, and conducting analyses and management clearances. SEC officials stated that CF and IM identified generation of the rule (typically including public input), drafting, and approval by the commissioners as common milestones. During the course of our review, only DOT provided us with data that showed it routinely tracked these milestones. In comments on our draft report, EPA and FDA subsequently provided some documentation and data that showed that these agencies also routinely tracked milestones. However, we had requested this information during our review to address our second research question, but the information was not forthcoming. 
Also, they did not provide this information at the conclusion of our review either in response to our statements of facts to the agencies or at our exit conference with the agencies. Because this information was not provided at the time of our review, we did not have the opportunity to discuss how the information is used, whether it is useful, and, most importantly, if it could be used to respond to our report objectives. We did not audit the new information provided because of an approaching deadline for OMB to provide recommendations to the President for a new executive order on federal agency regulatory review. DOT sets target dates for major milestones, such as when the rule is scheduled to move to OIRA for review. The agency uses the Rulemaking Management System to monitor both the target and actual dates for these major milestones in the development of rules for internal management decision-making purposes. This allows DOT regulatory development staff and managers to identify a rule’s status and determine if the rule is on or behind schedule, based on target dates. Some of the information captured in the system includes the stage of the rulemaking (proposed rule, final rule), a schedule of milestones with target and actual dates, and an explanation for any delay. For significant regulatory actions, DOT makes some milestone tracking information publicly available through the Report on DOT Significant Rulemakings available on the agency’s web site. Among the objectives that DOT officials attributed to their tracking and reporting efforts were that they provide opportunities to assess their schedule estimates and improve internal and external accountability. An official noted, for example, that tracking and posting the information helped the agency identify best estimates of schedules. 
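The target-versus-actual comparison at the heart of a tracking system like DOT's can be sketched in a few lines. The function below is hypothetical and invented for illustration; it is not drawn from the Rulemaking Management System itself, only from the report's description of comparing target and actual milestone dates.

```python
from datetime import date
from typing import Optional

def days_behind(target: date, actual: Optional[date], today: date) -> int:
    """How many days a milestone is behind its target date (0 if on schedule).

    For a completed milestone, compare the actual date against the target;
    for a pending one, compare today against the target."""
    reference = actual if actual is not None else today
    return max(0, (reference - target).days)
```

A report built on such a function lets managers flag rules that have slipped past their targets and ask for the explanation of delay that systems like DOT's record.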
EPA uses an internal tracking database—Rule and Policy Information and Development System (RAPIDS)—that contains both projected and actual dates for meeting major milestones in rule development. This database provides background information and a timetable on each stage of the rulemaking, including a schedule of the milestones with both projected and actual dates. EPA uses the data in RAPIDS to develop management reports used by EPA managers and executives as a planning and tracking tool. The reports also serve as a tool to improve the regulatory development process. EPA makes this tracking information available to the public on a quarterly basis. FDA also tracks regulatory milestones through its Federal Register Document Tracking System (FRDTS). FRDTS is an internal, Web-based integrated system that allows FDA to track the preparation and approval process of documents to be submitted to the Federal Register for publication. FDA tracks the approval process for all rulemakings; however, the system only tracks milestones for the latter stages of a rulemaking’s development and clearance process. FDA management uses the information in FRDTS as one tool in the management of its regulatory development process. SEC did not have such systematic tracking and reporting of its scheduled and actual regulatory milestones. However, implementing such a system is consistent with standards for internal control. According to the standards, for an agency to run and control its operations, it must have relevant, reliable information relating to external as well as internal events. Moreover, an acknowledged expert on the regulatory process has stated that a myriad of formal requirements and political expectations requires sophisticated management of the rulemaking process. 
Among other functional requirements, he noted that scheduling and budgeting for rulemaking are useful tools for officials to manage regulation development and control the resources needed to complete a rule. Monitoring and assessing actual performance against planned targets, identifying reasons for missed targets, and adjusting resource requirements to fit conditions based on actual experience are among the ways that agencies could use their rulemaking plans and milestones as management control and accountability tools. Length of Time Required to Issue a Rule Varies by Agency and Rule with Few Common Characteristics We found variation in the length of time required for the development and issuance of final rules both within and among agencies. In general, agency officials agreed that the publication of the final rule marked the end of the rulemaking process. In contrast, identifying when a rulemaking begins is less definite. All agencies identified milestones that marked the initiation of a rulemaking in their agencies, but also asserted that agency staff sometimes worked on certain issues related to the rulemaking years before commencement of the actual rulemaking, either as part of earlier, related rulemakings or policy development for the rule. Based on agency milestones, officials we spoke with at three of the four agencies provided estimates for the length of time for an average rulemaking. FDA officials estimated that a straightforward rulemaking may take 3½ to nearly 4 years from initiation to final publication. DOT officials estimated approximately 1½ years from the end of the public comment period following the publication of the proposed rule to the final rule. SEC officials estimated that some rules are completed within 6 months of publication of a proposed rule to final rule. EPA officials declined to provide an estimate for an average rulemaking at their agency, stating that there is too much variation. 
However, some agency officials emphasized that the time required to issue any given rule could vary from these estimates, as illustrated by the 16 case-study rules we reviewed (see fig. 2). Using two rulemaking milestones common among federal agencies—publication in the Federal Register of the proposed rule and publication of the final rule—the length of time for our 16 case-study rules ranged from approximately half a year (SEC’s Mutual Fund Data Reporting rule) to nearly 5 years (FDA’s Physician Labeling rule). As an illustration of the time investment in regulatory development—as measured from agency-specific internal milestones—the entire rulemaking process for the same two rules ranged from slightly over 1 year to 13 years, respectively. Overall, the average time from initiation to final publication of a rule for our 16 case-study rules was just over 4 years, with the average time for the 4 SEC rules just over 1 year, the DOT rules taking an average of just over 3 years, and the EPA and FDA rules taking longer than the overall average at 5½ and 7 years, respectively. For most of our case-study rules, the time to develop the proposed rule was at least as much as the time between publication of the proposed and final rules. We also found that the complexity or magnitude of a major rule did not explain all or most variation, as some case-study rules that were not major took nearly as long or longer to be published. (For more detailed information about the timelines for each of the case-study rules, see app. II.) During our review we identified multiple factors that influence the time needed to issue a rule, including the complexity of the issues addressed by the rulemakings; prioritizations set by agency management that can change when other priorities, such as new congressional mandates, arise; and the amount of internal and external review required at the different phases of the rulemaking process. 
Some agency officials said that rulemaking for complex topics or for rules that raise new issues typically takes longer to complete than for routine rules. Rules that are a management priority or have a statutory or judicial deadline may move more quickly through the rulemaking process, while other rules may be set aside as agency staff members work on other matters. Also, rules that require OIRA and interagency review typically need additional time for the external review process and, according to some agency officials, trigger additional internal scrutiny. The priorities and the pace of rulemaking have also been affected during transitions in presidential administrations. Since 1948, rulemaking activity has increased in the last months of every outgoing administration in which the party in control of the White House changed. In every such transition since 1981, the incoming administration has recommended that agencies delay effective dates to allow reconsideration of rules published at the end of the previous administration. Agencies Have Limited Information on Resources Used in Regulatory Development There is little rule-specific tracking of resources used in regulatory development. Agency officials were unable to identify the staffing or other resources (such as contracting costs associated with preparing expert analyses or convening public meetings) for regulatory development for all rules or for the limited number of case-study rules. As noted above, internal control standards call for relevant, reliable, and timely information. Regarding staffing, such management information was not available from the agencies we audited. Officials were able to generally describe how they staff rulemaking, noting that rulemaking is a coordinated effort, with many individuals from throughout the agency contributing to specific rulemakings. 
However, none of the agencies routinely tracked staff time associated with rulemakings or were able to provide records of staff time devoted to case-study rulemakings or supporting analyses. Moreover, agency officials stated that because many staff within the agencies with different job functions—attorneys, economists, programs staff, as well as regulatory developers—contribute to individual rules, they could not provide after-the-fact estimates. However, based on their experiences, the agencies’ officials identified ways they have explored using existing agency information for purposes of improving management of the rulemaking process. For example, according to a DOT official responsible for rulemaking, having reliable data on how long it takes to publish a proposed rule and identifying where the time is spent is critically important for reengineering the process. The official also pointed out that while the agency knows that the time of some support staff is all devoted to rulemaking, there is no data for tracking core program offices’ involvement in technical and lead roles in the rulemaking process. The official noted that these data could help determine how much of certain staff members’ time is reasonable and how many additional staff days would be needed to speed up the process. EPA officials stated that they use the tracking information in the agency’s RAPIDS system to help identify best practices as well as identify those actions that need corrective measures. FDA officials stated that they track the rulemaking process from a project management perspective. According to these officials, this allows the agency to identify areas that appear to “logjam” the process and to develop mechanisms that would improve the rulemaking process. In addition, FDA officials stated that agency management meets to discuss the major milestones reported in the FRDTS to identify areas of improvement in the regulatory development process. 
SEC officials also track some data from a project management perspective. Officials stated that the former SEC Chairman was interested in knowing how long it takes to conduct economic analyses. During the course of our review, EPA, FDA, and SEC did not document these claims or provide specific examples of how this information was used to improve their rulemaking processes. However, in response to our draft report, EPA and FDA provided both documentation and examples to support the information provided above. Agency officials described additional details about their tracking efforts and the limitations of this information. One DOT agency, FAA, is working on a system called Labor Distribution Reporting (LDR) that allows all FAA managers and employees to better understand the staff resources required for achieving agency goals and objectives. However, using LDR to assess the staff time devoted to rulemaking has proven challenging for FAA officials responsible for rulemaking activity. Those officials said that the data generated by LDR have not been reliable and the information does not necessarily track rulemaking activity as distinct from other staff responsibilities. Additionally, attempts to aggregate LDR information by rule proved unwieldy because these costs are embedded in agency-wide systems that capture information at a higher level and for multiple purposes. FAA is reengineering its use of LDR by consolidating and simplifying the codes used, with the goal of providing managers with reliable information on how staff time is used and how it contributes to meeting agency performance targets. SEC’s CF also tried to do more detailed staff time coding for each rulemaking project but has now instituted more generic tracking by category of activity. A CF official also pointed out that CF’s coding does not cover all groups within SEC that contribute to rulemaking projects. Another SEC official from the Division of Trading and Markets identified a similar tracking challenge. 
Although the division has specific rulemaking offices organized by subject area, not all the work done by these offices is related to rulemaking. Similarly, there is limited information available regarding contract costs. At three of the agencies in our case study—DOT, FDA, and SEC—analytical and procedural requirements typically are addressed in-house by agency staff. One agency, EPA, regularly supplements in-house staff responsible for regulatory development by hiring contractors to conduct analyses as part of the regulatory development process. The agency does not track these costs by rulemaking; however, EPA officials were able to identify some of the costs associated with regulatory development for four case-study rules. For the two major OW rules—the Surface Water Treatment 2 rule and the Disinfection Byproducts 2 rule—the costs related to expert advisory panels, public meetings, travel, and regulatory analyses for the microbial pathogens and disinfection byproducts cluster of rules (a total of seven rules, including the two case-study rules) were more than $13 million. As mentioned above, EPA did not track the costs of each of these rules separately throughout the course of rule development. Identified costs for the two nonmajor rules—the Ethylene Oxide Emissions rule and the Hazardous Air Pollutants rule—were lower: $100,000 and $780,000, respectively. EPA officials told us that funding for regulatory analyses comes from a central budget within the program developing the rule. As stated earlier, while it is difficult to pinpoint when the initiation phase of rulemaking begins, agency officials reported that developing proposed rules constitutes a significant portion of regulatory development, so that much of the resource investment in a rulemaking occurs prior to publication of the proposed rule. 
For example, NHTSA’s Fuel Economy-Light Trucks rule was just one in a series of related rulemakings that dated back to the 1970s, informed by decades of work by that agency. Also, EPA’s Disinfection Byproducts 2 rule and the Surface Water Treatment 2 rule were based on the work of an expert panel convened under the Federal Advisory Committee Act years before these rules were proposed. Another example mentioned by EPA officials was a stormwater management rule for the oil and gas industry that supplemented and was partially based on an earlier nationwide, multi-industry rulemaking. Reviews of published rules or rulemaking dockets provide little information on the resources and level of effort needed for agencies to comply with specific rulemaking requirements. As pointed out by DOT officials, neither the text of the rule nor the materials in the agency’s rulemaking docket may indicate all of the resources and the decision-making process the agency used in making such determinations. Specifically, they noted that dockets would not necessarily include copies of the underlying analyses if the agency concluded that the rule did not trigger that requirement. Also, each requirement that a rule triggers does not necessarily require preparation of a separate analysis. Agencies can use some analyses to address more than one requirement (for example, using one benefit-cost analysis to address multiple analytical requirements). Even if costs and resources were tracked at the individual rule level, the resources used to meet general rulemaking requirements would still need to be captured to provide a complete picture. Rulemaking is an integral part of agency business, and it would be hard to separate all that is done for rulemaking from what also contributes to other operational and policy decisions. 
Further, given the many other demands on scarce agency resources, the resources that would be needed to develop reliable tracking data on individual rules or requirements must be weighed against other investments. Many of the Rules GAO Reviewed Triggered Few of the Broadly Applicable Rulemaking Requirements When issuing major rules, agencies must generally comply with the APA and a number of other broadly applicable procedural and analytical requirements specified in law. However, one statutory requirement, the Unfunded Mandates Reform Act of 1995 (UMRA), does not apply to independent regulatory agencies. EPA and the Occupational Safety and Health Administration must also comply with a requirement to convene a small business advocacy review panel under the Small Business Regulatory Enforcement Fairness Act if their rules would have a significant economic impact on a substantial number of small entities—business or governmental. Further, agencies other than independent regulatory agencies are also subject to requirements in executive orders. In addition to the broadly applicable requirements, agencies may need to comply with agency- or program-specific requirements established by other statutes. Table 1 lists the 17 broadly applicable statutes and executive orders with rulemaking requirements that were cited by 10 or more major rules that we reviewed for this report. For each we provide a high-level characterization of agencies’ responsibilities under the requirement. (App. I provides additional information about these requirements.) Our review of the 139 major rules published from January 2006 through May 2008 showed that many of the procedural and analytical requirements that generally apply to the agencies were not triggered by specific rules. 
An agency may not need to include specific analyses if the substance of the rule or exceptions, exclusions, and thresholds in the requirement lead the agency to determine that the requirement was not triggered by a specific rule. For example, if the substance of a rule includes no new information collections, the agency would not have to estimate burden hours under the PRA. Exceptions and exclusions can be either categorical or statute specific. As an example of a categorical exception, the APA notice-and-comment requirements do not apply to any rule regarding a military or foreign affairs function of the United States. Also, the RFA and UMRA requirements do not apply if a rule was exempt from the APA notice-and-comment requirements. An example of a statute-specific exemption is section 1601(c) of the Farm Security and Rural Investment Act of 2002 that generally exempted rules issued by the Department of Agriculture’s Commodity Credit Corporation from the APA and the PRA requirements. The applicability of certain requirements is also limited by threshold provisions. For example, agencies only need to prepare UMRA “written statements” for rules that the agencies believe include a federal intergovernmental or private sector mandate that may result in expenditures of $100 million or more (adjusted for inflation) in any year. Taken as a whole, an agency may need to work through a series of determinations for each rule under each requirement regarding substance, exceptions, and thresholds. In the 139 major rules we reviewed, the agencies mentioned at least 29 different broadly applicable requirements, but most rules actually triggered only a handful of the requirements. Our review of the major rules showed that in addition to the procedural requirements under the APA and the CRA, the only analytical requirements triggered by 45 percent or more of all rules were the PRA, the RFA, and Executive Order 12866. 
Collectively, however, the requirements resulted in agencies providing at least some information on the costs, benefits, or both associated with 91 percent of the major rules. Agencies also frequently cited the UMRA and Executive Order 13132 related to federalism, but the rules seldom triggered those requirements. Figure 3 illustrates how often the agencies stated that their rules triggered the most commonly cited analytical requirements, according to our major rules reports. Similarly, the 16 case-study rules that we reviewed triggered some, but not all, of the broadly applicable requirements. All 16 of the rules discussed requirements under the PRA and the RFA, but only 11 triggered a PRA analysis and 10 an RFA analysis. Except for the rules promulgated by SEC, all of the rules discussed UMRA and Executive Order 13132 requirements; however, only 4 rules triggered an UMRA analysis and 3 a federalism analysis. The 4 case-study rules from DOT, EPA, and FDA that were economically significant rules also included quantitative cost-benefit analyses required under Executive Order 12866. Figure 4 illustrates how often the agencies stated that the case-study rules triggered the most commonly cited analytical requirements. (See the case studies in app. II for further details on the regulatory requirements addressed by each rule.) Agencies Reported That Recent Regulatory Requirements Presented Some Challenges Initially Agency officials from case-study agencies reported that in some instances, the new requirements imposed by OMB since 2003 were challenging initially, requiring additional time and resources. However, some officials noted that these recent requirements reflected practices that some agencies had already adopted. 
For example, before the 2004 issuance of the Peer Review Bulletin, FDA had routinely circulated the regulatory impact analyses of economically significant rules for peer review before submitting the rules to OMB; although these regulatory impact analyses are excluded from the requirements of the Bulletin, FDA continues to circulate them for peer review. FDA officials agreed that the revised OMB Circular No. A-4 helped to clarify expectations for their economic analyses and in some cases resulted in less time needed for OMB review and greater confidence in the regulatory choices. Initially incorporating certain aspects of the new requirement for formal probability analysis to assess uncertainty lengthened the rulemaking process for one agency. NHTSA officials reported that prior to issuance of revised Circular No. A-4—which requires formal probability analysis to assess uncertainty for rules with more than a billion-dollar annual impact on the economy—NHTSA used a simpler form of uncertainty analysis, called sensitivity analysis, rather than the more formal probability analysis. When NHTSA performed its first probability analysis under OMB Circular No. A-4, it took several weeks to complete and required contracting for resources outside the agency—not a typical practice for NHTSA. In addition, to meet its statutory deadline, the agency sent the rule to OIRA for review before completing the probability analysis. In contrast, EPA had incorporated probability analysis into its regulatory development process on a case-by-case basis prior to the issuance of Circular No. A-4 and therefore did not find that requirement challenging. Officials from case-study agencies identified two long-standing analytical and procedural requirements, the PRA regarding information collections and the RFA regarding analysis of rules’ effects on small entities, as having had more significant effects on time and resources than the more recent requirements. 
Some officials said that these requirements add time to the rulemaking process and may even work at cross-purposes during the course of regulatory development. Agency officials at FDA and SEC reported that compliance with the PRA information collection requirements may add a year or more to the timeline of regulatory development. As a result, rather than gather new information to support a rulemaking, agency officials will sometimes rely on existing information, information available from a more limited number of sources, or information gathered through public notices. This can make it more difficult to determine the effect of a regulation on small entities that may not be represented by a small sample of interested parties or respond to public notices. FDA officials stated that the agency posts on its Web site a “Dear Colleague” letter alerting the small business community to the rulemakings listed in the semi-annual Unified Agenda and Regulatory Plan that may affect small business. This letter explains how to contact the agency and encourages small businesses to become involved early in the rulemaking process. However, it can still be difficult to determine the effect of a regulation on small entities. According to the agency officials, this requires agencies to either move forward with available information or go through time-consuming approvals for information collections under the PRA. OIRA’s Role in the Rulemaking Process Could Be More Transparent Our review of 12 DOT, EPA, and FDA rules submitted to OIRA for formal review under Executive Order 12866 indicated that for 10 of the 12 rules, the agencies identified OIRA changes to the rules. Using the same basic methodology as in our 2003 report on the effect of OIRA’s review process, we used a variety of information sources (such as agency and OIRA docket materials and interviews with agency officials) to classify the most significant level of changes attributed to OIRA’s review. 
For each of the 12 rules, we classified the level of OIRA changes into one of the following three categories: Significant changes. Rules in which the most significant changes affected the scope, impact, or estimated costs and benefits of the rules as originally submitted to OIRA. Usually, these changes were made to the regulatory language that would appear in the Code of Federal Regulations and be legally binding. For example, in an FDA rule on dietary supplements, OIRA suggested a change in the regulatory language to reduce the number of years required to save a reserve sample. However, revisions to a cost-benefit analysis could also be significant because they affect the reported impact of a rule. Other material changes. Rules in which the most significant changes resulted in the addition or deletion of material in the explanatory preamble section of the rule. For example, in a DOT rule on event data recorders, OIRA suggested a change in the explanatory language clarifying that crash investigators and researchers are able to download data from the recorders. Minor or no OIRA changes. Rules in which there were no changes made to the draft rule, the most significant changes attributed to OIRA’s suggestions resulted in editorial or other minor revisions, or any changes in the rule prior to publication were not at the suggestion of OIRA. As shown in table 2, we determined that OIRA suggested “significant” changes for 4 of the 12 case-study rules submitted for Executive Order 12866 reviews, “other material” changes for 4 of the rules, and “minor” changes for 2 of the rules. OIRA did not suggest any changes for the remaining 2 rules. Of the 4 rules that had significant changes, 2 were rules developed and promulgated by EPA and 1 each by DOT and FDA. In addition, 3 of the 4 rules with significant changes were major rules. (See the case studies in app. II for further details on the changes made to rules reviewed by OIRA.) 
Documentation of OIRA Review Was Sometimes Incomplete Executive Order 12866 requires both agencies and OIRA to disclose to the public certain information about OIRA’s regulatory reviews. After the regulatory action has been published in the Federal Register or otherwise issued to the public, an agency is required to (1) make available to the public the information provided to OIRA in accordance with the executive order; (2) identify for the public, in a complete, clear, and simple manner, the substantive changes between the draft submitted to OIRA and the action subsequently announced; and (3) identify for the public those changes in the regulatory action that were made at the suggestion or recommendation of OIRA. The order requires OIRA to maintain a publicly available log that includes the following information pertinent to rules under OIRA’s review: (1) the status of rules submitted for OIRA review, (2) a notation of all written communications received by OIRA from persons not employed by the executive branch, and (3) information about oral communications between OIRA and persons not employed by the executive branch. After the rule has been published or otherwise issued to the public (or the agency has announced its decision to not publish or issue the rule), OIRA is required to make available to the public all documents exchanged between OIRA and the agency during the review by OIRA. An OIRA official also pointed out that OIRA does not monitor, on a rule-by-rule basis, compliance by rulemaking agencies with their disclosure obligations under Executive Order 12866. The case-study agencies generally met the executive order’s requirements to disclose materials they provided to OIRA and substantive changes made during OIRA’s review. In contrast to our study in 2003, all the agencies we reviewed for this report had documentation of OIRA’s reviews. However, the documentation could be improved for greater transparency. 
Executive Order 12866 does not specify how agencies should document the changes made to draft rules after their submission to OIRA, nor is there any governmentwide guidance that directs agencies on how to do so. Nonetheless, some of the documentation on OIRA’s changes was very clear, but in other cases additional efforts were required to interpret the information. As we found in 2003, the agencies did not always clearly attribute changes made at the suggestion of OIRA, and agencies’ interpretations were not necessarily consistent regarding what constitutes a substantive change that should be documented to comply with the executive order transparency requirements. For example, different departments within one agency had varied interpretations, with one office only considering those changes made to regulatory text as substantive. Table 3 provides summary information about the type and nature of agencies’ documentation to address Executive Order 12866 transparency requirements for the 12 case-study rules reviewed by OIRA. (See app. III for examples of agencies’ OIRA review documentation.) As we also found in 2003, agencies sometimes included more information about OIRA’s review than required, and we found such information useful to more clearly explain what had occurred. For example, EPA included copies of messages from OIRA outlining suggested changes to draft rules under review. EPA also included a memo to the docket summarizing the subjects discussed with OIRA at a meeting about one of the case-study rules. In the case of one rule that was unchanged by OIRA, FDA’s docket included both an annotated copy of the rule returned to FDA and a memo to the file noting that there were no changes to the draft rule. 
In general, compared to our review in 2003, we found it more difficult to find agencies’ documentation of OIRA’s regulatory reviews, primarily because of difficulties using the search capabilities in the centralized electronic Federal Docket Management System at www.regulations.gov. Using the advanced docket search function as instructed by the Web site user information to first find the rule, we searched by the rule title, the RIN, and the rule Docket ID independently and as they appeared in the published version of the final rule in the Federal Register. Using those criteria, we were able to find all 4 of the DOT case-study rules submitted for OIRA review but only 1 each of the 4 FDA rules and 4 EPA rules. For the FDA case-study rules, we used paper dockets when the opportunity presented itself. Agencies’ labeling practices also sometimes made it difficult to find the relevant documentation about OIRA’s reviews. Out of 12 dockets, we were able to identify 5 of the 10 changed rules and 1 of the 2 unchanged rules by searching the docket Web pages for “12866,” “OIRA,” and “OMB.” In addition, while the agencies’ published rules stated that the rules had been reviewed by OIRA under Executive Order 12866, most of the rules did not identify whether substantive changes had been made during the OIRA review period (and therefore whether documentation of the changes should be included in the rulemaking docket). Although there is no requirement for agencies to do so, including such additional information would be consistent with how agencies discuss other rulemaking requirements in published rules and would potentially help readers navigate the docket. Such information, for example, would have more clearly identified which of our case-study rules’ dockets should include documentation of OIRA review changes. 
In response to the disclosure requirements placed on OIRA by Executive Order 12866, OIRA’s meeting logs indicated that parties not employed by the executive branch initiated meetings with OIRA regarding 7 of the 12 case-study rules, but we do not know what influence meeting discussions had on OIRA recommendations because there is no requirement for OIRA to disclose the substance of the meetings. OIRA logged a total of 10 meetings, but 2 of the meetings each concerned 2 rules. Three of these meetings occurred before formal submission of the draft rule for OIRA review. There were meetings on all 4 EPA case-study rules. As we found during our review in 2003, most of the nonfederal parties appeared to be representatives of regulated entities. In all but 2 of the meetings, the agency issuing the regulation was represented.

OIRA Implemented Only One of Eight Prior GAO Recommendations to Improve Transparency of the Regulatory Review Process

In our 2003 report on the OMB/OIRA regulatory review process, we made eight recommendations to the Director of OMB to improve the transparency of the process. OMB implemented our recommendation to improve the clarity of OIRA’s meeting log to better identify participants in OMB meetings with external parties on rules under review by disclosing the affiliations of participants. In some cases, the log also identified the clients represented. However, OIRA did not agree with the seven remaining recommendations in the 2003 report and did not implement those recommendations. We recommended that OIRA should do the following:

1. Define the transparency requirements applicable to the agencies and OIRA in Executive Order 12866 in such a way that they include not only the formal review period, but also the informal review period when OIRA says it can have its most important impact on agencies’ rules.

2. Change OIRA’s database to clearly differentiate within the “consistent with change” outcome category which rules were substantively changed at OIRA’s suggestion or recommendation and which were changed in other ways and for other reasons.

3. Reexamine OIRA’s current policy that only documents exchanged by OIRA branch chiefs and above need to be disclosed because most of the documents that are exchanged while rules are under review at OIRA are exchanged between agency staff and OIRA desk officers.

4. Establish procedures whereby either OIRA or the agencies disclose the reason why rules are withdrawn from OIRA review.

5. Define the types of “substantive” changes during the OIRA review process that agencies should disclose as including not only changes made to the regulatory text but also other, noneditorial changes that could ultimately affect the rules’ application (for example, explanations supporting the choice of one alternative over another and solicitations of comments on the estimated benefits and costs of regulatory options).

6. Instruct agencies to put information about changes made in a rule after submission for OIRA’s review and those made at OIRA’s suggestion or recommendation in the agencies’ public rulemaking dockets, and to do so within a reasonable period after the rules have been published.

7. Encourage agencies to use “best practice” methods of documentation that clearly describe those changes.

We discussed the status of these open recommendations with OIRA representatives annually since 2003 and also as part of this review, and they confirmed that OIRA had not subsequently implemented any of the seven remaining recommendations. 
As discussed above, our current review indicated that there are still opportunities to improve transparency for some of these topics, such as better identifying when agencies made substantive changes to their rules as a result of the OIRA review process, attributing the sources of changes made during the review period, and clarifying the definition of substantive changes. Other issues covered by our 2003 recommendations—such as OIRA informal reviews and disclosing why rules are withdrawn from OIRA review—did not arise during this review, but this may reflect the nature of the specific rules we reviewed and our more limited sample of case studies.

Conclusions

Federal regulatory agencies issue many rules to ensure public health and safety, protect the environment, and facilitate the effective functioning of financial markets, among other goals. Because these rules can affect so many aspects of citizens’ lives, it is crucial that rules be carefully developed and considered and that rulemaking procedures be effective and transparent. To further such goals, Congresses and Presidents have placed many procedural and analytical requirements on the rulemaking process over the years. While we and others have reported on agencies’ implementation of individual requirements, there has been little analysis of the cumulative effects these requirements have on agencies’ rulemaking processes. Our study of broadly applicable requirements illustrated the difficulties of evaluating, with limited data, the effects of regulatory requirements on the rulemaking process. To the extent that agencies had information for selected rules, that information showed considerable variation in the time required for issuing final rules that could not be explained by the number of regulatory requirements, few of which were triggered. 
Moreover, the complexity or magnitude of a major rule also did not explain all or most of the variation, as some case-study rules were not major and took nearly as long or longer to be published. This raises the question of what factors can account for the variations in rule development. While our findings point to better use of existing estimates and plans to identify opportunities to improve the rulemaking process, agencies also recognized more can be done and, in some cases, have taken steps to answer this question. We found that early in their rulemaking processes each agency identified the key milestones it needed to accomplish to produce a final rule. During the course of our review, only DOT provided data showing that it routinely tracked these milestones and reported internally and externally on the status of milestones for development of the agency’s significant rules. However, in comments on our draft report, EPA and FDA subsequently provided some documentation and data on their tracking and reporting of milestones. Although there is no single right answer for how long a rulemaking should take, monitoring actual versus estimated performance enables agency managers to identify steps in the rulemaking process that account for substantial development time and provides the information necessary to further evaluate whether the time was well spent. Although not all factors are within an agency’s control, some are. Agency officials we spoke with identified several potential benefits of monitoring and reporting, including better scheduling and increased internal and external accountability. This is also consistent with the internal control standard that an agency must have relevant, reliable information relating to external as well as internal events. Additionally, officials in three of the four agencies we audited said they would benefit from a better understanding of how staff resources are used, even though agencies’ efforts so far have produced limited results. 
Information on only one element in the rulemaking process—length of time—cannot answer whether an agency is managing well. However, such information can provide insights into the process, such as when it contributed to our efforts to determine the relative burden of various regulatory requirements. Our review of major and case-study rules indicated that the majority of the rules triggered only a few of the rulemaking requirements. The requirements that rules most often triggered are among the longest standing and broadly applicable—the PRA, the RFA, and centralized OIRA review under Executive Order 12866. The PRA and the RFA generally apply to all agencies and rules, and officials from each of the agencies where we conducted case studies cited those requirements as ones that consistently added time to the rulemaking process and required investments of agency resources. Similarly, under Executive Order 12866, we observed that a relatively small portion of rules submitted to OIRA for review have economic consequences significant enough to trigger the most rigorous analytical requirements, so any burden of compliance with those requirements is not very widespread. The majority of rules submitted for OIRA review (around 85 percent historically) are significant for reasons other than their economic impact. The case-study agencies generally met the executive order’s requirements to disclose materials they provided for OIRA’s review and substantive changes made during OIRA’s review. For those case-study agencies and rules subject to OIRA review, some agency practices were more effective than others in communicating review results. Transparency problems that we identified in the past persist, such as incomplete attribution of changes and inconsistent definitions of substantive changes among and within agencies. 
Also, unlike when addressing other regulatory requirements—where agencies typically note in the published rule whether the rule triggered the requirements—the agencies did not clearly identify in their final rules when substantive changes had been made during the OIRA review period. OIRA, as the agency responsible for providing oversight and guidance to agencies on regulatory matters, is the principal entity in a position to ensure consistent compliance across agencies if the administration retains transparency requirements regarding regulatory review.

Recommendations for Executive Action

We are making six recommendations to improve the monitoring and evaluation of rules development and the transparency of the review process. Consistent with internal control standards for information used in managing agency operations, we recommend that for significant rules the Commissioner of FDA and the Chairman of SEC routinely track major milestones in regulatory development and report internally and externally when major milestones are reached against established targets. The Administrator of EPA, the Commissioner of FDA, and the Chairman of SEC should each also evaluate actual performance versus the targeted milestones and, when they differ, determine why. 
If the current administration retains Executive Order 12866, or establishes similar transparency requirements, we recommend that the Director of OMB, through the Administrator of OIRA, take the following four actions to more consistently implement the order’s requirement to provide information to the public “in a complete, clear, and simple manner”: define in guidance what types of changes made as a result of the OIRA review process are substantive and need to be publicly identified, instruct agencies to clearly attribute those changes “made at the suggestion or recommendation of OIRA,” direct agencies to clearly state in final rules whether they made substantive changes as a result of the OIRA reviews, and standardize how agencies label documentation of these changes in public rulemaking dockets.

Agency Comments and Our Evaluation

We provided a draft of this report to the Department of Health and Human Services (HHS), DOT, EPA, SEC, and OMB. We received written comments from HHS/FDA, EPA, SEC, and OMB, which are summarized below and reprinted in appendices IV through VII. However, because EPA and FDA provided new information as part of their agency comments, we were not able to analyze the information provided or conduct follow-up discussions with agency officials prior to publication of this report. We note that we had asked for this information during our review and the agencies did not provide it at that time. Nor did they provide this information at the conclusion of our review, either in response to our statements of facts to the agencies or at our exit conference with the agencies. DOT provided only technical comments. With regard to the two recommendations directed to the rulemaking agencies, SEC stated that the Commission is committed to evaluating and improving all of its processes and will consider our recommendations as part of that process. 
With regard to our recommendation that for significant rules agencies routinely track major milestones in regulatory development and report internally and externally when major milestones are reached against established targets, FDA commented that the scope of this recommendation should be narrower and more flexible. Specifically, FDA commented that (1) the scope of tracking should be limited to only economically significant rules because FDA cannot predict with certainty what rules OMB will consider otherwise significant until close to rule clearance, (2) alternative tracking approaches should be permitted since FDA has the FRDTS, which tracks the progress of all its Federal Register documents through the latter stages of the agency’s development and clearance process, and (3) routine reporting on when major milestones are met should only be internal because reporting externally may mislead stakeholders and prompt inquiries that draw resources away from the agency’s ability to complete regulations. While we agree that some flexibility is necessary, we disagree about narrowing the scope of our recommendation. For example, regarding FDA’s proposal to narrow the scope to only economically significant rules, in this report we note that about 85 percent of all significant rules are not economically significant, so such a limitation would exclude the bulk of the agency’s significant regulatory activity. Further, with regard to FDA’s point that it is uncertain what rules OMB will deem significant, in this report we stated that under Executive Order 12866 the agency, not OMB, has the primary responsibility to first identify which rules are significant. Therefore, FDA should be aware of many if not most of the rules that are deemed significant in its inventory. For those rules that OMB deems significant only at the latter stages of rulemaking, we recognize that tracking might have to begin at that stage. 
Regarding FDA’s second point, we did not recommend one particular system and recognize that the information in FRDTS provides tracking data on milestones. However, we note that these data are limited to the latter stages of the rulemaking process. Because our review showed that the earlier developmental stages of a rule could be a significant portion of time spent in regulatory development, there is value in also tracking milestones in the earlier stages. Regarding FDA’s third point, it is still important to report some information externally as well as internally to improve the transparency and accountability of the agency’s rulemaking process. Further, we believe that FDA’s concern about the impact that reporting some information externally could have on agency resources is overstated. None of the agencies we met with during this review identified responding to public inquiries as a major factor affecting resources and timeliness. Also, there is nothing that precludes agencies from providing reasons for delays when externally reporting this information to reduce the volume of public inquiries. Therefore, we kept this recommendation addressed to FDA. With regard to our second recommendation that agencies should also evaluate actual performance versus the targeted milestones, FDA stated that the agency is already engaged in quality improvement efforts for its rulemaking process. Specifically, FDA said that its Policy Council has quarterly meetings with the agency components and the agency periodically reviews its rulemaking processes to see if changes are needed. Also, FDA noted that 12 such reviews have been completed since 1981, and identified two recent pilot projects resulting from these reviews. However, although FDA said the agency conducted general evaluations and provided some examples, FDA did not provide information showing that the agency had specifically evaluated the issue highlighted in our recommendation. 
To the extent that FDA has not, we would still recommend that it specifically evaluate the reasons for any discrepancies between projected and actual milestones for its significant rulemakings. Therefore, we kept this recommendation addressed to FDA. We revised the body of the report where appropriate in response to the additional information FDA provided in its comments. Regarding our recommendation that agencies routinely track major milestones and report internally and externally when major milestones are reached, EPA clarified that the agency currently tracks key milestones associated with the rulemaking process and reports this information internally and externally. Specifically, EPA cited RAPIDS, an internal tracking system that monitors cross-agency involvement and senior management reviews. EPA also cited three primary sources for external reporting, specifically its use of Action Initiation Lists, the Semiannual Regulatory Agenda, and a quarterly report, which the agency subsequently identified as EPAStat. Based on the new information and subsequent documentation that we requested from EPA in response to the agency’s comments, we concur that EPA has a tracking system and internal and external reporting mechanisms that appear to address our recommendation. Therefore, we removed this recommendation to EPA. For example, EPA’s RAPIDS tracks information on numerous milestones in all phases of the rulemaking process. Further, RAPIDS tracks information on rules that are economically significant as well as those that are otherwise significant. Similarly, with regard to internal and external reporting, EPA cited three main sources the agency uses for external reporting of milestones. We modified the body of the report to incorporate the new information. 
We note, however, that we had requested this information during our review and, because the information was not provided at that time, we did not have the opportunity to discuss how the information is used, whether it is useful, and, most importantly, whether it could be used to respond to our report objectives. We did not audit the new information provided because of the approaching deadline for OMB to provide recommendations to the President for a new executive order on federal agency regulatory review. While not specifically addressing our second recommendation that agencies should also evaluate actual performance versus the targeted milestones, EPA’s comments indicated that agency executives and managers routinely meet to review milestones on key regulations and review program performance. Specifically, EPA noted that actions that are completed on time or early are used by the agency as examples of best practices and that actions that are off-track are identified early so corrective steps can be taken to expedite their completion. Because we were unaware of this system or its use at the time of our review, we could not determine whether EPA specifically evaluated discrepancies between projected and actual milestones, determined the reasons for them, and took corrective actions. No evidence was provided from which to draw a conclusion. Therefore, we kept this recommendation addressed to EPA. With regard to our four recommendations to OMB to more consistently implement the Executive Order 12866 requirement that agencies provide information to the public in a complete, clear, and simple manner, OMB stated that these recommendations have merit and warrant further consideration. In particular, OMB stated that it will give full consideration to the report and its recommendations as the agency finalizes its recommendations to the President for a new executive order on regulatory review. 
OMB also said that the report will remind rulemaking agencies of their responsibility to identify in a complete, clear, and simple manner the substantive changes between the draft rule submitted to OIRA for review and the action subsequently announced, and to identify those changes in the regulatory action that were made at the suggestion or recommendation of OIRA. We also received technical comments and clarifications, which we incorporated into this report where appropriate. EPA provided a substantive technical comment regarding our classifications of the level of OIRA changes for three of the case-study rules. In light of the clarifying comments EPA provided, we revised our classification of the Ethylene Oxide Emissions rule from “significant changes” to “other material changes.” However, we did not reclassify the other two EPA rules because the changes suggested would not be consistent with the methodology and criteria we used in this and prior reviews of the OIRA regulatory review process. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to appropriate congressional committees, the Secretary of Health and Human Services, the Secretary of Transportation, the Administrator of EPA, the Chairman of SEC, and the Director of OMB. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-6806 or fantoned@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII. 
Appendix I: Summary of Common Regulatory Requirements In this appendix, we provide information on commonly applicable regulatory requirements established by statutes and executive orders. We included those requirements identified by 10 or more of the rules we reviewed for this report or that were relevant to our case-study rules. We list the requirements within each major section (statutory requirements and executive orders) in chronological order. For each requirement, the following paragraphs summarize the general purpose, applicability and requirements imposed by the initiatives that were relevant to the rules we examined for this report. Statutory Requirements Administrative Procedure Act The Administrative Procedure Act (APA) was enacted in 1946 and established the basic framework of administrative law governing federal agency action, including rulemaking. Section 553 of Title 5, United States Code, governs “notice-and-comment” rulemaking, also referred to as “informal” or “APA rulemaking.” Section 553 generally requires (1) publication of a notice of proposed rulemaking, (2) opportunity for public participation in the rulemaking by submission of written comments, and (3) publication of a final rule and accompanying statement of basis and purpose not less than 30 days before the rule’s effective date. Congresses and Presidents have taken a number of actions to refine and reform this regulatory process since the APA was enacted. National Environmental Policy Act of 1969 The National Environmental Policy Act (NEPA) requires agencies to consider the potential impact on the environment of federal agency action, including regulations. NEPA directs all agencies of the federal government to include in proposals for “major Federal actions significantly affecting the quality of the human environment” a detailed environmental impact statement addressing certain listed subjects and applying substantive criteria set forth in the Act. 
Federal Advisory Committee Act The Federal Advisory Committee Act (FACA) regulates the formation and operation of advisory committees by federal agencies. Advisory committees, normally composed of experts in the regulatory field involved, representatives of affected interest groups, and representatives of federal and state agencies, generally advise agencies on the content of rulemaking or on issues while the rulemaking is in progress. Some statutes require the agencies to use advisory committees, and others authorize but do not require their use. Endangered Species Act of 1973 The Endangered Species Act seeks to protect species of animals against threats to their continuing existence caused by man. Under section 7 of the Act, each federal agency, “in consultation with and with the assistance of the Secretary” of the Interior shall ensure that any regulation issued by that agency not jeopardize the continued existence of any endangered or threatened species. 16 U.S.C. § 1536(a)(2). Regulatory Flexibility Act The Regulatory Flexibility Act (RFA) was enacted in response to concerns about the effect that federal regulations can have on small entities. The RFA requires independent and other regulatory agencies to assess the impact of their rules on “small entities,” defined as including small businesses, small governmental jurisdictions, and certain small not-for-profit organizations. Under the RFA, an agency must prepare an initial regulatory flexibility analysis at the time proposed rules are issued, unless the head of the agency certifies that the proposed rule would not have a “significant economic impact upon a substantial number of small entities.” 5 U.S.C. § 605(b). The analysis must include a consideration of regulatory alternatives that accomplish the stated objectives of the proposed rule and that minimize any significant impact on such entities. 
However, the RFA only requires consideration of such alternatives and an explanation of why alternatives were rejected; the Act does not mandate any particular outcome in rulemaking. After the comment period on the proposed rule is closed, the agency must either certify a lack of impact, or prepare a final regulatory flexibility analysis, which among other things, responds to issues raised by public comments on the initial regulatory flexibility analysis. The agencies must make the final analysis available to the public and publish the analysis or a summary of it in the Federal Register. The Act also requires agencies to ensure that small entities have an opportunity to participate in the rulemaking process and requires the Chief Counsel of the Small Business Administration’s Office of Advocacy to monitor agencies’ compliance. The RFA applies only to rules for which an agency publishes a notice of proposed rulemaking (or promulgates a final interpretative rule involving the internal revenue laws of the United States), and it does not apply to ratemaking. Paperwork Reduction Act of 1980 The Paperwork Reduction Act (PRA) requires agencies to justify any collection of information from the public to minimize the paperwork burden they impose and to maximize the practical utility of the information collected. The Act applies to independent and other regulatory agencies. Under the PRA, agencies are required to submit all proposed information collections to the Office of Information and Regulatory Affairs (OIRA) in the Office of Management and Budget (OMB). Information collections generally cover information obtained from more than ten sources. In their submissions, agencies must establish the need and intended use of the information, estimate the burden that the collection will impose on respondents, and show that the collection is the least burdensome way to gather the information. 
Generally, the public must be given a chance to comment on proposed collections of information. 44 U.S.C. § 3506(c), 5 C.F.R. § 1320.11. At the final rulemaking stage, no additional public notice and opportunity for comment is required, although OMB may direct the agency to publish a notice in the Federal Register notifying the public of OMB review. Negotiated Rulemaking Act of 1990 The Negotiated Rulemaking Act of 1990 (NRA) established a statutory framework for agency use of negotiated rulemaking to formulate proposed regulations. The NRA supplements the rulemaking provisions of the APA, clarifying the authority of federal agencies to conduct negotiated rulemaking. Generally, in a negotiated rulemaking, representatives of the agency and the various affected interest groups get together and negotiate the text of a proposed rule. Unfunded Mandates Reform Act of 1995 The Unfunded Mandates Reform Act (UMRA) was enacted to address concerns about federal statutes and regulations that require nonfederal parties to expend resources to achieve legislative goals without being provided funding to cover the costs. UMRA generates information about the nature and size of potential federal mandates but does not preclude the implementation of such mandates. UMRA applies to proposed federal mandates in both legislation and regulations, but it does not apply to rules published by independent regulatory agencies. With regard to the regulatory process, UMRA generally requires federal agencies to prepare a written statement containing a “qualitative and quantitative assessment of the anticipated costs and benefits” for any rule that includes a federal mandate that may result in the expenditure of $100 million or more in any 1 year by state, local, and tribal governments in the aggregate, or by the private sector. 
For such rules, agencies are to identify and consider a reasonable number of regulatory alternatives and from those select the least costly, most cost-effective, or least burdensome alternative that achieves the objectives of the rule (or explain why that alternative was not selected). UMRA also includes a consultation requirement; agencies must develop a process to permit elected officers of state, local, and tribal governments (or their designees) to provide input in the development of regulatory proposals containing significant intergovernmental mandates. UMRA applies only to rules for which an agency publishes a notice of proposed rulemaking. National Technology Transfer and Advancement Act of 1995 The National Technology Transfer and Advancement Act (NTTAA) directs federal agencies to use voluntary consensus standards in their regulatory activities unless the agency provides Congress, through OMB, with an explanation of why using these standards would be inconsistent with applicable law or otherwise impracticable. Voluntary consensus standards are technical standards (e.g., specifications of materials, performance, design, or operation; test methods; sampling procedures; and related management systems practices) that are developed or adopted by voluntary consensus standards bodies. Small Business Regulatory Enforcement Fairness Act Congress amended the RFA in 1996 by enacting the Small Business Regulatory Enforcement Fairness Act (SBREFA). SBREFA provided for judicial review of agency compliance with the RFA. SBREFA requires agencies to develop one or more compliance guides for each final rule or group of related final rules for which the agency is required to prepare a regulatory flexibility analysis. SBREFA also requires the Environmental Protection Agency and the Occupational Safety and Health Administration to convene advocacy review panels before publishing an initial regulatory flexibility analysis. 
Congressional Review Act The Congressional Review Act (CRA) was enacted as part of SBREFA in 1996 to better ensure that Congress has an opportunity to review, and possibly reject, rules before they become effective. CRA established expedited procedures by which members of Congress may disapprove agencies’ rules by introducing a resolution of disapproval that, if adopted by both Houses of Congress and signed by the President, can nullify an agency’s rule. CRA applies to rules issued by independent and other regulatory agencies. CRA requires agencies to file final rules with both Congress and GAO before the rules can become effective. GAO’s role under CRA is to provide Congress with a report on each major rule (for example, rules with a $100 million impact on the economy) including GAO’s assessment of the issuing agency’s compliance with the procedural steps required by various acts and executive orders governing the rulemaking process. Information Quality Act In 2000, the Information Quality Act (IQA) was added as an amendment to the PRA. IQA applies to the same agencies that are subject to the PRA; the Act applies to independent and other regulatory agencies. The IQA requires every agency to issue guidelines, with OMB oversight, to ensure and maximize the quality, objectivity, utility, and integrity of information disseminated by the agency. Agencies must also establish administrative mechanisms allowing affected persons to seek and obtain correction of information maintained and disseminated by the agency. On December 16, 2004, OMB issued the Information Quality Bulletin for Peer Review under the IQA and other authority. The Bulletin establishes minimum standards for when peer review is required for scientific information, including stricter minimum standards for the peer review of “highly influential” scientific assessments. The Bulletin also establishes the types of peer review that should be considered by agencies in different circumstances. 
The Bulletin applies to independent and other regulatory agencies. Agencies must conduct any required peer reviews early enough to allow the agency to plan its regulatory approaches. “When an information product is a critical component of rule-making, it is important to obtain peer review before the agency announces its regulatory options so that any technical corrections can be made before the agency becomes invested in a specific approach or the positions of interest groups have hardened.” 70 Fed. Reg. 2668. The result of a peer review is a report, which agencies must consider making available to potential commenters in the rulemaking process. “If an agency relies on influential scientific information or a highly influential scientific assessment . . . the agency shall include in the administrative record for that action a certification that explains how the agency has complied with the requirements of this Bulletin.” 70 Fed. Reg. 2673.

E-Government Act of 2002

The E-Government Act of 2002 was intended to enhance the management and promotion of electronic government services and processes. With regard to the regulatory process, the Act requires agencies, to the extent practicable, to accept public comments on proposed rules by electronic means and to ensure that publicly accessible federal Web sites contain electronic dockets for their proposed rules, including all comments submitted on the rules and other relevant materials.

Executive Orders

In addition to congressional regulatory reform initiatives enacted in statutes, presidential initiatives have a key role in the regulatory process. In fact, centralized review of agencies’ regulations within the Executive Office of the President has been part of the rulemaking process for more than 30 years.
Executive Order 12372 – Intergovernmental Review of Federal Programs

This executive order generally requires federal agencies to consult with state and local elected officials on regulations involving federal financial assistance or federal development that would have an impact on state and local finances.

Executive Order 12630 – Governmental Actions and Interference with Constitutionally Protected Property Rights

This executive order requires agencies to limit interference with private property rights protected under the Fifth Amendment to the Constitution. Agencies must include an analysis of the impact of proposed regulations on property rights in their submissions to OMB.

Executive Order 12866 – Regulatory Planning and Review

The formal process by which OIRA currently reviews agencies’ proposed rules and final rules is essentially unchanged since Executive Order 12866 was issued in 1993. Under Executive Order 12866, OIRA reviews significant proposed and final rules from agencies, other than independent regulatory agencies, before they are published in the Federal Register. The executive order states, among other things, that agencies should assess all costs and benefits of available regulatory alternatives, including both quantitative and qualitative measures. It also provides that agencies should generally select regulatory approaches that maximize net benefits (unless a statute requires another approach). Among other principles, the executive order encourages agencies to tailor regulations to impose the least burden on society needed to achieve the regulatory objectives. The executive order also established agency and OIRA responsibilities in the review of regulations, including transparency requirements. OIRA provides guidance to federal agencies on implementing the requirements of the executive order, such as guidance on preparing economic analyses required for significant rules in OMB Circular No. A-4.

OMB Circular No.
A-4

On September 17, 2003, OMB issued OMB Circular No. A-4, Regulatory Analysis, which is a guide for preparing the economic analysis of significant regulatory action called for by the executive order. OMB designed the guidelines to help agencies conduct “good regulatory analyses” and to standardize the way that benefits and costs of regulations are measured and reported. The guidelines define a good regulatory analysis as one that includes a statement of the need for the proposed regulation, an assessment of alternatives, and an evaluation of the benefits and costs of the alternatives. The guidelines state that the motivation of the evaluation is to learn whether the benefits of an action are likely to justify the costs, or to discover which of the possible alternatives would be the most cost-effective. According to OIRA, this Circular contains several significant changes from previous OMB guidance, including (1) more emphasis on cost-effectiveness analysis, (2) formal probability analysis for rules with more than a billion-dollar impact on the economy, and (3) more systematic evaluation of qualitative as well as quantified benefits and costs.

Executive Order 12898 – Federal Actions to Address Environmental Justice in Minority Populations and Low-Income Populations

This executive order requires each agency to develop an “environmental justice strategy... that identifies and addresses disproportionately high and adverse human health or environmental effects of its programs, policies, and activities on minority populations and low-income populations.” Each agency must identify rules that should be revised to meet the objectives of the executive order.

Executive Order 12988 – Civil Justice Reform

This executive order requires agencies to draft regulations in a manner that will reduce needless litigation by ensuring the clarity of regulatory language regarding legal rights and obligations.
For example, the order requires agencies to draft regulations that provide a clear legal standard for affected conduct rather than a general standard, and promote simplification and burden reduction.

Executive Order 13045 – Protection of Children from Environmental Health Risks and Safety Risks

This executive order requires agencies issuing “economically significant” rules that also concern an environmental health risk or safety risk that the agency has reason to believe may disproportionately affect children to submit to OIRA an evaluation of the environmental health or safety effects of the planned regulation on children. Agencies must also include an explanation of why the planned regulation is preferable to other potentially effective and reasonably feasible alternatives considered by the agencies.

Executive Order 13132 – Federalism

This executive order requires agencies to prepare a federalism summary impact statement for actions that have federalism implications. Specifically, it provides that “no agency shall promulgate any regulation that has federalism implications, that imposes substantial direct compliance costs on state and local governments,” unless the agency (1) has consulted with state and local officials early in the process, (2) submitted to OMB copies of any written communications from such officials, and (3) published in the preamble of the rule “a federalism summary impact statement” describing the consultations, “a summary of the nature of concerns and the agency’s position supporting the need to issue the regulations, and a statement of the extent to which the concerns of State and local officials have been met.”

Executive Order 13175 – Consultation and Coordination with Indian Tribal Governments

This executive order provides that “no agency shall promulgate any regulation that has tribal implications” unless the agency (1) has consulted with tribal officials early in the process, (2) submitted to OMB copies of any written
communications from such officials, and (3) published in the preamble of the rule “a tribal summary impact statement” describing the consultations, “a summary of the nature of their concerns and the agency’s position supporting the need to issue the regulation, and a statement of the extent to which the concerns of tribal officials have been met.” On issues relating to tribal self-government, tribal trust resources, or Indian tribal treaty and other rights, each agency should explore and, where appropriate, use consensual mechanisms for developing regulations, including negotiated rulemaking.

Executive Order 13211 – Actions Concerning Regulations That Significantly Affect Energy Supply, Distribution, or Use

This executive order requires agencies to prepare and submit to OMB a “Statement of Energy Effects” for significant energy actions. The statement must cover the regulation’s “adverse effects on energy supply, distribution, or use (including a shortfall in supply, price increases, and increased use of foreign supplies)” and reasonable alternatives and their effects. The “Statement of Energy Effects” must be published (or summarized) in the related proposed and final rule.

Appendix II: Case Studies of 16 Selected Rules

This appendix provides case studies on 16 final rules published between January 2006 and May 2008 that we reviewed for this report. At the beginning of each case study, the official rule title as published in the Federal Register is followed by a short title that we used to identify the rule in the body of our report. The body of each case study includes identifying information, a brief summary or synopsis of the rule, a discussion of the regulatory requirements addressed in the final rule, a summary of the changes to the rule resulting from reviews of the draft rule by the Office of Management and Budget’s (OMB) Office of Information and Regulatory Affairs (OIRA) (if applicable), and a timeline of important events in the course of the rulemaking.
Identifying Information. Identifies the responsible federal agency and other unique identifying information, such as the Regulation Identifier Number (RIN), the citation in the Federal Register of the final rule, and the docket number in www.regulations.gov.

Rule Synopsis. Provides summary information about the substance and effects of the rule, such as the intent or purpose of the rule, a brief discussion of the rule’s origin, rulemaking history, or regulatory authority upon which the rule was created.

Regulatory Requirements Addressed in the Final Rule. Identifies generally-applicable rulemaking requirements discussed by the agency in the final rule as published in the Federal Register and either what additional actions were taken to comply with the triggered requirement or why the requirement was not triggered. For economically significant rules that triggered the requirement under Executive Order 12866, we identified whether the agency prepared a cost-benefit analysis, and if applicable, we also summarize the actions the agency took to address recent changes to regulatory requirements made by OMB since 2003.

Changes Resulting from OIRA Review. Describes changes to rules that were recommended by OIRA during OIRA’s review of the rulemaking under Executive Order 12866.

Timeline. Identifies key dates such as the dates OIRA received and completed its reviews, the publication dates of the proposed and final rules in the Federal Register, the date the public comment period ended, and other dates mentioned or tracked by agency officials.

National Air Tour Safety Standards (Air Tour Safety)

Identifying Information

Rule Synopsis

The rule sets safety standards governing commercial air tours. The objective of the rule is to provide a higher and uniform level of safety for all commercial air tours. The rule includes provisions requiring that passengers be briefed on safety procedures, such as opening exits, exiting the aircraft, and using life preservers.
It requires that passengers in helicopters and planes operating over open water wear life preservers, and that helicopters operating over open water be equipped to float. It also gives relief from drug and alcohol testing for four air tour charity events per year, and increases the required prior flight time for pilots in those events from 200 to 500 hours. Air tours frequently take place in heavy air traffic and in areas geographically limited in size with dangerous natural obstructions. Better oversight of the industry was recommended by the National Transportation Safety Board, reports of the Department of Transportation (DOT) Inspector General, and GAO.

Regulatory Requirements Addressed in the Final Rule

FAA discussed the following generally-applicable statutes and executive orders in the final rule:

NEPA: FAA concluded that the rule qualified for a categorical exclusion from NEPA and that the rule does not involve any significant impacts to the human environment.

PRA: The rule included new information collections for which FAA completed and submitted an Information Collection Request to OIRA for approval.

RFA: FAA determined that the rule will have a significant economic impact on a substantial number of small entities and prepared a final regulatory flexibility analysis.

Trade Agreements Act: FAA assessed the potential effect of the rule and determined that it would have only a domestic impact and therefore no effect on any international trade-sensitive activity.

UMRA: FAA determined that the rule would not result in any 1-year expenditure by state, local, and tribal governments, in the aggregate, or by the private sector that would meet or exceed the relevant threshold of $128.1 million.

Executive Order 12866 (Regulatory Planning and Review): FAA identified the rule as a significant regulatory action as defined by the executive order because it raised novel policy issues. FAA conducted an economic analysis and submitted the rule to OIRA for review.
Executive Order 13132 (Federalism): FAA determined that the rule did not have a substantial direct effect on the states, on the relationship between the national government and the states, or on the balance of power and responsibilities among the various levels of government. Therefore, FAA concluded that the rule did not have federalism implications.

Executive Order 13211 (Actions Concerning Regulations That Significantly Affect Energy Supply, Distribution, or Use): FAA determined that the rule was not a significant energy action under the executive order because it was not likely to have a significant adverse effect on the supply, distribution, or use of energy.

Changes Resulting from OIRA Review

There was a significant change to the regulatory text. The rule language was changed to clarify that the limit is four charitable or nonprofit events in one calendar year, with no event lasting more than 3 consecutive days, rather than four of each type. There were also a number of other material changes to the rule. OIRA requested the addition of a chart or matrix explaining the changes made by this rule. OIRA requested the inclusion of an explanation of the policy of a four-event limit on charity and nonprofit event flights. OIRA questioned the source used to support a requirement that pilots flying charity, nonprofit, or community event flights have 500 hours total flying time; FAA added an explanation to the rule. OIRA requested clarification of the differences between an “Operations Specification” and a “Letter of Authorization”; FAA rewrote the rule section to clarify the differences. OIRA requested that FAA more fully explain the effect of the rule on operations at the Grand Canyon; FAA added three sentences to the preamble. OIRA changed the rule at both the proposed and final rule stages.

Timeline

January 2, 2002: Preliminary team concurrence on proposed rule.
January 2, 2002: Economic evaluation of proposed rule.
November 2, 2002: Final team concurrence on proposed rule.
December 9, 2002: Director’s concurrence on proposed rule.
January 8, 2003: First internal level concurrence on proposed rule.
January 30, 2003: Second FAA internal level concurrence on proposed rule.
February 5, 2003: Draft proposed rule transmitted to the Office of the Secretary of Transportation (OST).
July 8, 2003: OST approved draft proposed rule.
July 10, 2003: OIRA received draft proposed rule.
October 7, 2003: OIRA completed review of proposed rule.
October 9, 2003: Issuance of proposed rule.
October 22, 2003: Proposed rule published in the Federal Register.
October 21, 2005: Preliminary team concurrence on final rule.
November 21, 2005: Principal’s briefing on final rule.
January 19, 2006: Economic evaluation of final rule.
March 27, 2006: Final team concurrence on final rule.
April 5, 2006: Director’s concurrence on final rule.
April 6, 2006: First internal level concurrence on final rule.
April 25, 2006: Second FAA internal level concurrence on final rule.
April 27, 2006: Transmittal to the OST of final rule.
August 16, 2006: OST approval of draft final rule.
August 16, 2006: Transmittal to OIRA of final rule.
September 21, 2006: Meeting between OIRA and nonfederal parties regarding the rule.
November 7, 2006: OIRA approval of final rule.
December 22, 2006: Issuance of final rule.
February 13, 2007: Final rule published in the Federal Register.

Human Space Flight Requirements for Crew and Space Flight Participants (Human Space Flight Requirements)

Identifying Information

Rule Synopsis

The rule establishes requirements for human space flight, with the intent of providing an acceptable level of safety to the general public and ensuring individuals on board are aware of the risks associated with launch and reentry. The rule sets training and medical standards for the crew, and requires the operator to inform each space flight participant in writing of the risks of launch and reentry.
Security requirements in the rule prevent participants from carrying certain items on board, and safety requirements in the rule require participants be trained before flight on how to respond to an emergency situation.

Regulatory Requirements Addressed in the Final Rule

FAA discussed the following generally-applicable statutes and executive orders in the final rule:

NEPA: FAA concluded that the rule qualified for a categorical exclusion from the requirement for preparation of an environmental assessment or environmental impact statement.

PRA: The rule included new information collections for which FAA completed and submitted an Information Collection Request to OIRA for approval.

RFA: The FAA Administrator certified that the rule will not have a significant economic impact on a substantial number of small entities.

Trade Agreements Act: FAA assessed the potential effect of the rule and determined that it will impose the same costs on domestic and international entities and thus has a neutral trade impact.

UMRA: FAA determined that the rule would not result in any 1-year expenditure by state, local, and tribal governments, in the aggregate, or by the private sector that would meet or exceed the relevant threshold of $120.7 million.

Executive Order 12866 (Regulatory Planning and Review): FAA identified the rule as a significant regulatory action as defined by the executive order because it raised novel policy issues. FAA conducted an economic analysis and submitted the rule to OIRA for review.

Executive Order 13132 (Federalism): FAA determined that the rule did not have a substantial direct effect on the states, on the relationship between the national government and the states, or on the balance of power and responsibilities among the various levels of government. Therefore, FAA concluded that the rule did not have federalism implications.
Executive Order 13211 (Actions Concerning Regulations That Significantly Affect Energy Supply, Distribution, or Use): FAA determined that the rule was not a significant energy action under the executive order because it was not likely to have a significant adverse effect on the supply, distribution, or use of energy.

Changes Resulting from OIRA Review

There were no substantive changes to this rule resulting from OIRA review.

Timeline

March 22, 2005: Preliminary team concurrence on proposed rule.
April 25, 2005: Economic evaluation of proposed rule.
April 29, 2005: Principal’s briefing on proposed rule.
May 31, 2005: Final team concurrence on proposed rule.
June 17, 2005: Director’s concurrence on proposed rule.
July 12, 2005: Legal concurrence on proposed rule.
July 24, 2005: Office of the FAA Administrator concurrence on proposed rule.
July 28, 2005: Proposed rule transmitted to OST.
September 23, 2005: OST approval of proposed rule.
September 28, 2005: OIRA received draft proposed rule.
December 22, 2005: OIRA completed review of draft proposed rule without change.
December 22, 2005: Issuance of proposed rule.
December 29, 2005: Proposed rule published in the Federal Register.
March 24, 2006: Preliminary team concurrence on final rule.
April 6, 2006: Economic evaluation of final rule.
April 27, 2006: Final team concurrence on final rule.
May 5, 2006: Director’s concurrence on final rule.
May 26, 2006: Final rule transmitted to OST.
August 29, 2006: OST approval of final rule.
August 29, 2006: OIRA received draft final rule.
November 9, 2006: OIRA completed review of draft final rule without change.
December 1, 2006: Issuance of final rule.
December 15, 2006: Final rule published in the Federal Register.
Light Trucks, Average Fuel Economy; Model Years 2008-2011 (Fuel Economy-Light Trucks)

Identifying Information

Rule Synopsis

The rule reforms the structure of the corporate average fuel economy (CAFE) program for light trucks and establishes higher CAFE standards for model years 2008 through 2011. While this rule was proposed in 2005 and finalized in 2006, it was preceded by a series of rules establishing fuel economy standards for light trucks (i.e., non-passenger automobiles) that date back to the 1970s. The rule was mandated by the Energy Policy and Conservation Act of 1975.

Regulatory Requirements Addressed in the Final Rule

NHTSA discussed the following generally-applicable statutes and executive orders in the final rule:

NEPA: NHTSA prepared an environmental assessment for the rule and concluded that the rule will not have a significant effect on the quality of the human environment.

NTTAA: NHTSA consulted with voluntary consensus standards bodies and incorporated industry standards and definitions, such as an industry standard on light truck footprint.

PRA: The rule included new information collections for which NHTSA completed and submitted an Information Collection Request to OIRA for approval.

RFA: NHTSA’s Deputy Administrator certified that the rule did not have a significant economic impact on a substantial number of small entities.

UMRA: NHTSA determined that the rule would not result in any 1-year expenditure by state, local, and tribal governments, in the aggregate, of more than $115 million, but would result in an expenditure of that magnitude by the private sector. NHTSA concluded that it was required by statute to set standards at the maximum feasible level achievable by manufacturers, and thus could not consider regulatory alternatives.

Executive Order 12866 (Regulatory Planning and Review): NHTSA identified the rule as a significant regulatory action as defined by the executive order because of its economic significance.
Therefore, NHTSA conducted an economic analysis and submitted the rule to OIRA for review.

Executive Order 12988 (Civil Justice Reform): NHTSA determined that the rule does not have any retroactive effect.

Executive Order 13045 (Protection of Children from Environmental Health Risks and Safety Risks): NHTSA determined that the rule does not have a disproportionate effect on children.

Executive Order 13132 (Federalism): NHTSA stated that the statutory authorization for the rule has a broad preemption provision, and therefore, the agency was required to establish these standards by law.

Executive Order 13211 (Actions Concerning Regulations That Significantly Affect Energy Supply, Distribution, or Use): NHTSA determined that the rule would not have any adverse energy effects.

OMB Peer Review Bulletin: NHTSA convened a panel of three external experts to review the model used in the rule. The peer review reports and the agency’s response to the reviews were included in the rulemaking docket.

Recent Analytic Requirements Addressed

NHTSA’s regulatory impact analysis addressed two of the four analytical changes in the OMB economic guidelines that GAO reviewed for this study. For example, the agency assessed uncertainty using a formal probability analysis and discounted potential future benefits and costs using discount rates of 7 percent and 3 percent, as directed by the OMB guidelines. Agency officials said that conducting the probability analysis was time-consuming, requiring one full-time analyst about 6 weeks to complete. In addition, the officials said, the probability analysis was conducted after the agency had selected the preferred regulatory alternative, and as a result, the analysis was not used for decision-making purposes. The agency did not evaluate qualitative and quantitative benefits and costs because it monetized all the key impacts, and the agency’s analysis did not emphasize cost-effectiveness.
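The discounting described above follows the standard present-value calculation in OMB Circular A-4: each future year's net benefits are divided by (1 + r) raised to the number of years out, at both a 7 percent and a 3 percent rate. The sketch below is purely illustrative; the cash-flow figures are hypothetical and are not drawn from NHTSA's regulatory impact analysis.

```python
# Present value of a stream of future net benefits, per OMB Circular A-4:
# PV = sum over t of value_t / (1 + r)**t, computed at both 7% and 3%.
# The cash flows below are hypothetical, for illustration only.

def present_value(cash_flows, rate):
    """Discount cash flows occurring in years 1, 2, ... to present value."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(cash_flows, start=1))

# Hypothetical net benefits (millions of dollars) over a 5-year horizon.
net_benefits = [100, 100, 100, 100, 100]

pv_7 = present_value(net_benefits, 0.07)
pv_3 = present_value(net_benefits, 0.03)

print(f"PV at 7%: {pv_7:.1f} million")  # about 410.0: heavier discounting
print(f"PV at 3%: {pv_3:.1f} million")  # about 458.0
```

Reporting results at both rates, as NHTSA did, shows how sensitive a rule's estimated net benefits are to the choice of discount rate: the higher the rate, the less weight distant-year benefits carry.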
Changes Resulting from OIRA Review

NHTSA did not consider any of the changes made to either the draft proposed or draft final rules during the formal review period to be substantive, and thus did not docket a record of changes made during the OIRA review period.

Timeline

1974: DOT/EPA reported to Congress on motor vehicle fuel economy standards.
1975: Energy Policy and Conservation Act of 1975 enacted.
December 29, 2003: NHTSA issued an Advance Notice of Proposed Rulemaking under a different RIN (2127-AJ17) soliciting comments on the structure of the CAFE program and an intent to reform the light truck CAFE program.
July 26, 2005: OIRA received the draft proposed rule.
August 22, 2005: OIRA completed review of the draft proposed rule consistent with change.
August 30, 2005: Proposed rule published in the Federal Register.
March 14, 2006: OIRA staff met with outside parties to discuss this rule.
March 23, 2006: OIRA received the draft final rule.
March 28, 2006: OIRA completed review of the draft final rule consistent with change.
April 6, 2006: Final rule published in the Federal Register.

Event Data Recorders

Identifying Information

Agency: Department of Transportation, National Highway Traffic Safety Administration
Rule classification: Other Significant
RIN: 2127-AI72
Federal Register citation: 71 Fed. Reg. 50,998
Regulations.gov docket number: NHTSA-2004-18029 (proposed rule); NHTSA-2006-25666 (final rule)

Rule Synopsis

The rule establishes standards for the auto industry practice of installing event data recorders (EDR) in passenger cars and other light vehicles. The intent of the rule is to standardize data obtained through EDRs so that the data may be most effective, and to ensure that EDR infrastructure develops to provide a foundation for automatic crash notification.
The rule requires a minimum set of specified data elements, standardizes data format, helps ensure crash survivability of an EDR and its data, and ensures commercial availability of the tools necessary to enable crash investigators to retrieve data from the EDR. The rule also requires vehicle manufacturers to describe the function and capability of an EDR in the owner’s manual of any vehicle equipped with an EDR to ensure public awareness. NHTSA promulgated the rule following years of study by NHTSA and the National Transportation Safety Board, and after having received three citizen petitions.

Regulatory Requirements Addressed in the Final Rule

NHTSA discussed the following generally-applicable statutes and executive orders in the final rule:

NEPA: NHTSA determined that the rule will not have any significant impact on the quality of the human environment.

NTTAA: NHTSA adopted voluntary consensus standards where practicable.

PRA: The rule did not contain any new information collection requests.

RFA: NHTSA’s Administrator certified that the rule would not have a significant economic impact on a substantial number of small entities.

UMRA: NHTSA determined that the rule would not result in the expenditure by state, local, or tribal governments, in the aggregate, or by the private sector, of more than the annual threshold of $118 million.

Executive Order 12866 (Regulatory Planning and Review): NHTSA identified the rule as a significant regulatory action as defined by the executive order. NHTSA conducted an economic analysis and submitted the rule to OIRA for review.

Executive Order 12988 (Civil Justice Reform): NHTSA stated that the rule specified its preemptive effect in clear language.
Executive Order 13045 (Protection of Children from Environmental Health Risks and Safety Risks): NHTSA concluded that because the rule is not economically significant and does not involve health and safety risks that disproportionately affect children, no further analysis was necessary under this executive order.

Executive Order 13132 (Federalism): NHTSA concluded that general principles of preemption law would operate so as to displace any conflicting state law or regulation.

Changes Resulting from OIRA Review

As documented by the agency in memorandums included in the public docket, OIRA suggested changes to both the proposed and final rule that NHTSA officials incorporated into the published versions of the rules. NHTSA incorporated OIRA-suggested language into the proposed rule preamble related to ensuring that crash investigators and researchers are able to obtain the capability of downloading data from the EDR. Similarly, NHTSA changed, at OIRA’s suggestion, the owner’s manual statement in the text of the proposed rule to include a sentence advising owners that an EDR does not store or collect personal information. NHTSA incorporated additional OIRA suggestions into the final rule, adding to or clarifying the policy discussion in the preamble, including adding clarifying language to the federalism discussion. NHTSA incorporated OIRA changes to the owner’s manual statement in the final rule text, explaining that parties with special equipment, including law enforcement officials, can access information in an EDR if they have access to the vehicle, and that they may group EDR data with personal information regularly collected in the course of a criminal investigation.

Timeline

1991: NHTSA began to examine EDRs as part of the Special Crash Investigations Program.
November 9, 1998: NHTSA denied petition for rulemaking on EDRs.
June 2, 1999: NHTSA denied second petition for rulemaking on EDRs.
2001: NHTSA received a third petition for rulemaking on EDRs.
October 11, 2002: NHTSA published request for comment on the future role of EDRs in motor vehicles.
December 2003: Preliminary regulatory evaluation completed.
March 9, 2004: OIRA received the draft proposed rule.
June 3, 2004: OIRA completed review of draft proposed rule consistent with change.
June 14, 2004: Proposed rule published in the Federal Register.
April 11, 2006: OIRA received draft final rule.
July 2006: Final regulatory evaluation completed.
August 17, 2006: OIRA completed review of draft final rule consistent with change.
August 28, 2006: Final rule published in the Federal Register.

Ethylene Oxide Emissions Standards for Sterilization Facilities (Ethylene Oxide Emissions)

Identifying Information

Rule Synopsis

The rule resulted from the periodic evaluation of the emission standards for ethylene oxide emissions from sterilization facilities. In the proposed rule, EPA decided not to impose more stringent emission standards, determining that additional controls at existing sources would achieve, at best, minimal emission reduction at a very high cost. The Clean Air Act requires EPA to assess the risk posed by ethylene oxide emissions and set more stringent standards as it deems necessary within 8 years of initially setting standards, taking into account developments in practices, processes, and control technologies.

Regulatory Requirements Addressed in the Final Rule

EPA discussed the following generally-applicable statutes and executive orders in the final rule:

CRA: EPA filed the rule with the Congress and the Comptroller General.

NTTAA: The rule does not involve any technical standards.

PRA: The rule does not impose any new information collection requests.

RFA: EPA determined that the rule would not have a significant economic impact on a substantial number of small entities.
UMRA: EPA determined that the rule would not result in expenditures of $100 million or more to state, local, and tribal governments, in the aggregate, or to the private sector in any one year. Executive Order 12866 (Regulatory Planning and Review): OMB deemed the rule a significant regulatory action as defined by the executive order. Therefore, EPA submitted the rule to OMB for review. Executive Order 13045 (Protection of Children from Environmental Health Risks and Safety Risks): EPA "did not have reason to believe" that the environmental health or safety risks addressed by this rule presented a disproportionate risk to children. Executive Order 13132 (Federalism): EPA determined that the final rule did not have substantial direct effects on the states, on the relationship between the national government and the states, or on the distribution of power and responsibilities among the various levels of government. EPA concluded that the rule did not have federalism implications. Executive Order 13175 (Consultation and Coordination with Indian Tribal Governments): EPA determined that the rule would not have substantial direct effects on tribal governments, on the relationship between the federal government and Indian tribes, or on the distribution of power and responsibilities between the federal government and Indian tribes. Executive Order 13211 (Actions Concerning Regulations That Significantly Affect Energy Supply, Distribution, or Use): EPA determined that the rule would not likely have a significant adverse effect on the supply, distribution, or use of energy. Changes Resulting from OIRA Review An OIRA-initiated change included generally available control technologies or management practices (GACT) in addition to maximum achievable control technologies or management practices (MACT) for area sources of ethylene oxide. Both are standards EPA can use for controlling emissions of ethylene oxide for area sources. Major sources are required to use MACT to control emissions.
In the proposed rule OIRA specified that the CAA provides that “EPA is not required to conduct any review under section 112(f) of the CAA or promulgate any emissions limitations under that subsection for any source listed pursuant to section 112(c)(3), for which EPA has issued GACT standards. Thus, although EPA has discretion to conduct a residual risk review under section 112(f) for area sources for which it has established GACT, it is not required to do so.” OIRA specified that EPA’s residual risk review is required for each CAA section 112(d) source category, except area source categories for which EPA issued a GACT standard. While addressing Executive Order 13045, “Protection of Children from Environmental Health and Safety Risks,” EPA stated that “The public is invited to submit or identify peer reviewed studies and data, of which the agency may not be aware, that assessed the results of early life exposure to ethylene oxide commercial sterilization facility emissions.” During OIRA review this sentence was deleted. EPA had written that if cancer risks to individuals exposed to emissions from a regulated source are found above a threshold specified in the CAA, “we must promulgate residual risk standards for the source category (or subcategory) which provide an ample margin of safety.” OIRA rewrote the sentence to read “we must decide whether additional reductions are necessary to provide an ample margin of safety.” Similarly, EPA wrote that in the same circumstance, “we must also adopt more stringent standards to prevent an adverse environmental effect.” The quotation changed during OIRA review to read “we must determine whether more stringent standards are necessary to prevent an adverse environmental effect.” Timeline December 6, 1994: Original emission standards rule published. 1997: EPA began developing the methodology for conducting a residual risk assessment. 
Late 1990s: EPA convened groups to consider changing the technologies used for ethylene emission control. February 13, 2002: EPA assigned a Start Action Number. August 3, 2005: OIRA received the draft proposed rule. September 27, 2005: OIRA completed review of the draft proposed rule. October 24, 2005: The proposed rule was published in the Federal Register. March 22, 2006: OIRA met with nonfederal parties regarding the rule. March 23, 2006: OIRA received the draft final rule. March 31, 2006: OIRA completed review of the draft final rule. April 7, 2006: The final rule was published in the Federal Register. National Emission Standards for Organic Hazardous Air Pollutants from the Synthetic Organic Chemical Manufacturing Industry (Hazardous Air Pollutants) Identifying Information Rule Synopsis The rule established that the original National Emission Standards for Organic Hazardous Air Pollutants for the synthetic organic chemical manufacturing industry set in 1994 would mostly remain unchanged. Although the rule does not impose further controls on the synthetic organic chemical manufacturing industry, it does amend certain aspects of the existing regulations. The CAA directs EPA to evaluate the remaining risk presented by major sources of emissions of hazardous air pollutants 8 years after promulgation of technology-based standards to determine if the standards provide an ample margin of safety to protect public health. The Act also directs EPA to review all standards regulating hazardous air pollutants every 8 years and revise them as necessary, taking into account developments in practices, processes, and control technologies. Regulatory Requirements Addressed in the Final Rule EPA discussed the following generally-applicable statutes and executive orders in the final rule: CRA: EPA filed the final rule with Congress and the Comptroller General. NTTAA: The rule does not involve any voluntary consensus standards.
PRA: The rule does not impose any new information collection requests. RFA: EPA determined that the rule would not have a significant economic impact on a substantial number of small entities. UMRA: EPA determined that the rule does not contain a federal mandate that may result in expenditures of $100 million or more on state, local, and tribal governments, in the aggregate, or the private sector in any one year. Executive Order 12866 (Regulatory Planning and Review): OMB deemed the rule a significant regulatory action as defined by the executive order because it raised novel legal and policy issues. Therefore, EPA submitted the rule to OMB for review. Executive Order 12898 (Federal Actions to Address Environmental Justice in Minority Populations and Low-Income Populations): One of EPA’s environmental justice priorities is to reduce exposure to air toxics. In the proposed rule, EPA requested comment on the implications of this priority since some regulated facilities are located near minority and low-income populations. EPA received one comment regarding this environmental justice concern that it addressed in the final rule. Executive Order 13045 (Protection of Children from Environmental Health and Safety Risks): EPA “did not have reason to believe” that the environmental health or safety risks addressed by the rule presented a disproportionate risk to children. Executive Order 13132 (Federalism): EPA determined that the rule did not have a substantial direct effect on the states, on the relationship between the national government and the states, or on the balance of power and responsibilities among the various levels of government. Therefore, EPA concluded that the rule did not have federalism implications. Executive Order 13175 (Consultation and Coordination with Indian Tribal Governments): EPA determined that the rule does not have tribal implications. 
Executive Order 13211 (Actions Concerning Regulations That Significantly Affect Energy Supply, Distribution, or Use): EPA determined that the rule would not likely have a significant adverse effect on the supply, distribution, or use of energy. Changes Resulting from OIRA Review OIRA made several substantive changes to the explanatory text of the preamble. OIRA cut a section specifying periodic determinations of pertinent technical factors. It deleted a statement that assumptions made in the study design did not fully capture the relatively higher exposure seen by children. It requested additional and more balanced discussion of the rationale for selecting “no control,” and requested that EPA specify a sample size and expand its discussion on risks reduced by emission controls. It deleted a section describing some costs of the regulation and ways for EPA to avoid those costs. OIRA review resulted in EPA deleting a sentence asserting that it was required by the CAA under a particular set of circumstances to promulgate residual risk standards. Timeline February 14, 2002: EPA assigned a Start Action Number. March 16, 2006: OIRA received draft proposed rule. March 22, 2006: OIRA met with nonfederal parties regarding the rule. May 31, 2006: OIRA completed review of proposed rule. June 14, 2006: Proposed rule published in the Federal Register. December 11, 2006: OIRA received draft final rule. December 14, 2006: OIRA completed review of draft final rule. December 21, 2006: Final rule was published in the Federal Register. National Primary Drinking Water Regulations: Stage 2 Disinfectants and Disinfection Byproducts Rule (Disinfection Byproducts 2) Identifying Information Rule Synopsis The rule is intended to help public water systems deliver safe water with the benefits of disinfection but with fewer risks from disinfection byproducts. 
Certain disinfectants used to treat drinking water are known to create byproducts posing potential reproductive, developmental, and cancer risks to humans. Authorized by the Safe Drinking Water Act Amendments of 1996 (SDWA), the rule is one of a series of rules, including the Surface Water Treatment 2 rule, intended to improve the quality of drinking water provided by public water systems throughout the United States. EPA’s first rulemaking on disinfection byproducts was promulgated in 1979. Because of the complex and far-reaching implications of the rule, as well as the relationship between the Disinfection Byproducts 2 rule and the Surface Water Treatment 2 rule, EPA convened a FACA panel to help develop the policies in the rule. Regulatory Requirements Addressed in the Final Rule EPA discussed the following generally-applicable statutes and executive orders in the final rule: NTTAA: EPA adopted voluntary consensus standards for monitoring the levels of disinfection byproducts. PRA: The rule contained new information collection requirements for which EPA completed and submitted an Information Collection Request to OIRA for approval. RFA: EPA certified that the rule would not have a significant economic impact on a substantial number of small entities. EPA conducted a SBREFA advocacy review panel. UMRA: EPA determined that the rule may contain a mandate resulting in annual expenditures of more than $100 million for state, local, and tribal governments, or the private sector. EPA prepared an UMRA analysis, which included a consideration of the regulatory alternatives. Executive Order 12866 (Regulatory Planning and Review): EPA identified the rule as a significant regulatory action as defined by the executive order. Therefore, EPA conducted an economic analysis and submitted the rule to OIRA for review. 
Executive Order 12898 (Federal Actions to Address Environmental Justice in Minority Populations and Low-Income Populations): EPA consulted with minority and low-income stakeholders. EPA determined that since the rule applies uniformly to all communities, the health protections provided are equal across all minority and income groups served by systems regulated by the rule. Executive Order 13045 (Protection of Children from Environmental Health Risks and Safety Risks): EPA concluded that "it has reason to believe that the environmental health or safety risk . . . addressed by this [rule] may have a disproportionate effect on children. EPA believes that the [rule] will result in greater risk reduction for children than for the general population." Executive Order 13132 (Federalism): EPA determined that the rule did not have federalism implications because it will not have substantial direct effects on the states, on the relationship between the states and the federal government, or on the distribution of power and responsibilities among various levels of government. Executive Order 13175 (Consultation and Coordination with Indian Tribal Governments): EPA concluded that the final rule may have tribal implications because it may impose substantial direct compliance costs on tribal governments and the federal government will not provide the funds necessary to pay those costs. A detailed estimate of the tribal impact was included in the rule. Executive Order 13211 (Actions Concerning Regulations That Significantly Affect Energy Supply, Distribution, or Use): EPA determined that the rule is not likely to have a significant adverse effect on the supply, distribution, or use of energy. Recent Analytic Requirements Addressed EPA's regulatory impact analysis addressed the four analytical changes in the OMB economic guidelines that we reviewed for this study.
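Among the analytical changes in OMB's economic guidelines is discounting potential benefits and costs at both 3 percent and 7 percent. As a minimal illustration of what that discounting involves (the annual figures below are hypothetical, invented for the example, not drawn from any EPA analysis):

```python
# Illustration only: discounting a hypothetical stream of annual net
# benefits at the 3 percent and 7 percent rates named in OMB's guidance.

def present_value(cash_flows, rate):
    """Discount a list of annual amounts (year 1, year 2, ...) to today."""
    return sum(amount / (1 + rate) ** year
               for year, amount in enumerate(cash_flows, start=1))

# Hypothetical: $100 million in net benefits per year for 10 years.
annual_net_benefits = [100.0] * 10

pv_3 = present_value(annual_net_benefits, 0.03)  # about 853
pv_7 = present_value(annual_net_benefits, 0.07)  # about 702

# A higher discount rate weights near-term effects more heavily,
# so the 7 percent present value is smaller than the 3 percent one.
print(round(pv_3, 1), round(pv_7, 1))
```

Reporting results under both rates, as the agency did here, shows how sensitive a rule's estimated net benefits are to the choice of discount rate.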
For example, the agency analyzed cost-effectiveness, discounted potential benefits and costs using discount rates of 7 percent and 3 percent, evaluated potential qualitative and quantitative benefits and costs, and conducted a probability analysis to assess the uncertainty associated with some potential impacts. Agency officials said that they assessed the cost-effectiveness of the regulatory alternatives in the rule because the OMB guidelines require it but that the three other analyses were conducted because the agency had already adopted them as best practices. The officials estimated that the cost-effectiveness analysis required from 1 to 2 months to complete, partly because the agency had not yet developed guidance for analyzing the cost-effectiveness of health-related rules. In addition, agency officials said that the cost-effectiveness analysis was only of limited use for selecting a final regulatory alternative. Changes Resulting from OIRA Review The copy of the draft final rule in the docket shows changes throughout to both the preamble and the rule text itself. Some appear to be strictly editorial, and others appear to have substantive effect. One significant change to the rule text itself is in the section on compliance monitoring requirements. The requirement to repeat monitoring is changed to apply only when more than eight monitoring locations are required, rather than four. Additional requirements for repeat monitoring have been removed. Changes to the preamble include the addition of a description of variances and exemptions in place of a statement that the rule would be updated "upon completion of affordability discussions with OMB." EPA's Executive Order 12866 compliance form for the rule, docketed with the OMB review of the draft of the final rule, describes the OMB changes as not substantive. Timeline November 29, 1979: The Total Trihalomethanes Rule, the first regulation of disinfection byproducts, was published.
Fall 1992: EPA convened an advisory committee to address the issue of disinfection and disinfectant byproducts and pathogen control issues; this led to the first Disinfection Byproducts rule. Spring 1993: A Cryptosporidium outbreak in Milwaukee, Wisconsin, sickened over 400,000 people, roughly 50 percent of users of the municipal drinking water system. 1996: The SDWA required EPA to establish new standards for treatment of drinking water and the byproducts of the water treatment process. 1997: EPA convened a federal advisory committee to finalize SDWA rulemakings, including the Disinfection Byproducts 2 rule and Interim Enhanced Surface Water Treatment rule. December 16, 1998: The first Disinfection Byproducts rule was published in the Federal Register. March 1999 to July 2000: EPA reconvened the federal advisory committee to provide technical input on additional SDWA rulemakings, including the Disinfection Byproducts 2 rule and the Surface Water Treatment 2 rule. August 6, 1999: EPA assigned a Start Action Number for the rulemaking. Late 1999: EPA initiated the pre-panel stages of a SBREFA advocacy review panel. June 23, 2000: Advocacy review panel completed. September 2000: Federal advisory committee members signed Agreement in Principle stating consensus of the group. December 29, 2000: Federal advisory committee Agreement in Principle was published in the Federal Register. January 16, 2001: Revisions to the first Disinfection Byproducts rulemaking were published in the Federal Register. October 17, 2001: EPA published pre-proposal draft of Disinfection Byproducts 2 rule preamble and regulatory language on the agency Web site for public comment on whether the draft was consistent with federal advisory committee recommendations. July 2003: The agency completed the regulatory impact analysis for proposed rule. August 18, 2003: The proposed rule was published in the Federal Register. February 2005: EPA received notice of intent to sue. 
April 14, 2005: OMB staff met with outside parties to discuss several rules by the EPA Office of Water, including this rule. August 26, 2005: OMB received the draft final rule. November 2005: EPA entered into settlement agreement to complete the rule by December 2005. November 23, 2005: OMB completed review of draft final rule with change. December 2005: EPA completed the regulatory impact analysis for final rule. December 15, 2005: EPA Administrator signed the final rule. January 4, 2006: The final rule was published in the Federal Register. National Primary Drinking Water Regulations: Long Term 2 Enhanced Surface Water Treatment Rule (Surface Water Treatment 2) Identifying Information Rule Synopsis The rule is intended to protect public health against Cryptosporidium and other microbial pathogens in drinking water. Cryptosporidium is highly resistant to chemical disinfectants and can cause acute illness and death for people with weakened immune systems. Authorized by SDWA, the rule was one of a series of rules, including the Disinfection Byproducts 2 rule, intended to improve the quality of drinking water supplied by public water systems throughout the United States. Because of the complex and far-reaching implications of the rule, as well as the relationship between the Disinfection Byproducts 2 rule and this rule, EPA convened a FACA panel to help develop the policies in the rule. Regulatory Requirements Addressed in the Final Rule EPA discussed the following generally-applicable statutes and executive orders in the final rule: NTTAA: EPA adopted voluntary standards for monitoring the levels of one pathogen. PRA: The rule contained new information collection requirements for which EPA completed and submitted an Information Collection Request to OIRA for approval. RFA: EPA certified that the rule would not have a significant economic impact on a substantial number of small entities. EPA conducted a SBREFA advocacy review panel.
UMRA: EPA determined that the rule may contain a mandate resulting in annual expenditures of more than $100 million for state, local, and tribal governments, in the aggregate, or the private sector. EPA prepared an UMRA analysis, which included a consideration of the regulatory alternatives. Executive Order 12866 (Regulatory Planning and Review): EPA identified the rule as a significant regulatory action as defined in the executive order because of its economic significance. Therefore, EPA conducted an economic analysis and submitted the rule to OIRA for review. Executive Order 12898 (Federal Actions to Address Environmental Justice in Minority Populations or Low-Income Populations): EPA determined that since the rule applies uniformly to all communities, the health protections provided are equal across all minority and income groups served by systems regulated by the rule. Executive Order 13045 (Protection of Children from Environmental Health Risks and Safety Risks): EPA determined that the rule was economically significant and that the environmental risks addressed by the rule may have a disproportionate effect on children. Executive Order 13132 (Federalism): EPA concluded that the rule may have federalism implications because it may impose substantial direct costs on state or local governments, and the federal government will not provide the funds to pay those costs. Executive Order 13175 (Consultation and Coordination with Indian Tribal Governments): EPA concluded that the rule may have tribal implications because it may impose substantial direct compliance costs on tribal governments, and the federal government will not provide the funds necessary to pay those costs. Executive Order 13211 (Actions Concerning Regulations That Significantly Affect Energy Supply, Distribution, or Use): EPA determined that the rule is not likely to have a significant adverse effect on the supply, distribution, or use of energy. 
Recent Analytic Requirements Addressed EPA's regulatory impact analysis addressed the four analytical changes in the OMB economic guidelines that we reviewed for this study. The agency analyzed cost-effectiveness, discounted potential benefits and costs using discount rates of 7 percent and 3 percent, evaluated potential qualitative and quantitative benefits and costs, and conducted a probability analysis to assess the uncertainty associated with some potential impacts. Agency officials said that they assessed the cost-effectiveness of the regulatory alternatives in the rule because the OMB guidelines require it but that the three other analyses were conducted because the agency had already adopted them as best practices. The officials estimated that the cost-effectiveness analysis required from 1 to 2 months to complete, partly because the agency had not yet developed guidance for analyzing the cost-effectiveness of health-related rules. In addition, agency officials said that the cost-effectiveness analysis was only of limited use for selecting a final regulatory alternative. Changes Resulting from OIRA Review According to OMB's public database, OMB received the draft final rule on March 31, 2005, and review was complete on June 22, 2005. The rule was subsequently published "consistent with change." EPA's Executive Order 12866 compliance form for the rule, docketed with the OMB review of the draft of the final rule, describes the OMB changes as not substantive. The reviewed copy of the draft final rule shows changes throughout the draft rule to both the preamble and the rule text itself. Some appear to be strictly editorial, and others appear to have substantive effect. Two significant changes to the rule text itself are the addition of notification of violation requirements for public water systems in section 141.211 of the rule and changes to section 141.703 that provide additional circumstances under which data can be grandfathered under state approval.
Changes to the preamble include an indication that regulated systems may assume state approval of monitoring locations if explicit state approval is not forthcoming. Timeline Fall 1992: EPA convened an advisory committee to address the issue of disinfection and disinfectant byproducts and pathogen control issues; this led to the first Disinfection Byproducts rule. Spring 1993: A Cryptosporidium outbreak in Milwaukee, Wisconsin, sickened 400,000 people, roughly 50 percent of users of the municipal drinking water system. 1996: The SDWA authorized EPA to establish new treatment standards for drinking water and byproducts of the water treatment process. 1997: EPA convened an issue-specific federal advisory committee to develop SDWA rulemakings, including Stage 1 DBP Rule and Interim Surface Water Treatment Rule. December 16, 1998: Interim Enhanced Surface Water Treatment Rule was published in the Federal Register. March 1999 to September 2000: EPA reconvened the federal advisory committee to provide technical input on additional SDWA rulemakings, including the Long Term 2 Rule. August 9, 1999: EPA assigned Start Action Number for the rulemaking. Late 1999: EPA initiated the pre-panel stages of a SBREFA advocacy review panel. June 23, 2000: Advocacy review panel completed. September 2000: Federal advisory committee members signed Agreement in Principle stating consensus of the group. December 29, 2000: Federal advisory committee Agreement in Principle was published in the Federal Register. January 16, 2001: Revisions to Interim Enhanced Surface Water Treatment rulemaking were published in the Federal Register. October 17, 2001: EPA published pre-proposal draft of Long Term 2 Rule preamble and regulatory language on the agency Web site for public comment. January 14, 2002: The Long Term 1 Rule was published in the Federal Register. December 18, 2002: OIRA received the draft proposed rule. March 18, 2003: OIRA completed review of the draft proposed rule with change.
June 2003: The agency completed the regulatory impact analysis for proposed rule. August 11, 2003: The proposed rule was published in the Federal Register. February 2005: EPA received notice of intent to sue. March 31, 2005: OMB received the draft final rule. April 14, 2005: OMB staff met with outside parties to discuss several rules by the EPA Office of Water, including this rule. June 22, 2005: OMB completed review of draft final rule with change. November 2005: EPA entered into settlement agreement to complete the rule by December 2005. December 2005: The agency completed the regulatory impact analysis for final rule. December 15, 2005: Final rule was signed. January 5, 2006: The final rule was published in the Federal Register. Requirements on Content and Format of Labeling for Human Prescription Drug and Biological Products (Physician Labeling) Identifying Information Rule Synopsis The rule revises the requirements for the format and content of labeling for human prescription drugs and biological products, the information physicians use to learn about and prescribe these products. The rule governs physician drug labeling, which takes the form of package inserts. Package inserts are used as the basis for a large uniform prescription drug manual called The Physician's Desk Reference. FDA stated that the intent of the rule is to enhance the safe and effective use of prescription drug products and to reduce the number of adverse reactions resulting from medication errors caused by misunderstood or incorrectly applied drug information. Specifically, revisions require the labeling of new and recently approved products to include highlights of prescribing information and a table of contents, exclude less important and include more important content, meet new minimum graphical requirements, be accompanied by all applicable patient labeling approved by FDA, and clarify certain prescribing requirements.
Regulatory Requirements Addressed in the Final Rule FDA discussed the following generally-applicable statutes and executive orders in the final rule: NEPA: FDA determined that the rule does not have a significant effect on the human environment. PRA: The rule included new information collections for which FDA completed and submitted an Information Collection Request to OIRA for approval. RFA: FDA believes that the final rule would not have a significant impact on most small entities in this industry, but it is possible that a few small firms may be significantly affected by the final rule. FDA included a final regulatory flexibility analysis in the final rule. UMRA: FDA determined that the rule would not result in any 1-year expenditure by state, local, and tribal governments, in the aggregate, or by the private sector that would meet or exceed the relevant threshold of $115 million. Executive Order 12866 (Regulatory Planning and Review): FDA identified the rule as a significant regulatory action as defined by the executive order. FDA conducted an economic analysis and submitted the rule to OIRA for review. Executive Order 12988 (Civil Justice Reform): FDA determined that the rule does not have any retroactive effect. Executive Order 13132 (Federalism): FDA stated that certain state-level product liability claims would conflict with federal law or frustrate the purpose of federal regulation. FDA described six categories of product liability claims that the agency believed would be preempted by FDA's regulation of prescription drug labeling. Changes Resulting from OIRA Review At OIRA's suggestion, FDA deleted an estimate of the number of people who would submit applications for new drugs and a table detailing the estimated reporting burden for those subject to FDA's information collection request. Also at OIRA's suggestion, FDA increased the number of affected pharmaceutical firms that could be considered small.
We found documentation of OIRA's review in FDA's docket for the final rule but not for the proposed rule. We assume none was required for the proposed rule because, according to OIRA's www.reginfo.gov, it was not changed by OIRA review. Timeline Before 1992: CDER received feedback from physicians in the field that prescription drug labeling required revision. 1992: CDER convened the first focus group on the issue. August 2, 2000: OIRA received draft proposed rule. December 14, 2000: OIRA completed review of the draft proposed rule without change. December 22, 2000: The proposed rule was published in the Federal Register. August 2003: The draft final rule was approved by CDER. October 2003: The rule was sent to the Office of Chief Counsel, which performed a federalism analysis. November 2004: The rule received FDA approval and was sent to the Department of Health and Human Services and OMB simultaneously. January 10, 2005: OIRA received the draft final rule. April 1, 2005: FDA withdrew the rule from OIRA review at OIRA's request. April 8, 2005: FDA resubmitted the rule for OIRA review. January 17, 2006: OIRA completed its review of the final rule with change. January 24, 2006: The final rule was published in the Federal Register. Use of Ozone Depleting Substances; Removal of Essential Use Designations (Ozone Depleting Substances) Identifying Information Rule Synopsis The Clean Air Act required that FDA, in consultation with EPA, determine whether an FDA-regulated product that released ozone-depleting substances was essential. The rule removes the "essential use" designations granted previously by FDA for seven products emitting ozone-depleting substances from pressurized containers. As none of the seven products were being marketed in the United States, the rule removed unnecessary essential use designations.
The products were granted the designation by a previous FDA rule, which also stated that if essential use products were no longer marketed in the United States, the designation could be withdrawn. Regulatory Requirements Addressed in the Final Rule FDA discussed the following generally-applicable statutes and executive orders in the final rule: NEPA: FDA conducted an environmental assessment and considered potential impacts. FDA concluded that an environmental impact statement was not required. PRA: The rule does not impose any new information collection requests. RFA: FDA certified that the rule would not have a significant economic impact on a substantial number of small entities. UMRA: FDA did not expect the rule to result in any 1-year expenditure that would meet or exceed the relevant threshold of $118 million. Executive Order 12866 (Regulatory Planning and Review): FDA believed that the rule was not a significant regulatory action as defined by this executive order. However, OMB requested the rule for review because of its international implications. Executive Order 13132 (Federalism): FDA determined that the rule did not contain policies that have federalism implications. Changes Resulting from OIRA Review Under Executive Order 12866, FDA did not consider the rule a significant regulatory action as defined by the executive order. However, FDA believed that OMB would want to review the rule because of its implications for the Montreal Protocol, which includes agency obligations to EPA, and for amendments to the Clean Air Act. OIRA did not suggest any changes to the rule during the formal OMB review period. Timeline Early 2006: CDER began drafting the direct-to-final rule. May 2006: CDER approval process began. July 2006: Economic analysis completed. August 2006: Rule approved by CDER's Office of Regulatory Policy. September 2006: Rule approved by CDER Director.
October 2006: Rule approved by Chief Counsel and management at FDA. October 23, 2006: OIRA received draft final rule. November 28, 2006: OIRA completed review of draft final rule with no change. December 2006: Direct-to-final rule published in the Federal Register. Current Good Manufacturing Practice in Manufacturing, Packaging, Labeling, or Holding Operations for Dietary Supplements (Dietary Supplements) Identifying Information Rule Synopsis The rule establishes the minimum current good manufacturing practice for manufacturing, packaging, labeling, and holding dietary supplements. FDA was authorized by the Dietary Supplement Health and Education Act of 1994 to prescribe by regulation good manufacturing practices for dietary supplements. FDA published an Advance Notice of Proposed Rulemaking in the Federal Register on February 6, 1997 (62 Fed. Reg. 5700). FDA published a proposed rule in the Federal Register on March 13, 2003. Regulatory Requirements Addressed in the Final Rule FDA discussed the following statutory and regulatory requirements in the final rule: NEPA: FDA determined that the final rule did not trigger the requirements of NEPA because the action is of a type that does not individually or cumulatively have a significant effect on the human environment. PRA: The rule included new information collections for which FDA completed and submitted an Information Collection Request to OIRA for approval. RFA: FDA determined that the rule would have a significant economic impact on a substantial number of small entities and prepared a final regulatory flexibility analysis. UMRA: FDA determined that the rule may contain a mandate resulting in annual expenditures of more than $122 million for state, local, and tribal governments, or the private sector. FDA prepared a UMRA analysis, which included a consideration of the rule’s effects on future costs. 
Executive Order 12866 (Regulatory Planning and Review): FDA identified the rule as a significant regulatory action as defined by the executive order because of its economic significance. Therefore, FDA conducted an economic analysis and submitted the rule to OIRA for review. Executive Order 13132 (Federalism): FDA determined that the rule did not have a substantial direct effect on the states, on the relationship between the national government and the states, or on the balance of power and responsibilities among the various levels of government. FDA concluded that the rule did not have federalism implications. Recent Analytic Requirements Addressed FDA’s regulatory impact analysis for the final rule included the four analytical changes in OMB Circular No. A-4 that we reviewed for this study. For example, the agency analyzed cost-effectiveness and evaluated some qualitative impacts, discounted future benefits and costs using discount rates of 3 percent and 7 percent, and conducted a probability analysis to assess the uncertainty associated with some potential impacts. FDA officials said that they used the two discount rates because of Circular No. A-4 but added that they have always followed OMB guidelines on discount rates. The officials could not recall whether the cost-effectiveness analysis was conducted specifically because of Circular No. A-4, but indicated that it is currently an agency best practice. The officials added that systematically evaluating qualitative impacts and using probability analysis to analyze uncertainty are also agency best practices. For example, the officials said that they analyzed uncertainty using a probability analysis because it is a best practice when there is a large degree of uncertainty about the estimated benefits and costs. According to these officials, OMB Circular No. 
A-4 is useful because it made transparent what OMB reviewers expect for analytical support on economically significant rules and, more generally, because it serves as a blueprint for conducting a regulatory impact analysis. Changes Resulting from OIRA Review FDA made changes at the suggestion of OIRA to both the preamble and the regulatory text. FDA sent draft versions of the final rule to OIRA twice, once in October 2005 and again in June 2006. FDA staff stated that the rule was not returned by OIRA; rather, as a result of discussions, FDA staff made a number of changes and sent a second draft of the rule to OIRA. Our review of both docketed copies of the draft final rule that OIRA reviewed found edits to both the preamble and the regulatory text. For example, in addition to editorial changes, changes to the draft rule text show that a requirement in the rule to save reserve samples for 3 years was changed to 2 years. This change is also reflected in the preamble language. Changes to the regulatory impact analysis—included with the rule in its entirety in keeping with FDA practice—show additions of text justifying the rulemaking and additional descriptions of calculations of costs associated with illness and injury resulting from contaminated or mislabeled dietary supplements. We found documentation of the final stage of OMB review for both drafts sent to OMB during that stage. Timeline October 25, 1994: Dietary Supplement Health and Education Act of 1994, Pub. L. No. 103-417, was enacted, granting FDA authority to prescribe good manufacturing practices for dietary supplements by regulation. November 1995: An industry group requested that FDA consider a rulemaking on good manufacturing practices for dietary supplements. February 6, 1997: An Advance Notice of Proposed Rulemaking was published in the Federal Register. 1998-1999: FDA management considered and committed to a rulemaking. 
1999: FDA undertook a series of outreach activities, including public meetings and tours of dietary supplement manufacturing facilities. October 4, 2002: OIRA received the draft proposed rule. January 16, 2003: OIRA completed review of the draft proposed rule with change. March 13, 2003: The proposed rule was published in the Federal Register. October 25, 2005: OIRA received the draft final rule. November 29, 2005, October 4, 2006, and November 16, 2006: OIRA staff met with outside parties to discuss this rule. May 8, 2007: OIRA completed review of draft final rule with change. June 25, 2007: Final rule was published in the Federal Register, and FDA published an interim final rule on a process for requesting exemption from the dietary supplement current good manufacturing practices requirement for 100 percent identity testing of dietary ingredients. Food Labeling: Nutrient Content Claims, Expansion of the Nutrient Content Claim “Lean” (Lean Nutrient Claims) Identifying Information Rule Synopsis The rule amends FDA’s food labeling regulations to allow the use of the word “lean” more frequently by including it for use with “mixed dishes not measurable with a cup” that fulfill certain criteria for fat and cholesterol content. The intent was to provide reliable information that would assist consumers in maintaining healthy dietary practices. The rule was promulgated in response to a petition by Nestlé Corporation requesting that the category “mixed dishes not measurable with a cup” be included among those that can be called “lean.” Regulatory Requirements Addressed in the Final Rule FDA discussed the following generally-applicable statutes and executive orders in the final rule: NEPA: FDA determined that the final rule did not trigger the requirements of NEPA because the action is of a type that does not individually or cumulatively have a significant effect on the human environment. 
RFA: FDA certified that the rule would not have a significant economic impact on a substantial number of small entities. PRA: The rule did not contain any new information collection requests. UMRA: FDA did not expect the rule to result in any 1-year expenditure by state, local, and tribal governments, in the aggregate, or by the private sector, that would meet or exceed the relevant threshold of $122 million. Executive Order 12866 (Regulatory Planning and Review): FDA determined that the rule was not a significant regulatory action as defined by the executive order. However, OMB considered the rule a significant regulatory action. Therefore, FDA submitted the rule to OIRA for review. Executive Order 13132 (Federalism): FDA determined that the rule would have a preemptive effect on state law, but concluded that the preemptive effect of the rule is consistent with Executive Order 13132. Changes Resulting from OIRA Review After 100 days of formal OMB review, the final rule was not changed by OMB. Timeline January 9, 2004: Nestlé submitted the petition for the rulemaking. April 22, 2004: FDA filed the Nestlé petition for comprehensive review. November 25, 2005: Proposed rule published in the Federal Register. February 16, 2006: FDA notified state health commissioners, state agricultural commissioners, food program directors, and FDA field personnel and drug program directors of the intended amendment. September 12, 2006: OIRA received draft final rule. December 21, 2006: OIRA completed review of the draft final rule with no change. January 12, 2007: Final rule published in the Federal Register. Electronic Shareholder Forums Identifying Information Agency: Securities and Exchange Commission, Division of Corporation Finance Rule classification: Not applicable RIN: 3235-AJ92 Federal Register citation: 73 Fed. Reg. 4450 Regulations.gov docket number: SEC-2007-1058 (proposed rule); SEC-2008-0133 (final rule) Rule Synopsis The rule encourages the use of online shareholder forums. 
It removes legal ambiguity and both real and perceived impediments to private sector experimentation with the use of the Internet for communication. SEC stated that such communication technology can potentially better vindicate shareholders’ rights, for example, to elect directors and improve discussions on a variety of subjects that are now considered only periodically and indirectly through the proxy process. The rule gives liability protection to parties maintaining or operating an electronic shareholder forum. Regulatory Requirements Addressed in the Final Rule SEC discussed the following generally-applicable statutes in the final rule: RFA: SEC analyzed whether the rule would have a significant economic impact on a substantial number of small entities and prepared both an initial and final regulatory flexibility analysis. PRA: The rule did not include any new information collection requirements. Changes Resulting from OIRA Review As an independent regulatory agency, SEC is not subject to OIRA regulatory review under Executive Order 12866. Timeline 2006: Lawsuit resulted in court decision concerning proxies. May 7, 24, and 25, 2007: SEC hosted three proxy roundtables to gather information from a wide variety of parties interested in proxy issues. July 12, 2007: Staff formally recommended proposed rule for SEC consideration. July 27, 2007: At an open meeting, SEC approved issuance of proposed rule. August 3, 2007: Proposed rule was published in the Federal Register. November 16, 2007: Staff formally recommended adopting final rule to SEC. November 28, 2007: At an open meeting, SEC approved issuance of final rule. January 25, 2008: Final rule was published in the Federal Register. Internet Availability of Proxy Materials (Internet Proxies) Identifying Information Agency: Securities and Exchange Commission, Division of Corporation Finance Rule classification: Major RIN: 3235-AJ47 Federal Register citation: 72 Fed. Reg. 
4148 Regulations.gov docket number: SEC-2005-0386 (proposed); SEC-2007-0134 (final) Rule Synopsis The rule provides an alternative method of providing proxy materials to shareholders by posting the materials on an Internet site and notifying shareholders of their availability. The rule is voluntary, and issuers of securities are not required to offer shareholders an electronic distribution option. Regulatory Requirements Addressed in the Final Rule SEC discussed the following generally-applicable statutory requirements in the final rule: PRA: The rule included new information collections for which SEC completed and submitted an Information Collection Request to OIRA for approval. RFA: SEC analyzed whether the rule would have a significant economic impact on a substantial number of small entities and prepared an initial and final regulatory flexibility analysis. Changes Resulting from OIRA Review As an independent regulatory agency, SEC is not subject to OIRA regulatory review under Executive Order 12866. Timeline Spring 2005: SEC regulatory development staff began drafting the proposed rule. November 14, 2005: Staff formally recommended that SEC approve proposed rule. November 29, 2005: SEC approved issuance of proposed rule at an open meeting. December 15, 2005: The proposed rule was published in the Federal Register. November 27, 2006: Staff formally recommended adopting final rule for SEC consideration. December 13, 2006: SEC approved issuance of final rule at an open meeting. January 29, 2007: The final rule was published in the Federal Register. Extension of Interactive Data Voluntary Reporting Program on the EDGAR System to Include Mutual Fund Risk/Return Summary Information (Mutual Fund Data Reporting) Identifying Information Agency: Securities and Exchange Commission, Division of Investment Management Rule classification: Not applicable RIN: 3235-AJ59 Federal Register citation: 72 Fed. Reg. 
39,290 Regulations.gov docket number: SEC-2007-0220 (proposed rule); SEC-2007-0958 (final rule) Rule Synopsis The rule encourages mutual funds to participate in a voluntary reporting program to tag selected risk/return data in a standard format in eXtensible Business Reporting Language (XBRL). According to the preamble, “[W]ith almost half of all U.S. households owning mutual funds . . . improving the quality of mutual fund disclosure is important to millions of Americans.” When tagged in XBRL, the data become interactive and can be retrieved, searched, or analyzed by software applications in an automated fashion. The rule encourages voluntary participation in a program of electronically tagging risk/return information by mutual funds, so that SEC can evaluate the usefulness of such tagging to interested parties. Information submitted would be included in the Electronic Data Gathering, Analysis, and Retrieval System (EDGAR) filings. Regulatory Requirements Addressed in the Final Rule SEC discussed the following generally-applicable statutes in the final rule: PRA: The rule included new information collections for which SEC completed and submitted an Information Collection Request to OIRA for approval. RFA: SEC analyzed whether the rule would have a significant economic impact on a substantial number of small entities and prepared both an initial and final regulatory flexibility analysis. Changes Resulting from OIRA Review As an independent regulatory agency, SEC is not subject to OIRA regulatory review under Executive Order 12866. Timeline March 2006: ICI announced an initiative to create a taxonomy of interactive data tags for the risk/return summary. June 12, 2006: SEC held a public roundtable on the use of interactive data for mutual funds. September 24, 2006: First draft of ICI tags distributed to working group for comment; SEC staff provided comments on draft taxonomy over next several weeks. January 4, 2007: ICI released the XBRL tags to the public. 
January 18, 2007: Staff formally recommended proposing release for SEC consideration. January 31, 2007: SEC approved issuance of proposed rule. February 12, 2007: Proposed rule published in the Federal Register. May 16, 2007: ICI submitted taxonomy to XBRL International for acknowledgment. June 6, 2007: Staff formally recommended adopting final rule for SEC. June 20, 2007: SEC approved issuance of final rule at an open meeting. July 17, 2007: Final rule was published in the Federal Register. Mutual Fund Redemption Fees (Mutual Fund Redemption Fees) Identifying Information Agency: Securities and Exchange Commission, Division of Investment Management Rule classification: Major RIN: 3235-AJ51 Federal Register citation: 71 Fed. Reg. 58,257 Regulations.gov docket number: SEC-2006-0292 (proposed rule); SEC-2006-1284 (final rule) Rule Synopsis The rule amends SEC Rule 22c-2, which permits registered open-end investment companies (funds) to impose a redemption fee of up to 2 percent on the redemption of fund shares. The rule is intended to allow funds to recoup some of the direct and indirect costs of frequent trading and to reduce the dilution of fund shares. The rule also requires that the fund, regardless of whether it imposes a redemption fee, enter into a written agreement with each of its intermediaries (such as broker-dealers or retirement plan administrators) under which the intermediaries must provide the fund, upon request, information about the identity of shareholders and information about their transactions in fund shares. These amendments are designed to address certain technical issues that arose after the rule was adopted and reduce the cost of compliance to both funds and financial intermediaries. 
Regulatory Requirements Addressed in the Final Rule SEC discussed the following generally-applicable statutory requirements in the final rule: PRA: The rule included new information collections for which SEC completed and submitted an Information Collection Request to OIRA for approval. RFA: SEC analyzed whether the rule would have a significant economic impact on a substantial number of small entities and prepared both an initial and final regulatory flexibility analysis. Changes Resulting from OIRA Review As an independent regulatory agency, SEC is not subject to OIRA regulatory review under Executive Order 12866. Timeline March 11, 2005: SEC adopted rule 22c-2 under the Investment Company Act; in the final notice, SEC solicited additional comment on 22c-2. February 3, 2006: Staff formally recommended that SEC amend Rule 22c-2. February 28, 2006: SEC approved issuance of proposed amendments to Rule 22c-2. March 7, 2006: The proposed amended rule was published in the Federal Register. September 1, 2006: Staff formally recommended that SEC adopt amended rule. September 27, 2006: SEC approved issuance of final rule. October 3, 2006: The final rule was published in the Federal Register. This appendix presents examples of different methods that agencies used to document the OIRA review process under Executive Order 12866 in their rulemaking dockets (See figs. 5, 6, and 7.) 
Appendix III: Examples of OIRA Review Documentation Appendix IV: Comments from the Office of Management and Budget Appendix V: Comments from the Securities and Exchange Commission Appendix VI: Comments from the Department of Health and Human Services, Food and Drug Administration Appendix VII: Comments from the Environmental Protection Agency Appendix VIII: GAO Contact and Staff Acknowledgments In addition to the contact named above, Timothy Bober, Assistant Director; Timothy Guinane; Edward Leslie; Andrea Levine; James McTigue; Susan Offutt; Melanie Pappasian; Jacquelyn Pontious; Robert Powers; Joseph Santiago; Wesley Sholtes; William Trancucci; Michael Volpe; Gregory Wilmoth; and Diana Zinkl made key contributions to this report. Related GAO Products Financial Regulation: A Framework for Crafting and Assessing Proposals to Modernize the Outdated U.S. Financial Regulatory System. GAO-09-216. Washington, D.C.: January 8, 2009. Chemical Assessments: Low Productivity and New Interagency Review Process Limit the Usefulness and Credibility of EPA’s Integrated Risk Information System. GAO-08-440. Washington, D.C.: March 7, 2008. Telecommunications: FCC Should Take Steps to Ensure Equal Access to Rulemaking Information. GAO-07-1046. Washington, D.C.: September 6, 2007. Reexamining Regulations: Opportunities Exist to Improve Effectiveness and Transparency of Retrospective Reviews. GAO-07-791. Washington, D.C.: July 16, 2007. Regulatory Flexibility Act: Congress Should Revisit and Clarify Elements of the Act to Improve Its Effectiveness. GAO-06-998T. Washington, D.C.: July 20, 2006. Federal Rulemaking: Perspectives on 10 Years of Congressional Review Act Implementation. GAO-06-601T. Washington, D.C.: March 30, 2006. Federal Rulemaking: Past Reviews and Emerging Trends Suggest Issues That Merit Congressional Attention. GAO-06-228T. Washington, D.C.: November 1, 2005. Electronic Rulemaking: Progress Made in Developing Centralized E-Rulemaking System. 
GAO-05-777. Washington, D.C.: September 9, 2005. Regulatory Reform: Prior Reviews of Federal Regulatory Process Initiatives Reveal Opportunities for Improvements. GAO-05-939T. Washington, D.C.: July 27, 2005. Economic Performance: Highlights of a Workshop on Economic Performance Measures. GAO-05-796SP. Washington, D.C.: July 2005. Paperwork Reduction Act: New Approach May Be Needed to Reduce Government Burden on Public. GAO-05-424. Washington, D.C.: May 20, 2005. Unfunded Mandates: Views Vary About Reform Act’s Strengths, Weaknesses, and Options for Improvement. GAO-05-454. Washington, D.C.: March 31, 2005. Unfunded Mandates: Analysis of Reform Act Coverage. GAO-04-637. Washington, D.C.: May 12, 2004. Rulemaking: OMB’s Role in Reviews of Agencies’ Draft Rules and the Transparency of Those Reviews. GAO-03-929. Washington, D.C.: September 22, 2003. Federal Rulemaking: Procedural and Analytical Requirements at OSHA and Other Agencies. GAO-01-852T. Washington, D.C.: June 14, 2001. Regulatory Flexibility Act: Implementation in EPA Program Offices and Proposed Lead Rule. GAO/GGD-00-193. Washington, D.C.: September 20, 2000. Federalism: Previous Initiatives Have Little Effect on Agency Rulemaking. GAO/T-GGD-99-131. Washington, D.C.: June 30, 1999. Regulatory Accounting: Analysis of OMB’s Reports on the Costs and Benefits of Federal Regulation. GAO/GGD-99-59. Washington, D.C.: April 20, 1999. Federal Rulemaking: Agencies Often Published Final Actions Without Proposed Rules. GAO/GGD-98-126. Washington, D.C.: August 31, 1998. Regulatory Management: Implementation of Selected OMB Responsibilities Under the Paperwork Reduction Act. GAO/GGD-98-120. Washington, D.C.: July 9, 1998. Regulatory Reform: Agencies Could Improve Development, Documentation, and Clarity of Regulatory Economic Analyses. GAO/RCED-98-142. Washington, D.C.: May 26, 1998. Regulatory Reform: Implementation of Small Business Advocacy Review Panel Requirements. GAO/GGD-98-36. Washington, D.C.: March 18, 1998. 
Unfunded Mandates: Reform Act Has Had Little Effect on Agencies’ Rulemaking Actions. GAO/GGD-98-30. Washington, D.C.: February 4, 1998. Regulatory Reform: Changes Made to Agencies’ Rules Are Not Always Clearly Documented. GAO/GGD-98-31. Washington, D.C.: January 8, 1998. Managing for Results: Regulatory Agencies Identified Significant Barriers to Focusing on Results. GAO/GGD-97-83. Washington, D.C.: June 24, 1997.
Regulation is one of the principal tools that the government uses to implement public policy. As part of the rulemaking process, federal agencies must comply with an increasing number of procedural and analytical requirements. GAO was asked to examine how broadly applicable rulemaking requirements cumulatively have affected (1) agencies' rulemaking processes, including in particular the effects of requirements added to the process since 2003, and (2) transparency of the Office of Information and Regulatory Affairs (OIRA) regulatory review process. To address these objectives, GAO reviewed selected rules issued between January 2006 and May 2008 and associated dockets and also interviewed knowledgeable agency and OIRA officials. The agencies GAO reviewed had little data on the time and resources used to comply with regulatory requirements, making it difficult to evaluate the effects of these requirements on rulemaking. All the agencies set milestones for regulatory development. During our review, only the Department of Transportation (DOT) provided data showing that it tracked and reported on milestones, but EPA and FDA provided similar information in their agency comments. The agencies GAO reviewed also could provide little systematic data on the resources they used--such as staff hours, contract costs, and other expenses--in developing rules. DOT and SEC have attempted to identify staff time expended on individual rules but are encountering difficulties generating usable and reliable data. Despite the challenges they have encountered in attempting to track time and resources in rulemaking, agency officials identified potential benefits to the management of their processes if they had such information to evaluate. Systematic tracking and reporting by agencies on their schedules and milestones would also be consistent with internal control standards. 
Our review of 139 major rules, including 16 case-study rules, revealed that most triggered analytical requirements under the Paperwork Reduction Act (PRA), Regulatory Flexibility Act (RFA), and Executive Order 12866, but few other requirements. Agency officials reported that requirements added to the rulemaking process by the Office of Management and Budget (OMB) since 2003 sometimes required a learning period when first implemented, but their agencies either already performed the added requirements or recognized the revisions as best practices. The officials instead identified long-standing requirements of the PRA and the RFA as generally requiring a more significant investment of resources. Based on the limited information available, the average time needed to complete a rulemaking across our 16 case-study rules was about 4 years, with a range from about 1 year to nearly 14 years, but there was considerable variation among agencies and rules. OIRA's reviews of agencies' draft rules often resulted in changes. Of 12 case-study rules subject to OIRA review, 10 resulted in changes, about half of which included changes to the regulatory text. Agencies used various methods to document OIRA's reviews, which generally met disclosure requirements, but the transparency of this documentation could be improved. In particular, some prior issues persist, such as uneven attribution of changes made during the OIRA review period and differing interpretations regarding which changes are "substantive" and thus require documentation. Of eight prior GAO recommendations to improve transparency, OIRA has implemented only one--to clarify information posted about meetings with outside parties regarding draft rules under OIRA review.
Background GPRAMA is a significant enhancement of GPRA, which was the centerpiece of a statutory framework that Congress put in place during the 1990s to help resolve long-standing performance and management problems in the federal government and provide greater accountability for results. GPRAMA was likewise intended to address a number of federal performance management challenges, including focusing attention on crosscutting issues, enhancing the use and usefulness of performance information, increasing transparency, and ensuring leadership commitment and attention to improving performance. Our June 2013 report assessing the initial government-wide implementation of GPRAMA described some of the important steps OMB and agencies had taken to implement key provisions of GPRAMA. These included developing agency-level and government-wide goals, designating officials to key leadership roles, and using the Performance Improvement Council (PIC) to facilitate the exchange of information to strengthen agency performance management. GPRAMA revises existing provisions and adds new requirements, including the following: GPRAMA includes requirements that OMB and agencies establish different types of government-wide and agency-level performance goals. These include: Government-wide: cross-agency priority (CAP) goals. OMB is required to coordinate with agencies to establish federal government priority goals—otherwise referred to as CAP goals—that include outcome-oriented goals covering a limited number of policy areas as well as goals for management improvements needed across the government. The act also requires that OMB—with agencies— develop annual federal government performance plans to, among other things, define the level of performance to be achieved through each of the CAP goals. OMB established the first set of CAP goals for a 2-year interim period in February 2012. In March 2014, OMB identified the next set of CAP goals, which it is to update every 4 years. 
Agency-level: agency priority goals (APG). Every 2 years, GPRAMA requires the heads of certain agencies, in consultation with OMB, to identify a subset of agency goals as APGs. These goals are to reflect the highest priorities of each of these agencies, and to be informed by the CAP goals as well as consultations with relevant congressional committees and other interested parties. Twenty-three agencies identified a total of 91 APGs covering fiscal years 2014 through 2015. GPRAMA provided a statutory basis for selected senior leadership positions that had been created by executive orders, presidential memorandums, or OMB guidance. GPRAMA established these positions in law, provided responsibilities for various aspects of performance improvement, and elevated some of them as described below. Chief operating officer (COO). The deputy agency head, or equivalent, is designated COO, with overall responsibility for improving agency management and performance. Performance improvement officer (PIO). Agencies are required to designate a senior executive within the agency as PIO, who reports directly to the COO and has responsibilities to assist the agency head and COO with performance management activities. Goal leaders. GPRAMA requires that goal leaders be designated for CAP goals and APGs, and OMB guidance requires that goal leaders also be designated for strategic objectives. CAP goals have at least two goal leaders—one from the Executive Office of the President and the other from a key responsible agency. APGs have a goal leader, and OMB guidance directs agencies to designate a deputy goal leader to support the goal leader. GPRAMA also established the PIC in law and included additional responsibilities. Originally created by a 2007 executive order, the PIC is charged with assisting OMB to improve the performance of the federal government and achieve the CAP goals. 
Among its other responsibilities, the PIC is to facilitate the exchange among agencies of useful performance improvement practices and work to resolve government-wide or crosscutting performance issues. The PIC is chaired by the Deputy Director for Management at OMB and includes agency PIOs from each of the 24 CFO Act agencies as well as other PIOs and individuals designated by the chair. GPRAMA and related OMB guidance require the regular review of progress in achieving goals and objectives through performance reviews. Strategic reviews. OMB’s 2012 guidance implementing GPRAMA established a strategic review process in which agencies, beginning in 2014, were to conduct leadership-driven, annual reviews of their progress toward achieving each strategic objective—the outcome or impact the agency is intending to achieve through its various programs and initiatives—established in their strategic plans (and updated in their annual performance plans). Data-driven reviews. Data-driven performance reviews are regularly scheduled—at least quarterly—structured meetings used by organizational leaders and managers to review and analyze data on progress toward key performance goals and other management- improvement priorities. For each APG, GPRAMA requires agencies to conduct reviews to assess progress toward the goal and risk of not meeting it, and develop strategies to improve performance, as needed. These reviews are to be led by the agency head and COO, with the support of the PIO, and include relevant goal leaders. Coordination with relevant parties both within and outside the agency that contribute to goal accomplishment is also required. GPRAMA also requires that the Director of OMB, with the support of the PIC, review progress toward each CAP goal with the appropriate lead government official at least quarterly. 
Specifically, these reviews should examine the progress made over the most recent quarter, overall trends, the likelihood of meeting the planned level of performance and, if necessary, strategies to improve performance. GPRAMA includes several provisions related to reporting of performance information. Performance.gov. OMB is required to develop a single, government-wide performance website to communicate government-wide and agency performance information. The website—implemented by OMB as Performance.gov—is required to make available information on APGs and CAP goals, updated on a quarterly basis; agency strategic plans, annual performance plans, and annual performance reports; and an inventory of all federal programs. Program inventory. OMB is required to make publicly available, on a central government-wide website, a list of all federal programs identified by agencies, along with related budget and performance information. Performance information quality. Agencies are required to describe how they are ensuring the accuracy and reliability of the data used to measure progress toward APGs and performance goals, including an identification of the following five areas:
o the means used to verify and validate the data;
o the sources for the data;
o the level of accuracy required for the intended use of the data;
o any limitations to the data at the required level of accuracy; and
o how the agency will compensate for such limitations (if needed) to reach the required level of accuracy.
Agencies are required to provide information to OMB that addresses all five requirements for each of their APGs for publication on Performance.gov. Agencies also must address all five requirements for performance goals in their performance plans and reports. Major management challenges. Agencies are required to address major management challenges in their performance plans.
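The five reporting requirements above lend themselves to a simple completeness check. The sketch below is purely illustrative—the field names and the `missing_areas` helper are hypothetical, not part of any official OMB or agency tool—but it shows how a disclosure could be screened for the five statutory areas.

```python
# Illustrative sketch (hypothetical field names, not an official tool):
# checking that a data-quality disclosure for a performance goal addresses
# all five areas GPRAMA requires: verification/validation means, data
# sources, required accuracy, data limitations, and compensating actions.

REQUIRED_AREAS = {
    "verification_and_validation",
    "data_sources",
    "required_accuracy",
    "data_limitations",
    "compensating_actions",
}

def missing_areas(disclosure):
    """Return, sorted, the required areas that are absent or left blank."""
    return sorted(
        area for area in REQUIRED_AREAS
        if not disclosure.get(area, "").strip()
    )

# An example disclosure that addresses only the first three areas.
disclosure = {
    "verification_and_validation": "Quarterly audits of source systems",
    "data_sources": "Agency case-management database",
    "required_accuracy": "+/- 2 percentage points",
    "data_limitations": "",  # left blank
}
gaps = missing_areas(disclosure)  # the two areas still to be addressed
```

A review team could run such a check across every APG disclosure before quarterly publication, flagging any goal whose `gaps` list is nonempty.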
These challenges may include programs or management functions that have greater vulnerability to fraud, waste, abuse, and mismanagement, such as those issues included in our high-risk list or identified by inspectors general, where a failure to perform well could seriously affect an agency’s ability to achieve its mission or goals. The concepts described above and their relationships to each other are represented in figure 2, which summarizes them and highlights areas in which our recent work has focused. The Executive Branch Needs to Take Additional Actions to Address Crosscutting Issues, but OMB Has Increased Emphasis on Governance of Cross-Agency Priority Goals OMB and Agencies Continue to Miss Opportunities to Address Crosscutting Issues Many of the meaningful results that the federal government seeks to achieve, such as those related to protecting the environment, promoting public health, and providing homeland security, require the coordinated efforts of more than one federal agency, level of government, or sector. Even with sustained leadership, crosscutting issues are difficult to address because they may require agencies and Congress to reexamine (within and across various mission areas) the fundamental structure, operation, funding, and performance of a number of long-standing federal programs or activities. Collaboration and improved working relationships across agencies are critical tools for addressing the issues of fragmentation, overlap, and duplication our recent work has highlighted. Additionally, they are fundamental to addressing many of the issues that we have designated as high risk due to their vulnerabilities to fraud, waste, abuse, and mismanagement or most in need of broad-based transformation. We have found that resolving many of these issues requires better collaboration among agencies, levels of government, and sectors. 
For more than two decades, we have reported on agencies' missed opportunities for improved collaboration through the effective implementation of GPRA and, more recently, GPRAMA. Now, more than 20 years since GPRA's passage, our work continues to demonstrate that the needed collaboration is not sufficiently widespread. The examples in the textbox below show areas from our high-risk list (improving and modernizing federal disability programs and improving federal oversight of food safety) that demonstrate the need for greater collaboration on crosscutting issues. Examples of 2015 High-Risk Areas Demonstrating the Continued Need to Address Crosscutting Issues Federal Disability Programs Remain Fragmented and a High-Risk Area Federal disability programs across government remain fragmented and in need of modernization. Numerous federal programs provide a patchwork of services and supports to people with disabilities and work independently without a unified vision and strategy or set of goals to guide their outcomes. Our 2015 update to our High-Risk Series found that progress in improving and modernizing disability programs has been mixed. OMB has made some progress toward enhancing coordination across programs that support employment for people with disabilities, but it has not established a larger vision for disability programs that includes appropriate government-wide goals and strategies for achieving those goals. OMB needs a government-wide action plan that describes how federal agencies will improve coordination and set measurable goals that support employment for people with disabilities beyond the public sector. Such a plan should identify additional opportunities to build capacity and leverage existing government resources. Continued planning, management focus, and coordination can improve and modernize federal disability programs.
Federal Food Safety Is a High-Risk Area and in Need of Improved Oversight Foodborne illness is a common, costly, yet largely preventable public health concern. According to the Centers for Disease Control and Prevention, each year nearly 50 million people in the United States get sick and roughly 3,000 die due to foodborne illness. For more than a decade, we have reported on the fragmented federal food safety system, and we added federal oversight of food safety to our high-risk areas because of risks to the economy and to public health and safety. In December 2014, we reported that the Department of Health and Human Services (HHS) and the U.S. Department of Agriculture (USDA) have taken steps to implement GPRAMA requirements but could more fully address crosscutting food safety efforts. HHS and USDA vary in the amount of detail they provide on their crosscutting food safety efforts, and they do not include several relevant crosscutting efforts in their strategic and performance planning documents. HHS and USDA have mechanisms in place to facilitate interagency coordination on food safety that focus on specific issues, but these mechanisms do not provide for broad-based, centralized collaboration. A centralized collaborative mechanism on food safety is important to foster effective interagency collaboration and enhance food safety oversight. We recommended that HHS and USDA build upon their efforts to implement GPRAMA requirements to fully address crosscutting food safety efforts. We also asked Congress to consider directing OMB to develop a government-wide food safety performance plan and to formalize the Food Safety Working Group in statute to help ensure sustained leadership across food safety agencies over time. HHS and USDA agreed with our recommendation, and in February 2015, HHS updated its strategic plan to more fully describe how it is working with other agencies to achieve its food-safety-related goals and objectives.
As of August 2015, our recommendation to USDA and our matters for congressional consideration remain unimplemented. Our annual reports on areas where opportunities exist for executive branch agencies or Congress to reduce, eliminate, or better manage fragmentation, overlap, or duplication; to achieve cost savings; or to enhance revenue have also included areas in which greater collaboration is needed to address crosscutting issues. See the textbox below. Examples of Crosscutting Issues Identified in GAO's 2015 Annual Report on Fragmentation, Overlap, and Duplication Fragmentation, Overlap, and Potential for Duplication Exist among Nonemergency Medical Transportation Programs Access to transportation services is essential for millions of Americans to fully participate in society and access human services, including medical care. Our April 2015 report on opportunities to reduce fragmentation, overlap, and duplication found that 42 programs across six federal departments provide funding for nonemergency medical transportation (NEMT) to individuals who cannot provide their own transportation due to age, disability, or income constraints. Coordination of NEMT programs at the federal level is limited, and there is fragmentation, overlap, and potential for duplication across these programs. An interagency coordinating council was established to enhance federal, state, and local coordination activities, and it has taken some actions to address program coordination. However, the council has provided limited leadership and has not convened since 2008. To improve efficiency, we recommended that the Department of Transportation, which chairs the council, take steps to enhance coordination among the programs that provide NEMT. The department agreed that more work is needed and said that the Federal Transit Administration is asking its technical assistance centers to assist in developing responses to NEMT challenges.
In addition, as of June 1, 2015, the Federal Transit Administration reported working to develop a new 2-year strategy for addressing NEMT coordination among federal agencies and plans to develop and propose a cost-sharing model that can be applied to federal programs that provide funding for NEMT. Strengthened Coordination Could Increase Efficiency and Effectiveness of Consumer Product Safety Oversight The oversight of consumer product safety is a complex system involving a number of federal agencies. However, as our April 2015 report on opportunities to reduce fragmentation, overlap, and duplication highlighted, oversight of consumer product safety is fragmented across agencies, overlaps jurisdictions, or is unclear for certain products. In some cases, agencies regulate different components of, or carry out different regulatory activities for, the same product. Agencies reported that they collaborate to address specific consumer product safety topics. For example, officials from the Coast Guard, which regulates safety standards for recreational boats, said they work informally with the Consumer Product Safety Commission when the need arises. However, we did not identify a formal mechanism for addressing such issues more comprehensively, and no single entity or mechanism exists to help coordinate the agencies that collectively oversee consumer product safety. Without one, agencies may miss opportunities to leverage resources and address challenges, including those related to fragmentation and overlap. In response to our recommendation that the Coast Guard and the Consumer Product Safety Commission establish a formal coordination mechanism, in May 2015 the two agencies signed a formal policy document establishing such a mechanism. We also recommended that Congress establish a formal collaboration mechanism to address comprehensive oversight and inefficiencies related to fragmentation and overlap. As of August 2015, no formal collaboration mechanism had been established.
The textbox below shows additional examples of areas in which we have identified the need for additional work to address crosscutting issues. Examples from GAO's Work from 2013-2015 of Continued Challenges in Addressing Crosscutting Issues Additional Leadership Needed to Achieve Interagency Efforts for the Federal Veterinarian Workforce Department of Agriculture and Department of Health and Human Services (HHS) veterinarians perform critical work for public and animal health and for emergency response to economically devastating or highly contagious animal diseases. In May 2015, we reported that the Office of Personnel Management (OPM) and other federal agencies have taken steps toward achieving the goals outlined in OPM's government-wide strategic plan for the veterinarian workforce, primarily through an interagency group OPM created. However, for each of the three goals, the group did not follow through on next steps and made limited progress. According to OPM officials, the group did not consistently monitor progress toward goals in part because it did not have sufficient leadership support from participating agencies. OPM agreed with our recommendation that it obtain leadership support for achieving its goals, and stated that it designed and will aid in establishing a Veterinary Medical Officer Executive Steering Committee that will, among other things, provide leadership and ensure progress toward stated goals. Federal Strategy Needed to Ensure Efficient and Effective Delivery of Services for Older Adults In May 2015, we reported that five federal agencies across four departments had one or more programs that operate within a system of home and community-based services (HCBS) and related supports that older adults often require to live as independently as possible in their homes and communities.
The Older Americans Act of 1965 requires the Administration on Aging, within HHS, to facilitate collaboration among federal agencies; however, the five agencies that fund these services and supports do so, for the most part, independently. To help ensure that agencies’ resources for HCBS and supports are used efficiently and effectively, we recommended that HHS facilitate development of a cross-agency federal strategy. HHS agreed with our recommendation. While much of our recent work has focused on the need for improved collaboration to address crosscutting issues, we have also reported on areas, including one high-risk area, in which agencies have made progress or are generally effectively coordinating. The text box below discusses two of these examples. Examples from GAO’s Work from 2013-2015 of Areas in Which Agencies Are Doing Well or Making Progress in Addressing Crosscutting Issues Coordination of DOD’s and NNSA’s Nuclear Weapons Stockpile Responsibilities Is Generally Consistent with Key Practices The Nuclear Weapons Council (Council) serves as the focal point of Department of Defense (DOD) and National Nuclear Security Administration (NNSA) interagency activities to maintain the U.S. nuclear weapons stockpile. In May 2015, we reported that the Council’s actions to coordinate DOD’s and NNSA’s nuclear weapons stockpile responsibilities are generally consistent with most of the key practices we have identified for collaborating across agency boundaries. For example, according to Council documents, the Council and its support committees meet on a regular basis to monitor, evaluate, and report on nuclear weapons stockpile issues. These meetings include periodic oversight briefings on nuclear weapon refurbishment programs. 
We made recommendations to the Secretaries of Defense and Energy to address two areas in which actions could be enhanced: (1) having up-to-date, written agreements and guidance that establish compatible policies, procedures, and other means to operate across agency boundaries and define roles and responsibilities, and (2) regularly including all relevant participants. The departments generally agreed with our recommendations. Progress Made on High-Risk Area of Sharing and Managing Terrorism-Related Information The federal government has made significant progress in promoting the sharing of information on terrorist threats, an area we designated as high risk in 2005. In February 2015, we reported that significant progress was made in this area by developing a more structured approach to achieving the Information Sharing Environment (Environment) and by defining the highest-priority initiatives to accomplish. In December 2012, the President signed the National Strategy for Information Sharing and Safeguarding (Strategy), which provides guidance on the implementation of policies, standards, and technologies that promote secure and responsible national security information sharing. In 2013, in response to the Strategy, the Program Manager for the Environment released the Strategic Implementation Plan for the National Strategy for Information Sharing and Safeguarding (Implementation Plan). The Implementation Plan provides a roadmap for the implementation of the priority objectives in the Strategy, assigns stewards to coordinate each priority objective, and provides time frames and milestones for achieving the outcomes in each objective. Each steward is responsible for ensuring that participating agencies communicate and collaborate to complete the objective, while also raising to senior management any issues that might hinder progress. Although progress has been made, more work remains to be done to fully address the issues identified in this high-risk area.
More Effective Implementation of GPRAMA and the DATA Act Would Help Address Crosscutting Issues If fully and effectively implemented, GPRAMA and the Digital Accountability and Transparency Act of 2014 (DATA Act) hold promise for helping to address crosscutting issues. For example, GPRAMA establishes a framework aimed at taking a more crosscutting and integrated approach to focusing on results and improving government performance. Effective implementation of GPRAMA could help clarify desired outcomes, address program performance spanning multiple organizations, and facilitate future actions to reduce, eliminate, or better manage fragmentation, overlap, and duplication. The DATA Act also offers the potential to help address crosscutting issues, as it requires agencies to publicly report information about any funding made available to, or expended by, an agency. These actions would allow executive branch agencies and Congress to accurately measure the costs and magnitude of federal investments. As we have previously reported, the DATA Act holds great promise for improving the efficiency and effectiveness of the federal government and for addressing persistent government management challenges. For example, as our annual reports on fragmentation, overlap, and duplication have highlighted, a complete picture of federal programs, along with related funding and performance information, is critical for addressing these issues. Data-driven reviews. Data-driven reviews have had a positive effect on collaboration among officials within agencies, but agencies are still missing opportunities to include stakeholders from other federal agencies and thus promote collaboration across agencies. 
Specifically, in our July 2015 report on data-driven reviews, we found that 21 of the 22 agencies we surveyed that reported holding in-person data-driven reviews said that their data-driven reviews have had a positive effect on collaboration among officials from different offices or programs within the agency. Despite the positive effects of reviews on internal collaboration, most agencies reported that relevant contributors from other federal agencies did not participate in their reviews. This situation has not changed since our July 2014 report on the role of the agency priority goal leader, in which we found that some goal leaders reported that goal contributors from other federal agencies, and even different components within the same federal agency, were not included in their data-driven reviews. As we previously reported in 2013, failing to include all goal contributors may lead to missed opportunities to have all the relevant parties apply their knowledge of the issues and participate in developing solutions to performance problems. As a result, in that 2013 report, we recommended that OMB work with the PIC and other relevant groups to identify and share promising practices to help agencies extend their performance reviews to include, as relevant, representatives from outside organizations that contribute to achieving their agency performance goals. As of June 2015, OMB had not taken action in response to this recommendation. OMB staff said that while agencies have found that at times it is useful to engage external stakeholders in improving program delivery, officials view data-driven reviews as internal agency management meetings and believe it would not always be appropriate to regularly include external representatives. We continue to believe that more active involvement from external contributors is needed, as appropriate, and continue to urge OMB to implement our recommended actions. Strategic reviews. 
Effective implementation of strategic reviews could help identify opportunities to reduce, eliminate, or better manage instances of fragmentation, overlap, and duplication because agencies are to identify the various organizations, program activities, regulations, tax expenditures, policies, and other activities that contribute to each objective, both within and outside the agency. Where progress in achieving an objective is lagging, the reviews are intended to identify strategies for improvement, such as strengthening collaboration to better address crosscutting challenges or using evidence to identify and implement more effective program designs. If successfully implemented in a way that is open, inclusive, and transparent—to Congress, delivery partners, and a full range of stakeholders—this approach could help decision makers assess the relative contributions of various programs to a given objective. Successful strategic reviews could also help decision makers identify and assess the interplay of public policy tools that are being used to ensure that those tools are effective and mutually reinforcing and that results are being efficiently achieved. To that end, in July 2015 we reported on seven practices that can help ensure agencies conduct effective strategic reviews. These practices include identifying the various strategies and other factors that influence outcomes and determining which are most important, identifying key stakeholders to participate in the review, and assessing the effectiveness in achieving strategic objectives and identifying actions to improve implementation and impact. Program inventories. One of the GPRAMA provisions that has the potential to help in addressing crosscutting issues is the requirement that agencies develop inventories of their programs, though in October 2014 we reported that several issues limit the usefulness of the inventories. 
As our prior work has highlighted, creating a comprehensive list, or program inventory, of federal programs, along with related performance and funding information, could provide decision makers with critical information that could be used to better address crosscutting issues. However, in developing the inventories, OMB allowed agencies to define their programs using different approaches, within a broad definition of what constitutes a program that is consistent with several characteristics. Moreover, OMB's guidance presents five possible approaches agencies could take to define their programs and notes that agencies could use one or more of those approaches. As a result, we found that the use of inconsistent approaches by agencies to define their programs limits the comparability of programs within agencies as well as government-wide. To illustrate the shortcomings of the inventories, in our report on program inventories we compared relevant agencies' inventories for various science, technology, engineering, and mathematics education and nuclear nonproliferation programs to programs identified in our past work. We were unable to identify in the inventories a large majority of the programs previously identified in our work: 9 of the 179 programs matched exactly, and 51 others were identified based on program descriptions. According to OMB staff, agencies used different approaches for valid and legitimate reasons, and a one-size-fits-all approach would not work for all agency inventories. While this may be true, OMB could do more to direct agencies to find common ground on similar programs. One of OMB's stated purposes for the inventories is to facilitate coordination across programs that contribute to similar outcomes. However, as we discovered through our interviews with agency officials involved with the inventory efforts, none of the agencies sought input from other agencies on how they defined and identified their programs.
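The two kinds of matches described above—exact name matches and matches inferred from program descriptions—can be illustrated with a small sketch. Everything here is hypothetical: the program names, the keyword representation of descriptions, and the majority-overlap threshold were invented for illustration and are not the method used in the underlying report.

```python
# Illustrative sketch (hypothetical data and threshold, not GAO's method):
# comparing previously identified programs against inventory entries,
# first by exact name match, then by a crude description-keyword overlap.

def match_programs(known_programs, inventory):
    """Return (exact_matches, description_matches).

    Both arguments map a program name to a set of description keywords.
    """
    exact = [name for name in known_programs if name in inventory]
    desc = []
    for name, keywords in known_programs.items():
        if name in inventory:
            continue  # already counted as an exact match
        for inv_name, inv_keywords in inventory.items():
            # Treat a majority keyword overlap as a description-based match.
            if len(keywords & inv_keywords) * 2 > len(keywords | inv_keywords):
                desc.append((name, inv_name))
                break
    return exact, desc

known = {
    "STEM Teacher Training": {"stem", "teacher", "training", "grants"},
    "Nuclear Detection R&D": {"nuclear", "detection", "research"},
}
inventory = {
    "STEM Teacher Training": {"stem", "teacher", "training", "grants"},
    "Radiological Detection Research": {"nuclear", "detection", "research",
                                        "radiological"},
}
exact, desc = match_programs(known, inventory)
```

The sketch makes the comparability problem concrete: when agencies name and scope programs differently, exact matching fails, and any description-based matching requires judgment calls about how much overlap counts as the "same" program.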
We concluded that if agencies worked together to more consistently define their programs, it could also help them identify where they have programs that contribute to similar outcomes, and therefore have opportunities to collaborate. We made several recommendations to OMB aimed at presenting a more coherent picture of all federal programs in agency inventories, but OMB has not yet taken action to address these recommendations. OMB planned to publish updated inventories in May 2014. However, OMB put the plans for updating the inventories on indefinite hold, and agencies have not published updated inventories with program-level budget information, in part due to the enactment of the DATA Act. OMB staff told us that they are considering how implementation of DATA Act requirements can be tied to the program inventories. Agency reporting for both sets of requirements is web-based, which could more easily enable linkages between the two and facilitate incorporating information from one into the other. The House and Senate versions of the Taxpayers Right-to-Know Act would require that program inventories also include, to the extent practicable, financial information required to be reported under the DATA Act for each program activity. If enacted, the Taxpayers Right-to-Know Act could result in detailed financial and performance information for federal programs, all in one place. In our July 2015 testimony on the implementation of the DATA Act, we recommended that OMB accelerate efforts to determine how best to merge DATA Act purposes and requirements with the GPRAMA requirement to produce a federal program inventory. OMB and Treasury did not comment on this recommendation. However, the Acting Deputy Director for Management and Controller at OMB stated at the July 2015 hearing that, because the staff who would be involved in working on the program inventories were heavily involved in DATA Act implementation, he would not expect an update of the program inventories to happen before May 2017.
OMB Has Increased Emphasis on Crosscutting Issues through Cross-Agency Priority Goal Guidance and Governance Cross-Agency Priority Goals Focus Attention on Crosscutting Issues GPRAMA's provisions for establishing and managing achievement toward cross-agency priority (CAP) goals make up another area in which the act offers the potential to address crosscutting issues. CAP goals, which GPRAMA requires OMB to develop in coordination with agencies, are intended to cover areas where increased cross-agency collaboration is needed to improve progress toward shared, complex policy or management objectives, such as improving our nation's cybersecurity. OMB established the first set of CAP goals for a 2-year interim period in February 2012. In March 2014, OMB released the next set of CAP goals, which it will update every 4 years, as required by GPRAMA. OMB is required to coordinate with agencies to publish progress updates on Performance.gov on a quarterly basis for each CAP goal. Of the 15 current goals, 7 are mission-oriented goals and 8 are management-focused goals (see figure 3). When selecting the current set of CAP goals, OMB staff told us they considered factors such as the administration's priorities, GPRAMA requirements, and our prior work. According to documents we reviewed and agency officials we spoke with, OMB also consulted with government-wide councils, agencies, and congressional committees when developing potential CAP goals. For example, OMB staff told us that they added the Insider Threat and Security Clearance Reform CAP goal based on congressional input. We are conducting an ongoing assessment of the current set of CAP goals, shown in figure 3 above, and selected 7 goals for our review.
The objectives of this review are to (1) assess the extent to which lessons learned from implementing the interim CAP goals were incorporated into the governance of the current CAP goals; (2) assess the extent to which GPRAMA requirements for assessing and reporting on CAP goal progress are included in the selected CAP goal quarterly progress updates; and (3) assess the initial progress in implementing the selected CAP goals. We plan to issue this work at the end of 2015 and will provide updated information on selected CAP goals' progress at that time. CAP goal leaders or their teams from each of the seven selected CAP goals told us that the CAP goal designation led to increased emphasis and leadership attention within their agencies and the Executive Office of the President (EOP) on the CAP goal area. OMB Is Taking Steps to Enhance the Governance and Implementation of Current CAP Goals OMB has made several improvements to its CAP goal guidance, in part in response to our prior work, which found that OMB should strengthen CAP goal reviews. Our June 2014 report on CAP goal reviews found, among other things, that not all reviews met leading practices for leadership involvement, participation by key officials, and follow-up. In response to our recommendations, OMB and the PIC took several actions, including updating the guidance to CAP goal teams and outlining the role of OMB leadership and the PIC. For example, the guidance specifies that OMB's Deputy Director for Management will chair implementation-focused meetings for the 8 management CAP goals approximately three times a year and OMB's Deputy Director for Budget will chair meetings for the 7 mission-focused CAP goals, as necessary. OMB staff confirmed that as of August 2015, such meetings had been held for 11 of the 15 CAP goals.
OMB staff also told us that regular senior-level meetings, consistent with OMB’s guidance for CAP goal reviews, also took place for 3 additional goals—Cybersecurity, Infrastructure Permitting Modernization, and Insider Threat and Security Clearance. No such reviews have yet taken place for the Climate Change CAP goal, according to OMB staff. OMB also changed the CAP goal governance structure to build capacity for goal implementation. For the interim CAP goal process, each CAP goal was assigned a goal leader from the EOP. For the current CAP goals, in addition to the EOP goal leader, OMB assigned a co-goal leader from key agencies to jointly manage and oversee the goal. For example, the Customer Service CAP goal leaders are OMB’s Associate Director for Personnel and Performance and the Acting Commissioner of the Social Security Administration (SSA). According to OMB, this new governance structure reflects agency leadership and expertise in CAP goal subject areas and more effectively leverages agency resources for crosscutting efforts and to promote greater coordination across multiple agencies. For example, the OMB goal leader for the Customer Service goal told us that it is helpful to have SSA in a leadership role because the agency provides its perspective on implementation efforts related to improving customer service, including piloting new activities before they are implemented government-wide. According to the second quarterly update of fiscal year 2015, SSA helped OMB to launch a pilot for a customer service regional community of practice in Denver, Colorado, to help field staff work across agencies and identify opportunities for joint trainings and joint recruiting efforts. We found that OMB and the PIC have also implemented strategies intended to build agency capacity to work across agencies. These strategies include: Providing ongoing guidance and assistance to CAP goal teams. 
The seven CAP goal teams we spoke with told us that OMB and the PIC staff are available on a regular basis to provide them with ongoing support, such as assisting with the regular collection of performance data and updating Performance.gov. For example, the Science, Technology, Engineering, and Mathematics (STEM) Education CAP goal leader from the National Science Foundation told us the PIC facilitated meetings to assist the goal team in developing milestones and performance indicators and in defining actionable next steps. Developing a template to enhance reporting and management. OMB provided CAP goal teams with a simplified reporting template to use for managing implementation of the goal and to meet quarterly reporting requirements on Performance.gov. According to OMB staff and the seven CAP goal teams we spoke with, the new reporting template makes it easier for goal leaders to review the updates and track progress from one quarter to the next. Piloting a government-wide White House leadership development program. In December 2014, the President announced a White House leadership development program, which is designed to provide selected civil servants (i.e., at the GS-15 level or equivalent) with rotational assignments across agencies to focus on managing CAP goals. According to OMB staff, the program will begin in October 2015, and the participants will spend the next year helping the White House and agencies work on implementing the CAP goals. CAP Goal Teams Reported Some Initial Progress and Are Developing Performance Indicators During the first year of implementation of the current set of CAP goals, CAP goal teams reported initial progress to the extent performance data were available.
For example, in its progress update for the second quarter of fiscal year 2015, published in June 2015, the Open Data CAP goal team reported an increase in the use of open government data, as indicated by a 25 percent quarterly increase in the number of visits to Data.gov, the government's platform for publishing its data. The goal team identified this indicator as a way to measure progress toward one of its goals: fueling economic growth and innovation. However, a few of the CAP goal teams we spoke with told us that in some cases performance data are not always available and that developing meaningful performance indicators to assess progress is a challenge. Our June 2014 assessment of the interim CAP goal process found that 6 of the 14 interim CAP goals did not report performance data, in some cases because the data needed to assess and report progress toward the goals were unavailable. As a result, we recommended that OMB direct CAP goal leaders to develop plans to identify, collect, and report the data necessary to demonstrate progress toward each goal. OMB and the PIC updated guidance directing CAP goal teams to establish performance targets and report on any performance indicators that are under development. According to the second quarterly update for fiscal year 2015, 5 of the 7 CAP goals we reviewed have indicators under development for some of their goals. For example, the Customer Service CAP goal team reported that it is developing a standardized performance indicator to measure improvements in citizen satisfaction across government, but the progress update does not provide any information on intermediate deliverables, roles and responsibilities, or time frames for completion. On the other hand, another progress update we examined did include information on the steps the CAP goal team is taking to develop a government-wide performance indicator.
The STEM Education CAP goal team reported that one of its working groups is developing common evaluation elements to be used across federal agencies, with an expected completion date in early 2016. The goal team also provided information on currently available data, near-term and long-term steps they are taking, and additional research needs. Our June 2014 recommendation remains open because OMB's updated guidance did not direct CAP goal teams to report on the steps they are taking to develop indicators and associated time frames. We have previously reported that tracking and monitoring progress for cross-agency activities is difficult for a number of reasons, such as competing mission priorities, incompatible processes and systems across agencies, and resource and staffing constraints. Given these challenges, when developing performance indicators for government-wide activities, it is important that CAP goal teams provide information on the steps they plan to take to successfully develop meaningful indicators, which will enable them to better track progress over time and hold contributors accountable for implementation.

The Executive Branch Has Still Not Taken Action to Assess the Outcomes of Tax Expenditures

For over 20 years, we have recommended greater scrutiny of the performance of tax expenditures—reductions in a taxpayer's tax liability that are the result of special exemptions and exclusions from taxation, deductions, credits, deferrals of tax liability, or preferential tax rates. If the Department of the Treasury's (Treasury) estimates are summed, approximately $1.2 trillion in revenue was forgone from the 169 tax expenditures reported for fiscal year 2014—nearly the same as discretionary spending that year. In June 1994 and again in September 2005, we recommended that OMB develop a framework for reviewing tax expenditure performance.
Periodic reviews could help determine how well specific tax expenditures work to achieve their stated purposes and how their benefits and costs compare to those of spending programs with similar goals. Given the significant investment tax expenditures represent, such reviews could help identify the most effective approaches for achieving results—vital information for federal decision makers in an era of scarce resources. Despite the strong case to evaluate the performance of tax expenditures, OMB has not yet developed a framework for doing so. Fully implementing GPRAMA requirements could provide the foundation of such a framework. GPRAMA requires OMB to identify tax expenditures that contribute to the CAP goals. In addition, since 2012, OMB’s guidance has directed agencies to identify tax expenditures that contribute to their APGs. Our past work reviewing initial GPRAMA implementation in 2012 and 2013 found that OMB and agencies rarely identified tax expenditures as contributors to these goals. As a result, we made several recommendations to improve efforts to identify and assess the contributions of tax expenditures toward executive branch goals. To date, OMB and agencies have taken little action to address these recommendations. OMB has directed agencies, beginning with its 2013 update to its guidance, to identify tax expenditures that contribute to each of their strategic objectives, in response to a recommendation we made in June 2013. However, our work reviewing GPRAMA implementation continues to find that OMB and agencies have not adequately identified the contributions of tax expenditures to CAP goals and APGs. For example, we found in June 2014 that although the goal leader for the Broadband interim CAP goal told us he was aware that tax deductions available to businesses making capital investments contributed to the goal by incentivizing investments in broadband, the deductions were not identified as contributors on Performance.gov. 
Given their government-wide purview and familiarity with administering the tax code, OMB and Treasury, respectively, are well positioned to assist agencies in identifying tax expenditures that relate to their goals. To that end, OMB's 2013 and 2014 Circular A-11 guidance noted that OMB would work with Treasury and agencies to identify where tax expenditures align with agency goals, and that this information was to be published on Performance.gov and included in relevant agency plans, beginning in February 2014. However, as we found in October 2014, according to OMB staff, they did not begin to engage Treasury on this effort until after agency plans were published and the website was updated. OMB staff told us in August 2015 that they had not yet made any progress on this effort. Moreover, OMB removed the language about working with Treasury and agencies to align tax expenditures with agency goals in the June 2015 update to its guidance. Although OMB staff told us they intend to focus on this effort, they did not provide us with any plans or time frames for doing so. We have previously identified additional steps OMB could take to help agencies consider the contributions of tax expenditures to the achievement of their goals. In our October 2014 report on the federal program inventory required under GPRAMA, we recommended that OMB include tax expenditures in the inventory. The federal program inventory is the primary tool for agencies to identify programs that contribute to their goals, according to OMB's guidance. By including tax expenditures in the inventory, OMB could help ensure that agencies properly identify the contributions of tax expenditures to the achievement of their goals. OMB staff neither agreed nor disagreed with these tax expenditure recommendations, and told us that until they had firmer plans for how program inventory and DATA Act implementation would be merged, they could not determine whether implementing these recommendations would be feasible.
As previously described, program inventory implementation remains on hold and OMB has not taken any actions to address these recommendations, according to OMB staff. Without including tax expenditures in the inventory, OMB forgoes an important opportunity to increase the transparency of tax expenditures and the outcomes to which they contribute.

Ensuring Performance Information Is Useful and Used by Managers Remains a Challenge, but OMB and Agencies Are Implementing Processes That May Lead to Improvements

Agencies Continue to Face Challenges in Using Performance Information for Decision Making

We have long reported that agencies are better equipped to address management and performance challenges when managers effectively use performance information for decision making. Unfortunately, agencies continue to struggle to do so. Our work has found that federal agencies can use performance information to identify performance improvement opportunities, improve program implementation and organizational processes, and make other important management and resource allocation decisions. However, our recent work shows that agencies continue to have problems effectively using performance information. Our September 2014 report on trends in agencies' use of performance information compared agencies' reported use of performance information from our 2007 and 2013 federal managers surveys. To analyze use, we developed a use of performance information index, based on a set of survey questions in both surveys that reflected the extent to which managers reported that their agencies used performance information for various management activities and decision making. The index runs from 1 to 5, where a 1 reflects that managers feel the agency engages "to no extent" and a 5 reflects that managers feel the agency engages "to a very great extent" in the use of performance information activities. Most agencies showed no statistically significant change in use during this period.
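The index construction and significance testing described above can be sketched in a few lines. This is a hedged illustration only: the question set, equal weighting of items, handling of skipped questions, and the Welch-style test with a 1.96 critical value are all assumptions for the example, not GAO's published methodology, and the response data are invented.

```python
# Illustrative sketch (not GAO's actual method): a 1-to-5 "use of
# performance information" index averaged from Likert-scale survey
# responses, plus a simple check of whether an agency's score changed
# significantly between two survey years.

def use_index(responses):
    """Mean of answered Likert items (1 = 'to no extent' ... 5 = 'to a
    very great extent'); skipped items are passed as None and ignored."""
    answered = [r for r in responses if r is not None]
    if not answered or any(not 1 <= r <= 5 for r in answered):
        raise ValueError("need answered Likert items in 1..5")
    return sum(answered) / len(answered)

def significant_change(before, after, critical=1.96):
    """Welch-style two-sample test on per-manager index scores.
    Returns (difference in means, significant?)."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):  # sample variance
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    diff = mean(after) - mean(before)
    se = (var(before) / len(before) + var(after) / len(after)) ** 0.5
    return diff, abs(diff / se) > critical

# Three managers' responses to four index questions (None = skipped).
managers_2013 = [[4, 3, 5, 4], [2, 3, None, 3], [3, 3, 4, 2]]
scores_2013 = [use_index(m) for m in managers_2013]
print(round(sum(scores_2013) / len(scores_2013), 2))  # agency score → 3.22

# A clear decline vs. essentially no change between survey years.
declined = significant_change([3.5, 3.6, 3.4, 3.7, 3.5] * 20,
                              [3.0, 3.1, 2.9, 3.2, 3.0] * 20)
stable = significant_change([3.5, 3.6, 3.4, 3.7, 3.5] * 20,
                            [3.5, 3.4, 3.6, 3.5, 3.6] * 20)
print(declined[1], stable[1])  # → True False
```

In this toy data, a half-point drop registers as significant while a 0.02-point change does not, mirroring the report's finding that most agencies showed no statistically significant change.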
As shown in figure 4, only two agencies experienced a statistically significant improvement in the use of performance information, while four agencies experienced a statistically significant decline. We have previously reported that in order for performance information to be useful, it must have certain characteristics. Specifically, agencies should ensure that performance information meets various users' needs for completeness, accuracy, consistency, timeliness, validity, and ease of use. Without complete and reliable performance information, Congress, other decision makers, and stakeholders at all levels of government are hampered in their ability to set priorities, identify improvement opportunities, and allocate resources. Our work over the past 2 years has identified weaknesses in each of the areas that affect the usefulness of performance information. For example, our September 2015 report on affordable rental assistance programs identified an incomplete picture of the performance of these programs as a problem. We found that the federal, state, and local jurisdictions involved in these efforts reported their performance to varying extents, but that there was incomplete information on their collective performance. Accordingly, we recommended that the Department of Housing and Urban Development (HUD), in consultation with an interagency working group on rental policy, work with states and localities to develop an approach for compiling and reporting on the collective performance of federal, state, and local rental assistance programs. Treasury and IRS, which are agency members of this working group, did not comment on this recommendation. HUD was concerned that compiling and reporting collective performance information would require significant funding and resources. We continue to believe the overall recommendation is valid.
Specifically, we noted that (1) our recommendation is to develop an approach for compiling and reporting such data as a first step, and (2) our recommendation is purposefully not prescriptive and allows HUD, in consultation with the working group, to design an approach. Additional examples of problems that affect the usefulness of performance information are illustrated in table 1.

Agency Implementation of Performance Reviews Should Improve the Use of Performance Information for Decision Making

Performance reviews required under GPRAMA and other guidance by their nature promote the use of performance information, as they focus on assessing performance in order to determine progress toward meeting goals and objectives.

Data-driven reviews. GPRAMA requires that reviews of progress on agency priority goals (APGs) be held at least quarterly; as this requirement is more fully implemented, the use of performance information for decision making should improve. OMB emphasized that frequent, data-driven performance reviews provide a mechanism for agency leaders to use data to assess the organization's performance, diagnose performance problems, identify improvement opportunities, and decide on next steps to improve performance. These practices are designed to shift the emphasis away from the passive collection and reporting of performance information to a model where performance information is actively used by agency officials to inform decision making, which is more likely to lead to performance improvements. In our July 2015 report on data-driven reviews, we found that PIOs reported that the reviews had positive effects on their agencies' use of performance information. Nearly all of the 22 agencies that reported holding in-person reviews responded that they always or often use their review meetings to assess progress on APGs and to identify goals at risk and strategies for performance improvement.
Additionally, as shown in figure 5, nearly all of these agencies also reported that their data-driven review meetings have had a positive effect on progress toward the achievement of their agency's goals and on their ability to identify and mitigate risks to goal achievement. In our discussions with officials from selected agencies, data-driven review meetings were described as venues for agency leaders and managers to assess progress toward key goals and milestones, the status of ongoing initiatives and planned actions, potential solutions for problems or challenges hindering progress, and additional support or resources needed to improve performance. Agency officials emphasized that discussions in their review meetings tend to focus on those goals or issues most in need of attention, where the achievement of a goal or milestone is at risk. In this way, reviews can serve as early warning systems and facilitate focused discussions on external, technical, or operational obstacles that may be hindering progress and the specific actions that should be taken to overcome them (see sidebar).

Increasing Online Registration through my Social Security

The Social Security Administration (SSA) has an APG to increase the number of registrations for its "my Social Security" portal by 15 percent per year in fiscal years 2014 and 2015. However, we reported in July 2015 that during SSA's 2014 third quarter review meeting, it became apparent to SSA leadership that the agency was not on track to achieve its target for this goal. SSA shifted focus to what could be done by offices throughout the agency to support efforts to increase the number of registrations using currently available or attainable resources and technology. To achieve this, SSA leadership had different offices within the agency specify the contributions they would make to help increase the number of registrations.
Since then, the agency's quarterly review meetings have been used to review and reinforce the commitments each office made. While SSA was unable to meet the registration goal for fiscal year 2014, according to SSA officials, the efforts undertaken as a result of the review process have helped generate an increase in registrations. Data from SSA's fiscal year 2015 first quarter review show a 46 percent increase in new account registrations in October 2014 compared to the number of new registrations in October 2013, and a 26 percent increase in December 2014 relative to December 2013.

Retrospective regulatory reviews. Retrospective reviews of existing regulations can also complement data-driven performance reviews. Reexamining the benefits and costs achieved after a regulation is implemented could provide useful data for these reviews. Despite the potential to leverage retrospective review information, agencies reported mixed experiences linking retrospective analyses to APGs. Agencies typically selected rules to review based on criteria such as the number of complaints or comments from regulated parties and the public. Including whether a regulation contributed to an APG as one of these criteria would help agencies prioritize retrospective analyses that could contribute useful information to APG assessments. We recommended that OMB's Office of Information and Regulatory Affairs direct in guidance that agencies take actions to ensure that contributions made by regulations toward the achievement of APGs are properly considered and improve how retrospective regulatory reviews can be used to help inform assessments of progress toward these APGs. OMB staff agreed with this recommendation and stated that the agency was working on strategies to help facilitate agencies' ability to use retrospective reviews to inform APGs, but as of June 2015 they had not provided additional details on their actions.

Strategic reviews.
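The year-over-year growth figures SSA reported are simple percent changes. A minimal sketch of that arithmetic, using invented registration counts (SSA's actual monthly counts are not given in this excerpt):

```python
# Hedged illustration: year-over-year percent change of the kind SSA
# reported. The registration counts are made up; only the formula matters.

def yoy_change(current, prior):
    """Percent change from the prior-year month to the same month this year."""
    return (current - prior) / prior * 100

# Hypothetical counts producing SSA's reported 46 percent October increase.
print(round(yoy_change(730_000, 500_000)))  # → 46
```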
Like data-driven reviews of APGs and retrospective reviews of regulations, agencies' annual strategic reviews also have potential to increase the use of performance information. For example, we reported in July 2015 that, to ensure effective strategic reviews, participants should use relevant performance information and evidence to assess whether strategies are being implemented as planned and whether they are having the desired effect, and to identify areas where action is needed to improve or enhance implementation and impact. Where progress in achieving an objective is lagging, the reviews are intended to identify strategies for improvement, such as strengthening collaboration to better address crosscutting challenges, building skills and capacity, or using evidence to identify and implement more effective program designs. Strategic reviews can also be used to identify any evidence gaps or areas where additional analyses of performance data are needed to determine effectiveness or to help set priorities. For example, we reported that for the Department of Homeland Security's (DHS) goal to safeguard and expedite lawful trade and travel, officials determined that sufficient progress was being made but identified gaps in monitoring efforts, such as a lack of performance measures related to travel. As a result, DHS officials are taking steps to develop measures to address the gaps.

Other Leading Practices and Evidence-Based Tools Also Have the Potential to Increase Use of Performance Information

We have in the past identified leading practices, such as demonstrating management commitment, that can enhance and facilitate the use of performance information. Our recent periodic survey of federal managers found that specific practices were related to greater use of performance information.
As described previously, we developed a use of performance information index, composed of questions from the 2007 and 2013 surveys, to analyze responses to our surveys of federal managers. We used statistical testing to determine if the relationship between additional survey questions, shown in figure 6, from the 2013 survey and an agency’s use of performance information index was statistically significant. We found that an agency’s average use of performance information index score increased when managers reported their agencies engaged to a greater extent in these practices as reflected in the survey questions. The questions that were statistically and positively related to the use of performance information index are also shown in figure 6. For example, we found that the strongest driver of the use of performance information was whether federal managers had confidence in that information’s validity. A greater focus on these practices may help agencies improve their use of performance information. Prompted by our work, several agencies—including the Departments of the Treasury and Labor, the National Aeronautics and Space Administration, and the Nuclear Regulatory Commission—asked us to provide them with underlying data for their agencies from the 2007 and 2013 managers’ surveys, so that they could conduct additional analyses of their agencies’ use of performance information. Some of the practices reflected in these questions are ones that we have identified elsewhere in our work as important. For example, demonstrated leadership commitment is an area we have emphasized in our work on government operations we identify as high risk. Our high-risk program serves to identify and help resolve serious weaknesses in areas that involve substantial resources and provide critical services to the public. 
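Returning to the survey analysis above, one way to identify the "strongest driver" of an outcome index is to rank candidate practices by the strength of their statistical relationship to it. The sketch below uses a Pearson correlation on invented per-manager scores; GAO's actual test statistic, data, and model are not specified in this excerpt, so treat everything here as an illustrative assumption.

```python
# Hedged sketch of a driver analysis: rank candidate management practices
# by the absolute strength of their correlation with the use-of-performance-
# information index. Data and method are illustrative, not GAO's.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-manager data: index score plus two practice ratings.
index_scores = [3.2, 2.8, 4.1, 3.5, 2.5, 4.4, 3.0, 3.8]
practices = {
    "confidence in data validity": [3.0, 2.5, 4.2, 3.6, 2.4, 4.5, 3.1, 3.7],
    "training provided":           [2.0, 3.5, 3.0, 2.8, 3.9, 3.1, 2.2, 3.0],
}

ranked = sorted(practices,
                key=lambda p: abs(pearson(index_scores, practices[p])),
                reverse=True)
print(ranked[0])  # strongest driver in this toy data
```

In this fabricated sample, confidence in the data's validity tracks the index closely while training ratings do not, echoing the report's finding that confidence in the information's validity was the strongest driver.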
Our experience with the high-risk list over the past 25 years has shown that one of the key elements needed to make progress in high-risk areas is demonstrated strong commitment and top leadership support. Additionally, providing, arranging, or paying for training may also be related to employee engagement. In July 2015, we reported that career development and training—as measured by the Federal Employee Viewpoint Survey question "I am given a real opportunity to improve my skills in my organization"—is one of the six practices that are key drivers of employee engagement. In addition, evidence-based tools—such as program evaluations and "pay for success" funding mechanisms—can also facilitate the use of performance information.

Program evaluations. Our recent work on program evaluations—systematic studies of program performance—found that agencies have varying levels of evaluation capacity. OMB has encouraged agencies to strengthen their program evaluations and expand their use in management and policy making, but our 2014 examination of agencies' ability to conduct and use program evaluations found it to be uneven. As part of our work, we surveyed and received responses from the PIOs at the 24 CFO Act agencies. About half (11) of the 24 agencies reported committing resources to obtain evaluations by establishing a central office responsible for evaluating agency programs, operations, or projects; on the other hand, 7 reported having no recent evaluations for any of their performance goals. Although agencies may not have many evaluations, more than a third reported using them to a moderate to very great extent to support several aspects of program management and policy making. While agency program evaluation capacity is mixed, some agencies reported increasing use of evaluations and capacity-building activities after GPRAMA was enacted.
About half of agencies reported increasing their use of evaluations for various activities since GPRAMA was enacted, as shown in figure 7. Additionally, half of the PIOs we surveyed reported that efforts to improve their capacity to conduct credible evaluations had increased at least somewhat over this time. Our work found that implementing certain GPRAMA requirements is among the reported actions agencies can take to improve their capacity to conduct evaluations and make use of evaluation information. About two-thirds of agencies (15) reported hiring staff with research and analysis expertise, and nearly half (11) reported that doing so was useful for improving agency capacity to conduct credible evaluations. Additionally, about half of PIOs reported that conducting data-driven reviews of APGs and holding goal leaders accountable for progress on APGs, both of which are required under GPRAMA, were moderately to very useful for improving agency capacity to make use of evaluation information in decision making. Engaging program staff was also rated very useful. Furthermore, in our June 2013 report on strategies to facilitate agencies' use of evaluation, we identified three strategies to facilitate the influence of evaluations on program management and policy: demonstrating leadership support of evaluation for accountability and program improvement; building a strong body of evidence; and engaging stakeholders throughout the evaluation process.

Pay for Success. Another evidence-based tool that promotes the use of performance information is Pay for Success (PFS), also known as Social Impact Bonds. PFS is a new contracting mechanism to fund prevention programs, in which investors provide capital to implement a social service—for example, to reduce recidivism by former prisoners.
If the service provider achieves agreed upon outcomes, the government pays the investor, usually with a rate of return, based on savings from decreased use of more costly remedial services, such as incarceration. In September 2015, we reported that stakeholders from the 10 PFS projects we reviewed said that PFS offers potential benefits to all parties to the project. For example, governments can implement prevention programs that potentially lead to reduced spending on social services and transfer the risk of failing to achieve outcomes to investors. Figure 8 shows the roles of organizations involved in PFS projects. PFS emphasizes the use of performance information because the government contracts for specific performance outcomes and generally includes a requirement that a program’s impact be independently evaluated. While the structures of the PFS examples we reviewed in our September 2015 report varied, stakeholders we interviewed reported that PFS oversight bodies established in the projects’ contracts regularly reviewed performance data during service delivery. Additionally, stakeholders told us that intermediaries and investors can bring performance management expertise to service providers and provide a rigorous focus on performance management and accountability. For example, an official we interviewed from one service provider noted that her organization invested in data entry and data analyst positions and has a team that collects, analyzes, and processes data that it submits to the intermediary. As federal agencies consider expanding their involvement in PFS, it becomes increasingly important for officials at all levels of government to collaborate to share knowledge and experiences. We found that while the federal government could play a role in addressing challenges in implementing PFS at the state and local levels of government, a formal means to collaborate and share lessons learned does not exist. 
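The outcome-contingent payment logic at the heart of PFS, as described above, can be sketched in a few lines. This is a hedged toy model: real PFS contracts use negotiated payment schedules tied to evaluator findings, and every figure and the flat-rate formula below are invented for illustration.

```python
# Hedged toy model of a PFS contract's payment rule: the government repays
# investors, with a return, only if the independently evaluated outcome
# target is met; otherwise investors absorb the loss.

def pfs_payment(invested, target_reduction, measured_reduction,
                rate_of_return=0.05):
    """Government's payment to investors; 0.0 if the outcome target is missed."""
    if measured_reduction < target_reduction:
        return 0.0  # risk of failure is transferred to the investors
    return invested * (1 + rate_of_return)  # principal plus agreed return

# The evaluator finds a 12% recidivism reduction against a 10% target,
# so the $1M investment is repaid with the agreed 5% return.
print(pfs_payment(1_000_000, 0.10, 0.12))
```

The `measured_reduction < target` branch is what distinguishes PFS from ordinary grants: the government pays only for verified outcomes, which is why the contracts generally require an independent impact evaluation.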
We recommended in our September 2015 report that OMB establish a formal means for federal agencies to collaborate on PFS. Having such a mechanism as the field grows would allow agencies to leverage the experience of early federal actors in the PFS field and would decrease the potential for missteps in developing projects due to information gaps and failure to learn from experience with this evolving tool of government. OMB concurred with this recommendation and is working with agencies to explore options for continued collaboration on PFS.

Agencies Continue to Face Challenges Linking Individual and Agency Performance to Results

Goal Leader Designations and Participation in Performance Reviews Have Positive Effects, but Agencies Are Missing Opportunities to Further Strengthen Performance and Accountability

Our previous work has highlighted the importance of creating a "line of sight" showing how unit and individual performance can contribute to overall organizational goals. At the individual level, an explicit alignment of daily activities with broader results is one of the defining features of effective performance management systems. This link reinforces employee engagement and accountability for achieving goals. GPRAMA and related guidance provide several mechanisms that can help individuals and agencies see this connection and help them contribute to agency and government-wide goals.

Goal Leader Designation, Performance Reviews, and Other Factors Have Positive Effects

GPRAMA requires agencies to identify an individual—the goal leader—responsible for the achievement of each APG, and related OMB guidance more recently directed agencies to identify a deputy goal leader to support each goal leader, as we had recommended in our July 2014 report on the role of the agency priority goal leader.
Additionally, data-driven reviews required under the act offer the opportunity to hold responsible officials, such as the goal leaders, personally accountable for addressing problems and identifying strategies for improvement.

Agency priority goal leaders. Our July 2014 review of the agency priority goal leader role found that most of the 46 goal leaders we interviewed thought the goal leader designation had positive effects, including providing accountability. Other benefits goal leaders identified as resulting from the position include that it provided greater visibility for the goal, facilitated coordination, heightened focus on the goal, and improved access to resources. Goal leaders told us that several mechanisms promote accountability for goal progress, including personal reputation and accountability to agency and other leadership. For example, the Assistant Secretary of Labor for Occupational Safety and Health, who was the goal leader for two APGs for 2012 and 2013 (Reduce Worker Fatalities and Develop a Model Safety and Return-to-Work Program), told us that the interest of Congress and the Department of Labor's Inspector General, in their respective oversight roles, both operated to hold him accountable. Deputy goal leaders supported day-to-day goal management and provided continuity during times of goal leader transition. However, we found that nearly a quarter (11 of 46) of the goal leaders we interviewed had not assigned an official to the deputy goal leader position, a designation that provides clear responsibility and additional accountability for goal achievement. Because of the importance of this position, we recommended that OMB work with agencies to ensure that they appointed a deputy goal leader to support each agency priority goal leader.
In response to our recommendation, in April 2015, the Director of OMB issued a memorandum stating that, in addition to identifying a goal leader for each APG, agencies must identify a senior career leader to support implementation. OMB also updated its 2015 Circular A-11 guidance to reflect this requirement.

Data-driven and strategic reviews. Our work has found that regular, in-person data-driven reviews are an effective mechanism for holding goal leaders and other goal contributors individually accountable for goal progress. In our July 2015 examination of data-driven reviews, 22 agencies we surveyed reported holding in-person data-driven reviews. Of these, 21 reported that their data-driven reviews have had a positive effect on their agency's ability to hold goal leaders and other officials accountable for progress toward goals and milestones. According to officials from selected agencies, the transparency of performance information and a review process that ensures it receives appropriate scrutiny produce an increased sense of personal accountability for results. For example, officials from the Department of Commerce told us that they are using their regular review meetings with bureau heads and goal leaders to support a cultural change throughout the agency and reinforce accountability for performance at multiple levels of the organization.

NASA Officials Report that Strategic Reviews Encourage Accountability

We reported in July 2015 that the National Aeronautics and Space Administration's (NASA) experience reviewing strategic objectives illustrates their potential for promoting accountability. Agency officials told us that their chief operating officer (COO) determines final ratings for strategic objectives during a briefing attended by the agency's performance improvement officer, strategic objective leaders, and relevant program staff.
According to NASA officials, the personal involvement of the COO encouraged accountability for results and performance improvements. Similar to data-driven reviews, our work on agency strategic reviews noted that they also have potential for promoting individual accountability for organizational results. In our July 2015 report, in which we identified and illustrated practices that facilitate effective strategic reviews, we reported that accountability for results is one of the key features for planning reviews. Specifically, we stated that the focus of accountability should be on the responsible objective leader’s role in using evidence to credibly assess progress in achieving strategic objectives. Agency leaders should hold objective leaders and other responsible managers accountable for knowing the progress being made in achieving outcomes and, if progress is insufficient, understanding why and having a plan for improvement. If evidence is insufficient for assessing progress, managers should be held accountable for improving the availability and quality of the evidence so that it can be used effectively for decision making. Managers should also be held accountable for identifying and replicating effective practices to improve performance (see sidebar). Employee engagement. Research on both private- and public-sector organizations has found that increased levels of engagement—generally defined as the sense of purpose and commitment employees feel toward their employer and its mission—can lead to better organizational performance. Our July 2015 report on employee engagement identified specific practices that drive employee engagement. Specifically, our regression analysis of selected Federal Employee Viewpoint Survey (FEVS) questions identified six practices as key drivers of employee engagement, as measured by OPM’s Employee Engagement Index. These practices are detailed in figure 9. 
Of these six, having constructive performance conversations is the strongest driver of employee engagement. Performance appraisal systems, which include performance plans, are a powerful mechanism for promoting alignment with and accountability for organizational goals. There are several benefits to aligning performance with results, including increased use of performance information. As shown in the textbox, our work has found problems with oversight and accountability in the Department of Veterans Affairs’ (VA) Health Care System. In response to these and other problems, Congress has taken action, such as passing the Veterans Access, Choice, and Accountability Act of 2014, to hold senior VA leadership accountable for performance and is considering other means of increasing accountability. Inadequate Oversight and Accountability in VA’s Health Care System Despite substantial budget increases in recent years, for more than a decade there have been numerous reports—by GAO, VA’s Office of the Inspector General, and others—of VA facilities failing to provide timely health care. In some cases, the delays in care or VA’s failure to provide care at all have reportedly resulted in harm to veterans. These and other serious and long-standing problems with the timeliness, cost-effectiveness, quality, and safety of veterans’ health care led to our designation of VA’s health care system as a high-risk area in 2015. To facilitate accountability for achieving its organizational goal of ensuring that veterans have timely access to health care, VA included measures related to wait times for primary and specialty care appointments (1) in the performance contracts for senior leaders and (2) in the agency’s annual budget submissions and performance and accountability reports.
However, we found that data used to monitor performance on these measures were unreliable and that inconsistent implementation of VA’s scheduling policies may have resulted in increased wait times or delays in scheduling outpatient medical appointments at VA facilities. Scheduling staff in some locations told us that they had changed desired dates for medical appointments to show that wait times were within VA’s performance goals. The VA Office of the Inspector General has published reports with similar findings. VA has since announced that it has modified its performance measures that relate to wait times and removed measures related to wait times from senior leaders’ performance contracts. Goal leaders’ Senior Executive Service performance plans. Although goal leaders told us that the designation provides accountability, we found their Senior Executive Service (SES) performance plans generally did not reflect their responsibility for goal achievement. As part of our work on the role of the agency priority goal leader in July 2014, we reviewed the performance plans of all of the goal leaders and deputy goal leaders for the 47 APGs in our sample, where applicable. These performance plans covered a range of responsibilities, but many did not reference the APGs for which the goal leaders and deputies were responsible. Additionally, the vast majority (all but one of the 32 goal leader plans and one of the 35 deputy goal leader plans) failed to link performance standards to goal outcomes. Failing to fully reflect goal achievement in performance plans is a missed opportunity to ensure that goal leaders and deputies are held accountable for goal progress and to reinforce the link between individual performance and organizational goals. Because APGs by definition reflect the highest priorities of each agency, accountability for goal achievement is especially important.
To ensure goal leader and deputy goal leader accountability, we recommended that the Director of OMB work with agencies to ensure that goal leader and deputy goal leader performance plans demonstrate a clear connection with APGs. As of June 2015, OMB had not yet taken action in response to this recommendation. Senior executive ratings. Our recent work has also raised questions about agency processes for rating senior executive performance, which can promote alignment with and accountability for organizational goals. For our January 2015 report on SES ratings and performance awards, we reviewed performance award data from the 24 CFO Act agencies and examined performance appraisal systems at five case study agencies. Specifically, we looked at the performance appraisal system that the Office of Personnel Management (OPM) and other agency representatives developed in 2012. This system is intended to provide a more consistent and uniform framework for SES evaluation. We found that the five agencies we studied in detail had all linked SES performance plans with agency goals, a key practice for effective performance management systems, and a feature that promotes the line of sight between individual performance and organizational goals. Disparities in Agencies’ Executive Ratings Distributions We reported in January 2015 that there is wide variation in SES ratings distributions among agencies. For example, in fiscal year 2013 the Department of Defense rated 30.6 percent of its SES employees at the highest rating level, while the Department of Justice rated 73.6 percent of its SES employees at this level. As we have previously reported, one of the key practices in promoting a line of sight between individual performance and organizational goals is making meaningful distinctions in performance.
However, although one of the primary purposes for establishing the new appraisal system included increasing equity in ratings across agencies, we found disparities in rating distributions and pay (see sidebar). This disparity in ratings between agencies raises questions about whether agencies are consistently applying performance definitions and whether performance ratings are meaningful. We recommended that the Director of OPM, whose agency certifies—with OMB concurrence—SES performance appraisal systems, consider the need for refinements to the performance certification guidelines addressing distinctions in performance and pay differentiation. OPM partially concurred with the recommendation, though we maintain that additional action should be considered to ensure equity in ratings and performance awards across agencies. As of June 2015, OPM officials said that they had convened a cross-agency working group that developed several recommendations that are intended to make agencies’ justifications for high SES ratings more transparent. Senior executives’ use of performance information for decision making. Aligning SES performance with results is a key feature of effective performance management, and our recent work has found that it also may promote use of performance information. As we found in our September 2014 report on trends in the use of performance information, managers’ responses to a question we asked them on aligning an agency’s goals, objectives, and measures were significantly related to the use of performance information, controlling for other factors. Specifically, an increase in the extent to which managers aligned performance measures with agency-wide goals and objectives was associated with an increase on the five-point scale we used for our use index. However, our analysis also found that there was a gap between SES and non-SES managers in reported use of performance information.
SES managers government-wide and at nine agencies scored statistically significantly higher than the non-SES managers at those agencies. As shown in figure 10, the Departments of Homeland Security and Veterans Affairs had the largest gaps in use of performance information between their SES and non-SES managers. Agencies Have Long-standing Difficulties in Measuring Performance of Selected Program Types A critical element in an organization’s efforts to manage for results is its ability to set meaningful goals for performance and to measure progress toward these goals. GPRAMA reinforces the need to set meaningful goals by directing agencies to establish a balanced set of performance measures, such as output, outcome, customer service, and efficiency, across program areas. Agencies have been responsible for measuring program outcomes since GPRA was enacted in 1993, but still have difficulty developing and using performance measures. As we reported nearly 20 years ago, performance measures should demonstrate to each organizational level how well it is achieving its goals. As shown in the illustrations in the textbox, however, agencies continue to make insufficient progress in establishing and using outcome-oriented performance measures. Examples of Agency Difficulties in Developing and Using Outcome Measures Outcome-Oriented Metrics and Goals Are Needed to Gauge DOD’s and VA’s Progress in Achieving Interoperability of Electronic Health Records Systems The Departments of Defense (DOD) and Veterans Affairs (VA) operate two of the nation’s largest health care systems, serving approximately 16 million veterans and active duty service members and their beneficiaries, at a cost of more than $100 billion a year.
With guidance from the Interagency Program Office (IPO) that is tasked with facilitating the departments’ efforts to share health information, the two agencies have taken actions to increase interoperability between their electronic health record systems. Developing electronic health records is particularly important for optimizing the health care provided to military personnel and veterans, as they and their families tend to be highly mobile and may have health records residing at multiple medical facilities. In August 2015, we reported that the IPO had taken steps to develop process metrics intended to monitor progress of these efforts, but had not yet specified outcome-oriented metrics or established related goals that are important to gauging the impact that interoperability capabilities have on improving health care services for shared patients. Using outcome-based metrics could provide DOD and VA a more accurate, ongoing picture of their progress toward achieving interoperability and the value and benefits generated. We recommended that DOD and VA, working with the IPO, establish a time frame for identifying outcome-oriented metrics; define related goals to provide a basis for assessing and reporting on the status of interoperability; and update IPO guidance to reflect the metrics and goals identified. DOD and VA concurred with our recommendations. Measuring Progress in Addressing Incarceration Challenges The federal inmate population has increased more than eight-fold since 1980, and the Department of Justice (DOJ) has identified prison crowding as a critical issue since 2006. In June 2015, we reported that DOJ had implemented three key initiatives to address the federal incarceration challenges of overcrowding, rising costs, and offender recidivism. The department had several early efforts underway to measure the success of these initiatives, but we concluded that its current approach could be enhanced. 
For example, the Clemency Initiative is intended to encourage federal inmates who meet criteria that DOJ established to apply to have their sentences commuted (reduced) by the President. DOJ tracked some statistics related to this initiative, such as the number of petitions received and the disposition of each, but it did not track how long, on average, it took for petitions to clear each step in its review process. Such tracking would help DOJ identify processes that might be contributing to any delays. Without this tracking, DOJ cannot be sure about the extent to which the additional resources it is dedicating to this effort are helping to identify inmate petitions that meet DOJ’s criteria and expedite their review. We recommended that the Attorney General direct the Office of the Pardon Attorney to (1) track how long it takes, on average, for commutation of sentence petitions to clear each step in the review process under DOJ’s control, and (2) identify and address, to the extent possible, any processes that may contribute to unnecessary delays. DOJ concurred with the recommendation and stated that it would consider our findings and recommendations during the course of its ongoing efficiency reviews. Measuring Effectiveness of Military Sexual Assault Prevention Efforts Our recent work has identified issues in establishing goals and metrics to measure the effectiveness of efforts to reduce incidents of sexual assault in the military, which according to the Department of Defense (DOD) represent a significant and persistent problem within the department. For example, in March 2015, we reported that DOD had not established goals or metrics to gauge sexual assault-related issues for male service members. DOD’s Sexual Assault Prevention and Response Office has had three different general officers in the director position since 2011.
Given this high level of turnover, we stated that establishing goals and metrics is key to institutionalizing efforts to address sexual assault of male service members. We recommended that DOD develop clear goals and associated metrics to drive the changes needed to address sexual assaults of males and articulate these goals. DOD agreed with this recommendation. Measuring the performance of different program types—such as grants, regulations, and tax expenditures—is a significant and long-standing government-wide challenge and one we have addressed in our previous work. In our June 2013 report on initial GPRAMA implementation, we also reported that agencies have experienced common issues in measuring various types of programs. We recommended that the Director of OMB work with the PIC to develop a detailed approach to examine these difficulties, including identifying and sharing any promising practices. Additionally, our July 2014 report on the role of the agency priority goal leader noted that several APGs we examined identified certain program types, such as grants, as key contributors to their goals. However, goal leaders and their deputies lacked the means to identify and share information with other goal leaders who were facing similar challenges or were interested in similar topics. We recommended that the Director of OMB work with the PIC to further involve agency priority goal leaders and their deputies in sharing information on common challenges and practices related to APG management. OMB and PIC staff told us in June 2015 that they have taken some actions to facilitate information sharing on common topics. For example, the PIC developed a law enforcement working group, which aims to address challenges in measuring law enforcement functions. Despite these steps, additional actions are needed to fully implement these recommendations and address this long-standing issue. We will continue to monitor OMB’s and the PIC’s efforts.
Illustrative examples from our recent work that show how agencies need to make better progress in measuring certain program types are provided in table 2. One program type—direct service—is an area in which our recent work has highlighted performance measurement problems at multiple agencies. Our October 2014 report on customer service standards examined how selected agencies are using customer service standards and measuring performance against those standards. We reviewed the customer service standards for six federal programs and compared them to key elements of effective customer service standards, which we identified based on our review of GPRAMA and executive orders that focused on providing greater accountability, oversight, and transparency. Two of the key elements of customer service standards are that they (1) include targets or goals for performance, and (2) include performance measures. We found that three of the six programs did not have customer service standards that met these two elements. For example, we reported that because the National Park Service (NPS) did not have performance goals or measures directly linked to those goals, the agency is unable to determine the extent to which the standards are being met agency-wide or to identify strategies to close performance gaps. We made several recommendations related to improving the NPS’s and other agencies’ customer service standards, including that the Department of the Interior (of which NPS is a part) ensure NPS standards include (1) performance targets or goals, and (2) performance measures. In July 2015, NPS officials reported that they had made plans to implement these recommendations. Additionally, OMB is focusing on improving the federal government’s customer service by developing a related CAP goal.
According to information on Performance.gov, as part of its work on the Customer Service CAP goal, the administration is working to streamline transactions, develop standards for high impact services, and utilize technology to improve the customer experience. We will be assessing OMB’s progress in implementing this CAP goal as part of our ongoing review. OMB and Agencies Have Not Clearly Communicated Key Performance Information, but More Effective Implementation of GPRAMA Requirements Would Improve Transparency To operate as effectively and efficiently as possible and to make difficult decisions to address the federal government’s fiscal and performance challenges, Congress, the administration, and federal managers must have ready access to reliable and complete financial and performance information—both for individual federal entities and for the federal government as a whole. However, in our work since 2013 we have identified areas in which agencies have not clearly reported information related to billions of dollars in government spending (see textbox). Examples of Agencies Not Clearly Reporting Information on Government Spending Agencies Fail to Properly Report Over $600 Billion in Assistance Awards The Federal Funding Accountability and Transparency Act (FFATA) was enacted in 2006 to increase accountability and transparency over the more than $1 trillion spent by the federal government on contracts, grants, loans, and other awards annually. The act required OMB to establish a website, USASpending.gov, that contains data on federal awards, and to issue guidance on agency reporting requirements for the website. The website is to promote transparency in government spending by providing the public with the ability to track where and how federal funds are spent.
However, we reported in June 2014 that although agencies generally reported required contract information, they did not properly report information on assistance awards (e.g., grants or loans), totaling approximately $619 billion in fiscal year 2012. In addition, we found that few awards on the website contained information that was fully consistent with agency records. We estimated with 95 percent confidence that between 2 and 7 percent of the awards contained information that was fully consistent with agencies’ records for all 21 data elements examined. We concluded that without accurate data, the usefulness of USASpending.gov will be hampered. To improve the reliability of information on USASpending.gov, we recommended that OMB (1) clarify guidance on reporting award information and maintaining supporting records, and (2) develop and implement oversight processes to ensure that award data are consistent with agency records. OMB generally agreed with our recommendations but, as of August 2015, had not taken actions to address them. Full implementation of the DATA Act, which amended FFATA and which OMB is currently working on, may address these recommendations. USDA Performance Reporting Does Not Reflect Effects of Approximately $3 Billion in Spending on Broadband Access to affordable broadband Internet is seen as vital to economic growth and improved quality of life, yet deployment in rural areas tends to lag behind urban and suburban areas. The American Recovery and Reinvestment Act of 2009 (Recovery Act) appropriated funding for the Broadband Initiatives Program (BIP), a Department of Agriculture (USDA) Rural Utilities Service (RUS) program to fund broadband projects primarily to serve rural areas. However, we reported in June 2014 that RUS has reported limited information on BIP’s impact since awarding funds to projects, and that BIP results are not tracked in USDA’s annual performance reporting. 
As a result, RUS has not shown how much the program’s approximately $3 billion in project funding has affected broadband availability. GPRAMA directs agencies to establish performance goals in annual performance plans and to report on progress made toward these goals in annual performance reports. However, USDA did not update or include BIP results as compared to the related performance goals in its annual performance reports. We concluded that without an updated performance goal and regular information reported on the results of BIP projects, it is difficult for USDA, RUS, and policymakers to determine the impact of Recovery Act funds or BIP’s progress on improving broadband availability. We recommended that the Secretary of Agriculture include BIP performance information as part of USDA’s annual performance plan and report by comparing actual results achieved against the current subscribership goal. USDA agreed with our recommendation, and stated that it planned to modify its next annual performance plan and report to include the number of subscribers receiving new and improved service as a result of the program. Our work has also identified other problems with transparency. As described in the textbox below, only one of the six federal services for which we reviewed customer service standards had standards that were easily publicly available. Most Agencies Reviewed Did Not Make Customer Service Standards Easily Publicly Available Our recent work has also found issues with transparency related to agencies’ customer service standards. In October 2014 we identified key elements of customer service standards—which should inform customers as to what they have a right to expect when they request services—that would allow agencies to better serve the needs of their customers by providing greater accountability, oversight, and transparency. One of the elements that we identified is that customer service standards be easily publicly available. 
Easily available standards help customers know what to expect, when to expect it, and from whom. As part of our work, we assessed the extent to which customer service standards at six services within five federal agencies (including two services within one of those agencies) included key elements, including easily publicly accessible standards. We found that only one of these services had standards that were easily available to the public. That service—Customs and Border Protection (CBP) inspection of individuals—posts its standards on its website as well as at points of service in entry ports, field offices, and headquarters, according to CBP officials. The other five services, we found, did not make their standards easily accessible to the public. For example, we had reported in 2010 that the Forest Service did not make its customer service standards available to its customers because officials believed the standards would not be helpful: in their view, visitors evaluate such things as the cleanliness of restrooms against their own expectations rather than against standards set forth by the Forest Service. In 2014, Forest Service officials told us that there has been no change since 2010. However, according to executive orders and guidance, standards are specifically intended to inform the public, and should be publicly available. We recommended that the Department of Agriculture (of which the Forest Service is a part) ensure that the Forest Service’s standards are easily publicly available, among other things. In addition, we made similar recommendations to the other services that had not made their standards easily accessible to the public. Although GPRAMA requirements have the potential to increase transparency of performance information, we have found mixed progress in implementing these requirements. Program inventories.
GPRAMA’s requirements for program inventories have the potential to improve transparency of performance information, but, as previously described, our October 2014 report identified several issues that affect these inventories’ usefulness. For example, although GPRAMA requires agencies to describe each program’s contribution to the agency’s goals, we found instances where agencies omitted that information. Ensuring agencies illustrate this alignment would better explain how programs support the results agencies are achieving. As stated earlier, OMB has put plans for updating the inventories on hold, in part due to the enactment of the DATA Act, which is intended to increase accountability and transparency in federal spending by requiring agencies to publicly report information about any funding made available to, or expended by, an agency. As noted in our July 2015 testimony on DATA Act implementation, effective implementation of both the DATA Act and GPRAMA’s program inventory provisions, especially the ability to crosswalk spending data to individual programs, could provide vital information to assist federal decision makers in addressing the significant challenges the government faces. We identified a potential approach OMB could take in merging program inventory efforts with DATA Act implementation. That is, OMB could explore ways to improve the comparability of program data by using tagging or similar approaches that allow users to search by key words or terms and combine elements based on the user’s interests and needs. This merging could help ensure consistency in the reporting of related program-level spending information. As mentioned previously, OMB does not expect an update of program inventories to happen before May 2017. Other planned changes to the program inventories could also improve the transparency of their information. 
For example, OMB staff told us that they also planned to present the 24 program inventories during the planned May 2014 update in a more dynamic, web-based format. This approach, too, has been put on hold. A web-based approach could make it easier to tag and sort related or similar programs. For instance, OMB plans to have agencies tag each of their programs by one or more program type in a future iteration of the inventory to provide a sorting capability for identifying the same type of program. By providing a sorting mechanism by program type, OMB could help address one of our open recommendations, described previously, that OMB work with the PIC to develop a detailed approach to examine common, long-standing difficulties agencies face in measuring the performance of various types of federal programs and activities. A sorting mechanism could help by identifying (1) all programs in a given type, and (2) of those programs, any of which have developed strategies to effectively overcome measurement challenges. Additionally, in its guidance for the 2014 update before it was put on hold, OMB intended for agencies to link each program to the existing web pages on Performance.gov for strategic goals, strategic objectives, APGs, and CAP goals. According to OMB staff, once they move forward with the next inventory update and move to a web-based presentation, users will be able to sort programs by the goals to which they contribute. This approach also would allow users to identify programs that contribute to broader themes on Performance.gov. The themes generally align with budget functions from the President’s Budget and include administration of justice; general science, space, and technology; national defense; and transportation, among others. Currently, the themes can be used to sort goals on Performance.gov that contribute to those broad themes. Major management challenges. 
Another area in which our work has identified problems with transparency and communication of performance information is related to the GPRAMA requirement that agencies report in their annual performance plans key performance information related to their major management challenges, including performance goals, milestones, indicators, and planned actions that they have developed to address such challenges. Major management challenges include programs or management functions, within or across agencies, that have greater vulnerability to fraud, waste, abuse, and mismanagement, such as those issues identified by GAO as high risk, where a failure to perform well could seriously affect an agency’s or the government’s ability to achieve its mission or goals. We have ongoing work, which we plan to issue in late 2015, examining how federal agencies are addressing their major management challenges. As of August 2015, we found that agencies generally did not report key performance information about their major management challenges in their annual performance plans and reports in a transparent manner. For example, while some agencies told us that they had internal plans for addressing their major management challenges, 12 of 24 agencies that issued agency performance plans or similar documents for fiscal year 2015 did not publicly report planned actions for addressing their major management challenges. While agencies’ reasons for not reporting complete information varied (such as concerns about readability and redundancy with similar topics in the performance plan), agencies told us that OMB’s guidance appeared to give them flexibility on what information they needed to report. We will provide updated information on major management challenges in our forthcoming report. CAP goals.
Another area in which transparency is important is in communicating progress on performance goals, but our June 2014 report on CAP goal reviews found that the quarterly updates for the 14 interim CAP goals did not always provide a complete picture of progress. For each of the CAP goals, GPRAMA requires OMB to coordinate with agencies to establish annual and quarterly performance targets and milestones and to report quarterly the results achieved compared to the targets. The updates we reviewed were inconsistent, and some were missing key performance information, such as performance targets, milestone due dates, and key contributors to the goals, that was needed to track progress toward the goals. In one case, we were told that the data needed to track progress toward a goal were not available. Staff from the Real Property interim CAP goal team told us that they did not have data available for tracking progress toward the goal of holding the federal real property footprint at its fiscal year 2012 baseline level. In addition, we found that in some cases information on the organizations and program types that contributed to an interim CAP goal, such as relevant tax expenditures, was missing. We concluded that the incomplete information in many of the updates provided a limited basis for ensuring accountability for progress toward targets and milestones for those interim CAP goals and recommended that OMB take a number of actions to ensure that all key contributors were identified and that quarterly and overall progress toward CAP goals could be fully reported. 
These included identifying all key contributors to the achievement of the goals; identifying annual planned levels of performance and quarterly targets for each of the goals; developing plans to identify, collect, and report data necessary to demonstrate progress being made toward each of the goals or developing an alternative approach for tracking and reporting on progress quarterly; and reporting the time frames for the completion of milestones, the status of milestones, and how milestones are aligned with strategies or initiatives that support the achievement of each goal. As described previously, OMB has increased its emphasis on CAP goal governance for the current set of CAP goals, and OMB has taken actions to address concerns our work raised about CAP goal reviews. One of the actions OMB has taken, together with the PIC, was to develop revised guidance, in the form of a template, for CAP goal teams to use to report quarterly progress updates for these goals. This template responded to three of our recommendations related to CAP goal progress reporting by including a section for the CAP goal teams to identify programs that contribute to their goals; directing the teams to list targets for the key indicators they use to track progress; and directing the teams to establish work plans with a list of specific milestones that should include milestone due dates and information on milestone status. The template also indicated that goal teams can organize milestones by each identified sub-goal, aligning specific activities with the objectives to which they contribute. In addition, the PIC provided guidance in January 2015 that further addressed two of our recommendations. The guidance directs CAP goal teams to report all agencies, organizations, programs, activities, regulations, tax expenditures, policies, and other activities that contribute to each goal. 
It also specifically notes that GPRAMA requires the teams to report on performance against targets and states that quarterly progress updates should identify areas where progress has exceeded expectations or been slower than expected or where targets for performance measures have been missed. The actions that OMB and the PIC have taken to address our recommendations have helped to improve the transparency of the CAP goal progress updates. For example, nearly all of the quarterly updates released in June 2014 for the second quarter of fiscal year 2014 included milestone due dates and information on their status. Many (8 of 15) of the lists of milestones aligned with specific sub-goals. Quality of performance information. GPRAMA requirements for reporting on the quality of performance information also have the potential to increase transparency, as they require agencies to publicly report on how they are ensuring the accuracy and reliability of the performance information they use to measure progress toward APGs and performance goals. Specifically, for each APG, agencies must provide information addressing five requirements to OMB for publication on Performance.gov. Additionally, agencies must address all five requirements for performance goals, which include APGs. Our September 2015 report on the quality of publicly reported performance information found limited information on Performance.gov on the quality of performance information used to assess progress on six selected agencies’ 23 APGs. In response to our review, OMB updated its A-11 guidance in June 2015 to direct agencies to either provide this information for publication on Performance.gov on how they are ensuring the quality of performance information for their APGs, or provide a hyperlink from Performance.gov to an appendix in their performance report that discusses the quality of their performance information. 
OMB staff stated that this information will likely not be available until agencies start reporting on the next set of APGs (for fiscal years 2016 and 2017). This is because OMB will need to update a template that agencies complete for their Performance.gov updates. Further, the agencies we reviewed generally did not describe how they addressed all five requirements for their individual APGs in their performance plans and reports. While all six agencies described how they ensured the quality of their performance information overall, we found that only DHS’s performance plans and reports included discussions about performance information quality addressing all five GPRAMA requirements, as shown in table 3 and described in more detail in the textbox below. DHS Addressed GPRAMA Requirements in Explaining How Performance Information Quality is Ensured for All Agency Priority Goals In September 2015, we reported that of the 23 APGs in our sample from six agencies, we could only find discussions about performance information quality that addressed all five of the GPRAMA requirements for three APGs, which belonged to DHS. DHS presented information about performance information quality for all three of its APGs in its performance plans and reports. Specifically, DHS published an appendix to its performance plans and reports with detailed discussions of performance information quality for 10 performance measures used to measure progress on these APGs. For each measure, DHS’s appendix described the related program, the scope of the data, the source and collection methodology for the data, and an assessment of data reliability. In our September 2015 report, we recommended that all six of the agencies in our review work with OMB to describe on Performance.gov how they are ensuring the quality of their APGs’ performance information and that the agencies, except for DHS, also describe this in their annual performance plans and reports. 
We also noted that to help improve the reliability and quality of performance information, OMB and the PIC established the Data Quality Cross-Agency Working Group in February 2015. The group could serve as a vehicle for disseminating good practices in public reporting on data quality. As a result, we also recommended that OMB, working with the PIC, focus on ways the PIC’s data quality working group can improve public reporting for APGs. OMB did not comment on the recommendations, but the six agencies generally concurred or identified actions they planned to take to implement them. OMB and Agencies Generally Agreed with GAO’s Prior Recommendations to Improve GPRAMA Implementation, but Most Have Not Yet Been Implemented Our work examining aspects of GPRAMA implementation and its effects on agency performance management has identified a number of areas in which improvements are needed. Since GPRAMA was enacted in January 2011, we have made a total of 69 recommendations to OMB and agencies aimed at improving its implementation. OMB and the agencies have generally agreed with the recommendations we have made thus far, and have implemented some of them. However, of the 69 recommendations we have made, 55 (about 80 percent) have not yet been implemented, while 14 recommendations (about 20 percent) have been implemented. Additional details on these recommendations and their status are included in appendixes II, III and IV. We made 21 recommendations to OMB and agencies between May 2012, when we issued our first report on GPRAMA implementation, and June 2013, when we issued our previous summary report. Fourteen (about 67 percent) of these recommendations have not been implemented. Between July 2013 and September 2015, we made 48 additional recommendations to OMB and the agencies. Forty-one (about 85 percent) of these recommendations have not been implemented. Figure 11 shows the number of recommendations we have made, by year, and the number that have been implemented. 
OMB, which has been the focus of most of our recommendations, has implemented just over one-third (14) of the 38 recommendations we have made to it. Because of the agency's central role in implementing GPRAMA, we made more recommendations to OMB in our work under the act than to all other agencies combined. Most of the actions OMB has taken to implement our recommendations involve updating or issuing new guidance. Agencies have yet to implement any of the 31 recommendations we have made, although we made most (23) of these recommendations in reports that we have issued since July 2015. Specifically, these 23 recommendations were included in our recent work on data-driven reviews and the quality of performance information, and they focus on ensuring that agency data-driven review processes and reporting on the quality of performance information are consistent with GPRAMA requirements, OMB guidance, and leading practices. While OMB has implemented some of our recommendations, some of those that have yet to be implemented focus on long-standing and significant issues. For example, as described previously, we have made several recommendations to identify and assess the contributions of tax expenditures toward executive branch goals, but OMB and agencies have taken little action to address these recommendations. Additionally, we have reported that agencies have difficulty measuring the performance of different program types, such as grants and regulations. We have identified individual examples of these problems, but our work has also shown that some areas—such as customer service—are common problems across multiple agencies. Agencies have not yet implemented recommendations we made in our October 2014 report on agency customer service standards. We have also made numerous recommendations aimed at improving the effectiveness of various aspects of GPRAMA implementation. 
These recommendations focus on a range of areas, including making federal program inventories more useful, strengthening data-driven review practices, and improving goal leader accountability mechanisms. As we have stated, effective GPRAMA implementation has the potential to improve performance management across government and can help address crosscutting issues, promote the use of performance information, increase alignment of performance with results, and improve transparency. We will continue to monitor OMB's and agencies' actions to implement our recommendations. Agency Comments We provided a copy of this draft report to the Director of the Office of Management and Budget for its review and comment. On September 18, 2015, OMB staff provided us with oral comments on the report. OMB staff generally agreed with the information presented in the report, and provided us with technical clarifications, which we have incorporated as appropriate. We are sending copies of this report to interested congressional committees, the Director of the Office of Management and Budget, and other interested parties. This report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or mihmj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of our report. Key contributors to this report are listed in appendix VI. Appendix I: Objectives, Scope, and Methodology The GPRA Modernization Act of 2010 (GPRAMA) includes a provision for us to review implementation of the act at several critical junctures and provide recommendations for improvements to implementation. 
Specifically, we are required to evaluate and report on how implementation of the act is affecting performance management at the agencies subject to the Chief Financial Officers Act of 1990, as amended, and to evaluate the implementation of cross-agency priority (CAP) goals, federal government performance plans, and related reporting by September 2015. This report pulls together findings from our work related to the act and on federal performance and coordination issues, focusing on work issued since our last summary report on GPRAMA in June 2013, as well as some results from two ongoing engagements. Our objectives for this report were to evaluate how GPRAMA implementation has affected progress in addressing four areas: (1) crosscutting issues; (2) the extent to which performance information is useful and used; (3) aligning daily operations with results; and (4) communication of performance information. To address these objectives, we reviewed GPRAMA, Office of Management and Budget (OMB) guidance, and our past and recent work related to managing for results and the act. We also interviewed OMB and Performance Improvement Council staff. Our recent work under GPRAMA, both ongoing and issued since June 2013, covered the 24 CFO Act agencies and the Army Corps of Engineers-Civil Works. Most (8) of the 12 reports that are the focus of this report used selected agencies as case illustrations. Half of the 12 reports included government-wide reviews, and in some cases involved surveys of all or most of the CFO Act agencies. This report also includes some results from our ongoing work examining the implementation of CAP goals, which we plan to issue at the end of 2015. We identified lessons learned from the interim CAP goal period, and we assessed initial progress implementing the current set of CAP goals. 
To do this, we selected 7 of the 15 CAP goals for examination, interviewed officials with responsibility for implementing these goals, and reviewed relevant guidance and documentation. To provide insight into both interim and new CAP goals, we initially selected 2 of each at random, resulting in Open Data and STEM Education, which were also interim CAP goals, and Job-Creating Investment and Lab-to-Market, which are new CAP goals. Because GAO did recent work on three additional CAP goals—Customer Service, People and Culture, and Smarter IT Delivery—we also selected those goals. We interviewed OMB and PIC staff responsible for management and implementation of the current CAP goals and responsible agency officials, including CAP goal leaders and members of the seven CAP goal implementation teams. We reviewed OMB and PIC guidance, relevant documentation, and quarterly progress updates published on Performance.gov from the second quarter of fiscal year 2014 through the second quarter of fiscal year 2015, published in June 2015. This report also reflects some results from our ongoing work on major management challenges, which we also plan to issue at the end of 2015. We compared information reported in 24 agency performance plans and reports against GPRAMA requirements and OMB Circular A-11 guidance to identify agency activities and reporting related to major management challenges. We interviewed OMB staff about their guidance related to major management challenges. We also interviewed 24 CFO Act agency performance officials, including performance improvement officers, program office officials, and, when appropriate, officials from agencies' Chief Financial Officer and Chief Human Capital Officer offices to understand how agencies defined and addressed their major management challenges. We conducted this performance audit from April 2015 to September 2015 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Recommendations to OMB from GAO’s Work under the GPRA Modernization Act That Have Been Implemented Table 4 shows recommendations we have made as part of our work under the GPRA Modernization Act (GPRAMA) that the Office of Management and Budget (OMB) has implemented. Appendix III: Recommendations to OMB from GAO Work under the GPRA Modernization Act That Have Not Been Implemented Table 5 shows recommendations we have made to the Office of Management and Budget (OMB) as part of our work under the GPRA Modernization Act (GPRAMA) that have not been implemented. Appendix IV: Recommendations to Agencies from GAO’s Work under the GPRA Modernization Act That Have Not Been Implemented Table 6 shows recommendations we have made to agencies as part of our work under the GPRA Modernization Act (GPRAMA) that have not been implemented. Appendix V: GAO Contact and Staff Acknowledgments GAO Contact: Staff Acknowledgments In addition to the above contact, Sarah E. Veale (Assistant Director) and Kathleen Padulchick supervised this review and the development of the resulting report. Margaret M. Adams, Shea Bader, Lisette Baylor, Peter Beck, Elizabeth Curda, Dewi Djunaidy, Deirdre Duffy, Karin Fangman, Jennifer M. Felder, Farrah Graham, Emily Gruenwald, Jonathan Harmatz, Jennifer Kamara, Barbara Lancaster, Dainia Lawes, Benjamin T. Licht, Adam Miles, Michael O’Neill, Lisa Pearson, Steven Putansu, MaryLynn Sergent, Stephanie Shipman, Matthew Sweeney, and Dan Webb also made key contributions. 
Related GAO Products Managing for Results: Greater Transparency Needed in Public Reporting on the Quality of Performance Information for Selected Agencies’ Priority Goals. GAO-15-788. Washington, D.C.: September 10, 2015. DATA Act: Progress Made in Initial Implementation but Challenges Must be Addressed as Efforts Proceed. GAO-15-752T. Washington, D.C.: July 29, 2015. Managing for Results: Practices for Effective Agency Strategic Reviews. GAO-15-602. Washington, D.C.: July 29, 2015. Managing for Results: Agencies Report Positive Effects of Data-Driven Reviews on Performance but Some Should Strengthen Practices. GAO-15-579. Washington, D.C.: July 7, 2015. 2015 Annual Report: Additional Opportunities Exist to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-15-404SP. Washington, D.C.: April 14, 2015. Fragmentation, Overlap, and Duplication: An Evaluation and Management Guide. GAO-15-49SP. Washington, D.C.: April 14, 2015. Government Efficiency and Effectiveness: Opportunities to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-15-522T. Washington, D.C.: April 14, 2015. High-Risk Series: An Update. GAO-15-290. Washington, D.C.: February 11, 2015. Federal Data Transparency: Effective Implementation of the DATA Act Would Help Address Government-wide Management Challenges and Improve Oversight. GAO-15-241T. Washington, D.C.: December 3, 2014. Program Evaluation: Some Agencies Reported that Networking, Hiring, and Involving Program Staff Help Build Capacity. GAO-15-25. Washington, D.C.: November 13, 2014. Government Efficiency and Effectiveness: Inconsistent Definitions and Information Limit the Usefulness of Federal Program Inventories. GAO-15-83. Washington, D.C.: October 31, 2014. Managing for Results: Selected Agencies Need to Take Additional Efforts to Improve Customer Service. GAO-15-84. Washington, D.C.: October 24, 2014. 
Managing for Results: Agencies’ Trends in the Use of Performance Information to Make Decisions. GAO-14-747. Washington, D.C.: September 26, 2014. Managing for Results: Enhanced Goal Leader Accountability and Collaboration Could Further Improve Agency Performance. GAO-14-639. Washington, D.C.: July 22, 2014. Managing for Results: OMB Should Strengthen Reviews of Cross-Agency Goals. GAO-14-526. Washington, D.C.: June 10, 2014. Government Efficiency and Effectiveness: Views on the Progress and Plans for Addressing Government-wide Management Challenges. GAO-14-436T. Washington, D.C.: March 12, 2014. Managing for Results: Implementation Approaches Used to Enhance Collaboration in Interagency Groups. GAO-14-220. Washington, D.C.: February 14, 2014. Financial and Performance Management: More Reliable and Complete Information Needed to Address Federal Management and Fiscal Challenges. GAO-13-752T. Washington, D.C.: July 10, 2013. Managing for Results: Executive Branch Should More Fully Implement the GPRA Modernization Act to Address Pressing Governance Challenges. GAO-13-518. Washington, D.C.: June 26, 2013.
Effective implementation of GPRAMA can help address significant and long-standing budget, management, and performance challenges the federal government faces. This report is the latest in a series of GAO work in response to a statutory provision to review GPRAMA implementation. It examines how implementation has affected progress in addressing (1) crosscutting issues; (2) the extent to which performance information is useful and used; (3) alignment of daily operations with results; and (4) communication of performance information. To address these objectives, GAO reviewed GPRAMA and related guidance, recent and ongoing work related to these four areas, and the status of the 69 recommendations made to OMB and agencies as part of GAO's prior work on GPRAMA implementation. GAO also interviewed OMB and Performance Improvement Council (PIC) staff. GAO included work issued since June 2013, when GAO issued the previous summary report on GPRAMA's initial implementation. GAO is not making any new recommendations in this report. OMB and agencies generally agreed with the 69 related recommendations GAO made between 2012, when GAO issued its first report in response to the statutory provision in GPRAMA, and now, but most recommendations (about 80 percent) have not yet been implemented. GAO shared a draft of this report with OMB. OMB staff generally agreed with the information presented in the report and provided technical clarifications, which GAO incorporated as appropriate. GAO's work over the past 2 years shows that implementation of the GPRA Modernization Act (GPRAMA) continues to be uneven, with varying effects on agencies' performance management. Some progress has been made in areas where GAO has made prior recommendations; however, GAO has continued to identify a range of long-standing challenges in the four areas discussed below. 
The executive branch still needs to take additional actions to address crosscutting issues, but the Office of Management and Budget (OMB) has increased emphasis on governance of cross-agency priority (CAP) goals. For example, OMB has issued new guidance and governance for CAP goals, which cover areas where cross-agency collaboration is needed. However, more effective implementation of GPRAMA requirements, such as the requirement that agencies develop inventories of their programs, would help address crosscutting issues by providing decision makers with comprehensive program and funding information. Ensuring performance information is useful and used by managers remains a challenge, but OMB and agencies are implementing processes that may lead to improvements. Agencies continue to have problems effectively using performance information. GAO's analysis indicates that agencies' reported use of performance information generally did not improve between 2007 and 2013. However, as OMB and agencies continue to implement data-driven and strategic review processes, the use of performance information should improve. For example, GAO found that nearly all of the 22 agencies that reported holding in-person data-driven reviews of agency priority goals (APGs)—which represent agencies' highest priorities—said they use the reviews to assess progress on APGs and that the reviews have had a positive effect on goal progress. Similarly, some agencies have increased their use of, or enhanced their efforts to improve their capacity to use, other evidence-based tools, such as program evaluations. Agencies continue to face challenges linking individual and agency performance to results. GPRAMA provisions, such as the requirement that agencies identify a goal leader responsible for APG achievement, promote linkages between individual performance and agency results. 
GAO has recommended that agencies strengthen some mechanisms that can promote this connection, such as through Senior Executive Service performance plans. Agencies also need to take additional actions to address GAO recommendations on measuring performance in a number of areas, such as customer service. OMB and agencies have not clearly communicated reliable and complete financial and performance information, but more effective implementation of GPRAMA requirements would improve transparency. GPRAMA requirements for reporting on the quality of performance information have the potential to improve the transparency of that information. While OMB has updated some of its guidance, improved reporting on the quality of information is not expected from the agencies until the fiscal year 2016 and 2017 reporting cycle.
Background In 1981, FAA began a program to replace and upgrade ATC facilities and equipment. However, systemic management issues, such as frequent turnover in agency leadership, an ineffective organizational culture, and problems with its acquisition process, contributed to cost growth, schedule slippages, and performance shortfalls, leading us to classify FAA’s ATC modernization program as high risk in 1995. That same year, Congress passed legislation that exempted FAA from most federal personnel and acquisition laws and regulations on the premise that FAA needed such freedom to better manage ATC modernization. In December 2000, President Clinton signed an executive order and Congress passed supporting legislation that, together, provided FAA with the authority to create ATO as a performance-based organization (PBO) to control and improve FAA’s management of the modernization effort. In February 2004, FAA reorganized, transferring 36,000 employees, most of whom worked in air traffic services and research and acquisitions, to ATO. Even with the creation of ATO, the current approach to managing air transportation is becoming increasingly inefficient and operationally obsolete. In late 2003, Congress created JPDO to plan NGATS, a system intended to accommodate what is expected to be three times more air traffic by 2025 than there is today. JPDO’s scope is broader than traditional ATC modernization in that it is “airport curb-to-airport curb,” encompassing such issues as security screening and environmental concerns. Additionally, JPDO’s approach will require unprecedented collaboration and consensus among many stakeholders—federal and nonfederal—about necessary system capabilities, equipment, procedures, and regulations. 
Key to this collaboration will be the work of JPDO’s seven partner agencies: the Departments of Transportation, Commerce, Defense, and Homeland Security; FAA; the National Aeronautics and Space Administration (NASA); and the White House Office of Science and Technology Policy. Each of these agencies will play a role in creating NGATS. For example, the Department of Defense has deployed “network centric” systems, originally developed for the battlefield, that are being considered as a framework to provide all users of the national airspace system—FAA and the Departments of Defense and Homeland Security—with a common view of that system. JPDO began its initial operations in early 2004. A Senior Policy Committee, chaired by the Secretary of Transportation and including senior representatives from each of the participating departments and agencies, provides oversight to JPDO. JPDO is located within FAA and reports to the FAA Administrator and to the Chief Operating Officer within ATO. See figure 1. Concurrent with JPDO’s efforts, the European Commission is conducting a project to harmonize and modernize the pan-European air traffic management system. Known as the Single European Sky Air Traffic Management Research Programme (SESAR), the project is overseen by the European Organization for the Safety of Air Navigation (Eurocontrol). Eurocontrol has contracted out the work of SESAR to a 30-member consortium of airlines, air navigation service providers, airports, manufacturers, and others. The consortium is receiving 60 million euros ($73 million) to conduct a 2-year definition phase and produce a master plan for SESAR. ATO Has Taken Steps to Improve ATC Modernization, but Challenges Remain To improve its management of ATC modernization, ATO has taken steps toward having a more results-oriented culture; a flatter, more accountable management structure; and more businesslike management and acquisition processes. 
In addition, ATO is implementing recommendations we have made to address systemic factors that have contributed to cost, schedule, and performance problems with major ATC acquisitions. For the past 2 fiscal years, FAA has met its acquisition performance goals. However, FAA still faces human capital challenges, such as institutionalizing a results-oriented culture and hiring thousands of air traffic controllers during the next decade. FAA also faces challenges in keeping its major system acquisitions on track. ATO Has Begun Efforts to Transform Its Culture, Structures, and Processes to Operate More Efficiently as a PBO ATO is working to establish the results-oriented organizational culture, structures, and processes that are generally associated with a PBO. FAA, through ATO, has established a strategic goal to become a results-oriented organization. One key element of ATO’s strategy is to identify core values and track employees’ attitudes about those values to monitor cultural change. To implement this element, ATO has identified multiple core values: integrity and honesty, accountability and responsibility, commitment to excellence, commitment to people, and fiscal responsibility. ATO is using FAA Employee Attitude Survey data to determine employee attitudes toward these values and has developed a baseline of employee attitudes for use in monitoring changes in attitudes over time. Another key element of ATO’s strategy is to establish a viable, stable, and sustainable organization that can transcend changes in leadership. In our past work, we noted that FAA’s acquisitions workforce did not have an organizational culture and structure that supported the acquisition and deployment of sophisticated technology on the scale used in the national airspace system. For example, acquisitions were impaired because employees and managers acted in ways that did not reflect a strong commitment to mission focus, accountability, adaptability, and coordination. 
Specifically, officials performed little or no mission needs analysis, made unrealistic cost and schedule estimates, and moved to producing systems before completing their development. We also reported that accountability was not well defined or enforced for decisions on requirements and contract oversight. Additionally, vertical lines of authority impaired communication across organizations that needed to coordinate, particularly the acquisitions and operations areas of FAA. Finally, we reported that FAA’s culture of conservatism and conformity rewarded employees for simply following the rules rather than considering innovation. We recommended that FAA develop a strategy for cultural change. Although FAA responded to our recommendation by developing a cultural change strategy and some other related initiatives, these initiatives were neither fully implemented nor sustained. ATO has put a new management structure in place and established more businesslike management and acquisition processes. ATO is structured as a discrete management unit within FAA and is headed by a Chief Operating Officer (COO), who is appointed to a 5-year term. It has become a flatter organization, with fewer management layers. As a result, managers are in closer contact with the services they deliver. ATO is also taking some steps to break down the vertical lines of authority, or organizational stovepipes, that we found hindered communication and coordination across FAA. For example, the COO holds daily meetings with the managers of ATO’s departments and holds the managers collectively responsible for the success of ATO through the performance management system. According to the COO, the daily meetings have been a revelation for some managers who were formerly only focused on and responsible for their own departments. ATO has begun to revise its business processes to increase accountability. 
For example, it has recently established a cost accounting system and made the units that deliver services within each department responsible for managing their own costs. Thus, each unit manager develops an operating budget and is held accountable for holding costs within specific targets. Managers track the costs of their unit’s operations, facilities and equipment, and overhead and use this information to determine the costs of the services their unit provides. Managers are evaluated and rewarded according to how well they hold their costs within established targets. Our work has shown that it is important, when implementing organizational transformations, to use a performance management system to assure accountability for change. Finally, ATO is revising its acquisition processes, as we recommended, and taking steps to improve oversight, operational efficiency, and cost control. To ensure executive-level oversight of all key decisions, FAA plans to revise its Acquisition Management System to incorporate key decision points in a knowledge-based product development process by June 2006. Moreover, as we have reported, ATO formed an executive council to review major acquisitions before they are sent to FAA’s Joint Resources Council. To better manage cost growth, this executive council also reviews project breaches of 5 percent or more in cost, schedule, or performance. FAA has issued guidance on how to develop and use pricing, including guidelines for disclosing the levels of uncertainty and imprecision that are inherent in cost estimates for major ATC systems. Additionally, ATO has begun to base funding decisions for system acquisitions on a system’s expected contribution to controlling operating costs. Finally, FAA is creating a training framework for its acquisition workforce that mirrors effective human capital practices that we have identified, and the agency is taking steps to measure the effectiveness of its training. 
ATO Has Begun to Address Systemic Causes of Delays and Cost Overruns in ATC Modernization

ATO has begun taking actions to address systemic factors that our work has shown contribute individually or collectively to schedule delays or cost overruns in major system acquisitions. Such factors include funding acquisitions at lower levels than called for in agency planning documents, not considering all information technology investments as a complete portfolio, not adequately defining a system’s requirements or understanding software complexity, and not adequately considering customer needs in a system’s functional and performance requirements.

Funding acquisitions at lower levels than called for in agency planning documents. When FAA initiates a major system acquisition, it estimates, and its top management approves, the funding plan for each year. However, when budget constraints do not allow all system acquisitions to be fully funded at the previously approved levels, FAA must decide which programs to fund and which to cut, according to its priorities. When a system acquisition does not receive the annual funding called for in its planning documents, the acquisition may fall behind schedule. This may also postpone the benefits of the new system and can require FAA to continue operating and maintaining the older equipment that the acquisition is intended to replace. For example, reduced funding was one factor that caused FAA to reduce the initial deployment of its ASR-11 digital radar system from 111 systems to 66 systems, as well as defer decisions on further deployment pending additional study. In the meantime, FAA will have to continue maintaining the aging analog radars that the new system was intended to replace.
To address this issue, we recommended that, to help ensure key administration and congressional decision makers have more complete information, FAA identify and annually report which activities under the ATC modernization program have had funding deferred, reduced, or eliminated, and provide detailed information on how those decisions have affected FAA’s ability to modernize the ATC system and related components in the near, mid, and longer term. Such information would make clear how constrained budgets will affect modernization of the national airspace system and how FAA is working to live within its means. According to FAA, the agency intends to better inform Congress in the future by providing information in its capital investment plan, submitted to Congress annually with the President’s Budget, that will identify changes from the preceding year.

Not considering all information technology investments as a complete portfolio. We pointed out that FAA does not evaluate projects beyond the first 2 years of service to ensure that they are aligned with organizational goals. Consequently, the agency could not ensure that projects with a longer service history, totaling about $1.3 billion per year, were still aligned with FAA’s strategic plans and business goals and objectives. We recommended that FAA include these projects in its investment portfolio management for review. FAA’s current version of its Acquisition Management Policy calls for periodic monitoring of in-service systems to collect and analyze performance data to use as the basis for sustained deployment. However, we have not yet evaluated FAA’s implementation of this policy.

Not adequately defining a system’s requirements or understanding software complexity.
Inadequate or poorly defined requirements may contribute to the inability of system acquisitions to meet their original cost, schedule, or performance targets, since developing or redefining requirements as an acquisition progresses takes time and can be costly. In addition, unplanned development work may occur when the agency misjudges the extent to which a commercial-off-the-shelf or nondevelopmental item, such as one procured by another agency, will meet FAA’s needs. For example, FAA sought to use an Army radio as the core of a new digital ATC communication system, but found that the radio did not meet established interference requirements, which contributed to schedule delays. When FAA underestimates the complexity of software development or misjudges the difficulty of modifying available software to fulfill its mission needs, acquisitions may take longer and cost more than expected. FAA’s acquisition of the Local Area Augmentation System (LAAS)—a system that would allow precision instrument approaches and landings in all weather conditions—is a case in point. FAA underestimated LAAS’s software complexity because it inadequately assessed the system’s technology maturity. In particular, the agency misunderstood the potential for radio interference through the atmosphere, which could limit LAAS’s operations. The technical difficulties encountered with LAAS, among other things, led FAA to suspend this acquisition. To reduce these risks, FAA has developed and applied a process improvement model to a number of acquisition projects. This model is used to assess the maturity of FAA’s software and systems capabilities. As we reported, this approach has resulted in enhanced productivity, higher quality, greater ability to predict schedules and resources, better morale, and improved communication and teamwork. However, FAA did not mandate the use of the model throughout the organization. 
In response to our recommendation that FAA institutionalize the model’s use throughout the organization, FAA has begun developing a requirement that acquisition projects have process improvement activities in place before seeking approval from FAA’s investment review board.

Not adequately considering customer needs in a system’s functional and performance requirements. We reported that FAA was not applying best practices used in Department of Defense and commercial product development. Best practices include balancing customer needs with available resources. According to FAA, the agency is now including in its acquisition guidance a requirement that top-level functional and performance requirements reflect the needs of the customer.

FAA Met Its Acquisition Performance Goal for the Second Consecutive Year, but Use of Revised Milestones Does Not Provide Consistent Benchmarks

FAA has now met its acquisition performance goal 2 years in a row. The goal for fiscal years 2004 and 2005 was to have 80 percent of its system acquisitions on schedule and within 10 percent of budget. The goal gradually increases to 90 percent by fiscal year 2008. The increase will make FAA’s acquisition performance goal consistent with targets set in the Department of Transportation’s strategic plan and will comply with the Federal Acquisition Streamlining Act of 1994. Having such a goal is consistent with the President’s Management Agenda, which calls for a commitment to achieve immediate, concrete, and measurable results in the near term, and meeting this goal is a positive step toward better acquisition management. However, if the milestones for an acquisition have changed over the years to reflect changes in its cost and schedule, then using those revised milestones may not provide a complete picture of the acquisition’s progress over time.
For example, the milestones for 3 of the 16 major system acquisitions that we reviewed in detail during 2004 and 2005 were being revised to reflect cost or schedule changes during 2005. These revised milestones, together with revised targets for meeting them, will become the new milestones for fiscal year 2006. While revising milestones and targets that are no longer valid is an appropriate management action, using revised rather than original targets for measuring performance does not provide a consistent benchmark over time. The extent to which an acquisition meets its annual performance targets is one measure of its performance and should be viewed together with other measures, such as its progress against original and revised baselines. The variance reports provided to the FAA Administrator and to Congress may also be useful in evaluating an acquisition’s performance. Since fiscal year 2003, the number of acquisition programs measured by FAA has varied from 31 to 42. According to FAA, the number varies from year to year, in part, because some programs reach completion and others are initiated. The programs that are selected each fiscal year represent a cross section of ATO programs, including investments in new capabilities and others that are ready for use without modification. FAA’s Portfolio of Goals, which provides supplementary information on the agency’s performance goals, asserts that no bias exists in the selection of milestones for performance review, but does not state the basis for this conclusion. The portfolio also states that the milestones selected represent the program office’s determination of the efforts that are “critical” or important enough to warrant inclusion in the acquisition performance goal for the year. However, we have not conducted a detailed examination of the reliability and validity of FAA’s metrics for its acquisition program performance. 
ATO Faces Human Capital Challenges in Creating a More Results-Oriented Culture and Hiring and Training Thousands of Air Traffic Controllers

ATO faces a challenge in sustaining and institutionalizing management focus on its transformation to an effective PBO and a results-oriented culture. Our work has shown that successful transformations and the institutionalization of change in large public and private organizations can take 5 to 7 years or more to fully implement. To ensure that FAA’s focus on cultural change does not diminish as it did in the past, we recommended that FAA provide sustained oversight of efforts to achieve a more results-oriented workforce culture, including periodically monitoring the agency’s progress against baseline data. As discussed, ATO has established a baseline of employee attitudes for use in monitoring cultural change, and similar long-term management attention will be needed to conduct this monitoring and assess ATO’s progress toward becoming a PBO. FAA also faces the challenge of hiring and training thousands of air traffic controllers during the coming decade. According to its controller staffing plan, FAA expects to lose about 11,000 air traffic controllers to voluntary retirements, mandatory retirement at age 56, and other attrition. These retirements stem from the 1981 controller strike, when President Ronald Reagan fired over 10,000 air traffic controllers, forcing FAA to quickly rebuild the controller workforce. From 1982 through 1991, FAA hired an average of 2,655 controllers per year. These controllers will become eligible for retirement during the next decade. To replace these controllers, as well as those who will leave for other reasons, and to accommodate forecasted increases in air traffic, FAA’s plan calls for hiring a total of 12,500 new controllers over the next 10 years.
FAA Faces Challenges in Ensuring Stakeholder Involvement in Major System Acquisitions and Keeping Acquisitions on Schedule and within Budget

Adequately involving stakeholders in a system’s development is important to ensure that the system meets users’ needs. In the past, air traffic controllers were permanently assigned to FAA’s major system acquisition program offices and provided input into air traffic control modernization projects. In June 2005, FAA terminated this arrangement because of budget constraints. According to FAA, it now plans to obtain the subject-matter expertise of air traffic controllers or other stakeholders as needed in major system acquisitions. It remains to be seen whether this approach will be sufficient to avoid problems such as FAA experienced when inadequate stakeholder involvement in the development of new air traffic controller workstations (known as the Standard Terminal Automation Replacement System (STARS)) contributed to unplanned work, which, in turn, led to significant cost growth and schedule delays. Three systems—all communications-related—missed the fiscal year 2005 acquisition performance goal for schedule. According to FAA, the $310 million FAA Telecommunications Infrastructure (FTI) acquisition, which is replacing costly existing networks of separately managed systems and services by integrating advanced telecommunications services, was behind schedule because initial plans did not allow sufficient time for installations. To complete the installations in fiscal year 2008, as originally scheduled, FAA initiated a plan to put the program back on schedule and has met the plan’s milestones since August 2005.
Two other communications acquisition programs also missed the acquisition performance goal for schedule—the $325 million Next Generation Air-to-Ground Communication system, segment 1A, which replaces analog communication systems with digital systems, and the $85 million Ultra High Frequency Radio Replacement, which replaces aging equipment used to communicate with Department of Defense aircraft. According to an FAA official, as the agency assessed its priorities for fiscal year 2005, a decision was made that these programs would receive fewer resources. The resources that were then available were not sufficient to allow the programs to meet established milestones. To the extent that delays in FTI persist, FAA will lose the cost savings that the system was expected to produce. The Department of Transportation’s Office of the Inspector General has reported that FAA did not realize $32.6 million in anticipated operating cost savings in fiscal year 2005 because of the limited progress made in disconnecting legacy circuits. The office also reported that without a nearly tenfold increase in its rate of transferring service to FTI and disconnecting legacy circuits, FAA stands to miss out on an additional $102 million in cost savings in fiscal year 2006. As an alternative to continuing the current FTI program, some experts have suggested that FAA consider outsourcing this activity, as it recently did for its flight service stations. In summary, ATO has made a number of promising moves toward becoming a results-oriented organization, and we view ATO’s efforts to improve its culture, management, and acquisitions process as positive steps. However, ATO has been established for only slightly more than 2 years. Work remains to ensure that these processes become institutionalized. Although it is still too early to evaluate the effectiveness of many of these steps, we are monitoring ATO’s progress. 
As ATO moves forward, it will play a key role in implementing NGATS, as planned by JPDO. I will now discuss the status of JPDO’s planning efforts.

JPDO Has Made Progress in Planning for NGATS, but Faces Challenges and Opportunities in Several Areas

JPDO has engaged in practices that facilitate collaboration among its partner agencies, but faces challenges in continuing to leverage resources from these agencies and in defining the roles and responsibilities of the various entities involved. JPDO has been structured to involve both federal and nonfederal stakeholders, but maintaining the support of nonfederal stakeholders over the long term and soliciting the participation of some stakeholders may prove difficult. JPDO is using a reasonable process for technical planning, but several key technical planning activities remain. Lastly, JPDO is including efforts toward global harmonization in its planning for NGATS, but other opportunities for cooperation have not been fully explored.

JPDO Has Begun to Facilitate Collaboration among Federal Agencies, but Faces Challenges in Continuing to Leverage Resources and in Defining Roles and Responsibilities

Our work to date shows that JPDO is facilitating the federal interagency collaboration that is central to its mission and legislative mandate. According to our research, agencies must have a clear and compelling rationale for working together to overcome significant differences in their missions, cultures, and established ways of doing business. In developing JPDO’s integrated plan, the partner agencies agreed to a vision statement and eight strategies that broadly address the goals and objectives for NGATS. These strategies formed the basis for JPDO’s eight integrated product teams (IPT), and various partner agencies have taken the lead on specific strategies.
Our research has also shown that it is important for collaborating agencies to include the human, technological, and physical resources needed to initiate or sustain their collaborative effort. To leverage human resources, JPDO has staffed the various levels of its organization with partner-agency employees, many of whom work part time for JPDO. To leverage technological resources, JPDO conducted an interagency program review of its partner agencies’ research and development programs to identify work that could support NGATS. Through this process, JPDO identified early opportunities that could be pursued during fiscal year 2007 to produce tangible results for NGATS, such as the Automatic Dependent Surveillance-Broadcast (ADS-B) program at FAA. However, while JPDO’s legislation, integrated plan, and governance structure provide the framework for institutionalizing collaboration among multiple federal agencies, JPDO is fundamentally a planning and coordinating body that lacks authority over the key human and technological resources needed to continue developing plans and system requirements for NGATS. Consequently, the ability to continue leveraging resources of the partner agencies will be critical to JPDO’s success. However, beginning around 2008, JPDO expects a significant increase in its IPTs’ workloads. JPDO officials told us that although the partner agencies have not yet expressed concerns over the time that their employees spend on JPDO work, it remains to be seen whether agencies will be willing to allow their staff to devote more of their time to JPDO. In addition, JPDO anticipates needing more agency resources to plan and implement demonstrations of potential technologies to illustrate some of the early benefits that could be achieved from the transformation to NGATS. This challenge of leveraging resources arises, in part, because the partner agencies have a variety of missions and priorities other than supporting NGATS. 
NASA, for example, while conducting key aeronautical and safety research relevant to NGATS, nonetheless has other competing missions. Recently, NASA’s management determined that for the agency to meet its other mission needs, it would not develop new aviation technologies to the extent that it had in the past. As a result, additional development costs related to NGATS will have to be borne by JPDO, industry, or some combination. JPDO also faces the challenge of clearly defining its partner agencies’ roles and responsibilities. Our work has shown that collaborating agencies should work together to define and agree on their respective roles and responsibilities, including how the collaboration will be led. In JPDO’s case, there is no formal, long-term agreement on the partner agencies’ roles and responsibilities in creating NGATS. According to JPDO officials, a memorandum of understanding that would define the partner agencies’ relationships was being developed as of August 2005, but has not yet been completed. Defining roles and responsibilities is particularly important between JPDO and ATO, because both organizations have responsibilities related to planning the national airspace system’s modernization. ATO has primary responsibility for the ATC system’s current and near-term modernization, while JPDO has responsibility for planning and coordinating a transformation to NGATS over the next 20 years. The roles and responsibilities of each office are currently being worked out. ATO now plans to expand its Operational Evolution Plan so that it applies FAA-wide and represents FAA’s piece of JPDO’s overall NGATS plan. As the roles and responsibilities of the two offices become more clearly defined, there is also a need to better communicate these decisions to stakeholders. 
JPDO Is Structured to Involve Federal and Nonfederal Stakeholders, but Faces a Challenge in Soliciting and Maintaining Support over the Long Term

JPDO has structured itself to involve federal and nonfederal stakeholders throughout its organization. Our work has shown that involving stakeholders can, among other things, increase their support for a collaborative effort. Federal stakeholders from the partner agencies serve on JPDO’s Senior Policy Committee, board, and IPTs. Nonfederal stakeholders may participate through the NGATS Institute (the Institute). Through the Institute, JPDO obtained the participation of over 180 stakeholders from over 70 organizations for the IPTs. The NGATS Institute Management Council, composed of top officials and representatives from the aviation community, oversees the policy and recommendations of the Institute and provides a means for advancing consensus positions on critical NGATS issues. Although JPDO has developed the mechanisms for involving stakeholders and brought stakeholders into the process, it faces challenges in sustaining nonfederal stakeholders’ participation over the long term. Much as with the federal partner agencies, JPDO has no direct authority over the human, technical, or financial resources of its nonfederal stakeholders. To date, these stakeholders’ investment in NGATS has been through their part-time, pro bono participation on the IPTs and the NGATS Institute Management Council. The challenge for JPDO is to maintain the interest and enthusiasm of these nonfederal stakeholders, which will have to juggle their own multiple priorities and resource demands, even though some of the tangible benefits of NGATS may not be realized for several years. For example, stakeholders’ support will be important for programs such as System Wide Information Management (SWIM), which is a prerequisite to future benefits, but may not produce tangible benefits in the near term.
In the wake of past national airspace modernization efforts, JPDO also faces the challenge of convincing nonfederal stakeholders that the government is financially committed to NGATS. While most of FAA’s major ATC acquisition programs are currently on track, earlier attempts at modernizing the national airspace system encountered many difficulties. In one instance, for example, FAA developed a datalink communications system that transmitted scripted e-mail-like messages between controllers and pilots. One airline equipped its aircraft with this new technology, but because of funding cuts, FAA ended up canceling the program. In a similar vein, we have reported that some aviation stakeholders expressed concern that FAA may not follow through with its airspace redesign efforts and are hesitant to invest in equipment unless they are sure that FAA’s efforts will continue. One expert suggested to us that the government might mitigate this issue by making an initial investment in a specific technology before requesting that airlines or other industry stakeholders purchase equipment. In addition to maintaining stakeholder involvement, JPDO faces challenges in obtaining the participation of all stakeholders. In particular, JPDO does not involve current air traffic controllers, who will play a key role in NGATS. The current air traffic control system is based primarily on the premise that air traffic controllers direct pilots to maintain safe separation between aircraft. In NGATS, this premise could change and, accordingly, JPDO has recognized the need to conduct human factors research on such issues, including how tasks should be allocated between humans and automated systems, and how the existing allocation of responsibilities between pilots and air traffic controllers might change. The input of current air traffic controllers who have recent experience controlling aircraft is important in considering human factors and safety issues, as our work on STARS has shown. 
However, as mentioned, no current air traffic controllers are involved in NGATS. In June 2005, FAA terminated its liaison program through which air traffic controllers had been assigned as liaisons to its major system acquisition program offices. This included the liaison assigned to JPDO. Since that time, the National Air Traffic Controllers Association (NATCA), the labor union that represents air traffic controllers, has not been a participant in planning NGATS. Although the NGATS Institute Management Council includes a seat for the union, a NATCA official told us that the union’s head had been unable to attend the council’s meetings. According to JPDO officials, the council has left a seat open in hopes that the controllers will participate in NGATS after a new labor-management agreement between NATCA and FAA has been settled.

JPDO Is Using a Reasonable Process for Technical Planning, but Has Not Yet Completed Key Activities

To conduct the technical planning needed to develop NGATS, JPDO is using an iterative process that appears to be reasonable given the complexity of NGATS. Two fundamental pieces of this technical planning are modeling and developing an enterprise architecture (a tool, or blueprint, for understanding and planning complex systems). JPDO has formed an Evaluation and Analysis Division (EAD), composed of FAA and NASA employees and contractors, to assemble a suite of models that will help JPDO refine its plans for NGATS and iteratively narrow the range of potential solutions. For example, EAD has used modeling to begin studying how possible changes in the duties of key individuals, such as air traffic controllers, could affect the workload and performance of others, such as airport ground personnel. NGATS could shift some tasks now done by air traffic controllers to pilots.
However, EAD has not yet begun to model the effect of this shift on pilots’ performance because, according to an EAD official, a suitable model has not yet been incorporated into the modeling tool suite. According to EAD, addressing this issue is difficult because data on pilot behavior are not readily available to use in creating such models. Furthermore, EAD has not studied the training implications of various NGATS-proposed solutions because further definition of the concept of operations for these solutions has not been completed. As the concept of operations matures, it will be important for air traffic controllers and other affected stakeholders to provide their perspectives on these modeling efforts. In addition, as the concept of operations and plans for sequencing equipment mature, EAD will be able to study the extent to which new air traffic controllers will have to be trained to operate both the old and the new equipment. To develop an enterprise architecture, JPDO has taken several important first steps and is following several effective practices that we have identified for enterprise architecture development. However, JPDO’s enterprise architecture is currently a work in progress. Development of the NGATS enterprise architecture is critical to JPDO’s planning efforts, and many of JPDO’s future activities will depend on the robustness and timeliness of its architecture development. The enterprise architecture will describe ATO’s operation of the current national airspace system, JPDO’s plans for NGATS, and the sequence of steps needed to transition between them. The enterprise architecture will provide the means for coordinating among the partner agencies and private sector manufacturers, aligning relevant research and development activities, integrating equipment, and estimating system costs. To date, JPDO has formed an Enterprise Architecture Division and plans to have an early version of the architecture by the end of fiscal year 2006.
The office has established and filled a chief architect position and established an NGATS Architecture Council composed of representatives from each partner agency’s chief architect office. This provides the organizational structure and oversight needed to develop an enterprise architecture. JPDO’s phased “build a little, test a little” approach for developing and refining its enterprise architecture is similar to a process that we have advocated for FAA’s major system acquisition programs. In addition, this phased development process will allow JPDO to incorporate evolving market forces and technologies in its architecture and thus better manage change.

JPDO Is Planning for Global Harmonization as Part of the NGATS Effort, but Additional Cooperative Activities Have Not Been Fully Explored

Global harmonization is one of the important strategies underlying NGATS, and JPDO has started to plan for harmonization. JPDO officials said they recognize the need to work toward the global harmonization of systems and have met with officials from various parts of the world, including China, East Asia, and Europe, to assess the potential for cooperative NGATS demonstrations. JPDO has a global harmonization IPT, led by managers from ATO’s International Operations Planning Services and FAA’s Office of International Aviation. The IPT’s mission is to harmonize equipment and operations globally and advocate for the adoption of U.S.-preferred transformation concepts, technologies, procedures, and standards. The harmonization IPT finalized its charter in March 2006 and is working to develop an international strategy and outreach plan. In addition to external efforts, the harmonization IPT plans to work as a crosscutting IPT that will raise awareness of global interoperability and standards issues within the other IPTs as they consider product development.
JPDO officials have noted the need to work toward harmonization with the Single European Sky Air Traffic Management Research Programme (SESAR), a major initiative to modernize the airspace system of the European Union. Eurocontrol has been designated to conduct SESAR to both modernize and integrate European air traffic management systems. While similar in many respects to the NGATS planning effort, Eurocontrol has contracted with an industry consortium to conduct the 2-year planning phase of the project. According to several European officials with whom we spoke, global harmonization (and harmonization with the U.S. system specifically) is considered to be a key ingredient for the success of SESAR. Several of these officials said that although the European organization invited JPDO to participate as a full member in SESAR and the organization has indicated its willingness to have reciprocal participation with the United States, personnel exchanges are just beginning to occur. JPDO officials recognize the importance of cooperative efforts and noted that if Europe and the United States were to implement different and incompatible standards and technologies, there could be a major adverse impact on airlines that serve international markets. Nonetheless, these officials point out that JPDO, as a U.S. government entity, could not participate as a member in a private industry effort like the SESAR consortium. FAA is, however, a member of the European Commission’s Industry Consultation Body, which provides advice to SESAR. The JPDO officials also said personnel exchanges and other cooperative activities, such as information exchanges and a joint working group on technical standards, are now being conducted under a memorandum of cooperation between FAA and Eurocontrol. While FAA and the harmonization IPT are planning cooperative activities, our research has identified several other areas where cooperation does not appear to be fully developed. 
For example, the SESAR and NGATS initiatives, despite their similarities, do not have coordination activities such as peer reviews of relevant research, cooperation on safety analysis (such as through the pooling of accident data), or validation of technologies. It is possible that greater cooperation and exchange between NGATS and SESAR might develop once planning has progressed to the development and validation stage. ATO and JPDO are Working to Address Funding Challenges In the face of rising operating costs, ATO has implemented a number of cost control initiatives. Savings realized from ATO efforts to control costs could be used for modernization efforts, including the development of NGATS. Funding flexibility could also help to address these challenges. In addition to the cost savings efforts initiated by ATO, JPDO is identifying potential ways to leverage available resources to support initial NGATS initiatives. ATO Has Begun to Take a Number of Steps to Address Rising Operating Costs To address rising operating costs and improve performance, ATO has developed a formal cost control program that includes completing the development of a cost accounting system and using information from the system to conduct activity value analysis—that is, to assess the value of its products and services to its customers. The cost control program also includes conducting annual business case reviews, primarily for its capital programs, and assisting Congress in identifying funding priorities. To control costs, ATO is decommissioning and consolidating ATC facilities, improving its contract management, pursuing cost reduction opportunities through outsourcing, and avoiding or reducing personnel costs through workforce attrition and efficiency gains. ATO has made significant progress in developing its cost accounting system. 
In doing so, ATO is addressing our long-standing concern that FAA lacked the cost information necessary for decision making and could not adequately account for its activities and major projects, such as its ATC modernization programs. ATO officials have also noted that the system will enhance their ability to accurately determine the costs of providing specific services or products and to compare those costs with the value provided to the organization’s customers. This information will be valuable in prioritizing activities and weighing the costs and benefits of various courses of action when developing and supporting proposed budgets. It will also allow FAA to base funding decisions for system acquisitions on their contribution to reducing the agency’s operating costs. These efforts facilitate ATO’s activity value analysis, through which ATO determines (1) the costs of the products and services provided, (2) the factors that affect the costs, and (3) the value of these products and services, as perceived by ATO’s customers. By comparing the costs of providing services with their value to customers, ATO officials expect the process to help them eliminate activities with low customer value and determine ways to reduce the costs of activities with high customer value. ATO expects business case reviews of its capital programs to reduce its ATC modernization costs by about $62 million in fiscal year 2007 and by nearly $400 million by fiscal year 2008. Over the last 2 years, ATO conducted business case reviews of 81 programs, including 67 capital programs and 14 operations programs. Through these annual reviews, ATO examines each program to ensure that its funding is justified, and if ATO determines that the funding is not justified, it may terminate or restructure the program. To date, ATO has terminated or restructured 6 programs after reviewing the business cases for them, including its Medium Intensity Airport Weather System (MIAWS) program. 
ATO canceled this program’s $4 million spending request. ATO also reduced the funding for a radar replacement program after reviewing its business case and identifying opportunities to conduct more effective maintenance rather than replace radars. Through these combined efforts, FAA expects to reduce costs by $32 million in fiscal year 2007. ATO is working with Congress to discuss proposed projects and maximize capital funds, as we previously recommended. ATO reported that Congress designated approximately $300 million for specific projects in fiscal years 2003 and 2004. In fiscal year 2005, according to ATO, designated projects totaled almost $430 million. In fiscal year 2006, ATO staff met with Senate offices to provide input on projects, and the value of the congressionally designated projects declined, as indicated in table 1. ATO has saved about $84 million to date through initiatives to control its costs. For example, ATO has begun to decommission ground-based navigational aids, such as compass locators, outer markers and nondirectional radio beacons, and to close related ATC facilities as it transitions to a satellite-based navigation system. In fiscal year 2005, ATO decommissioned 177 navigational aids for a savings of $2.9 million. However, ATO has thousands of navigational aids in use, many of which could be decommissioned during the transition to NGATS. Consolidating ATC facilities, including terminal radar approach control (TRACON) facilities and air traffic control centers, can also save costs. According to one estimate, undertaking all of these actions could save ATO approximately $600 million per year. We have also found, in researching cost control efforts undertaken by international air navigation service providers, that consolidating regional administrative offices offers additional potential cost savings. 
While efforts to decommission navigational aids and close ATC facilities offer potential savings, we found that ATO lacks a consistent process for identifying the costs and benefits associated with these efforts. For example, although ATO reported saving $2.9 million in fiscal year 2005 by decommissioning 177 navigational aids, its report did not offset these savings with the costs of decommissioning activities, such as real property disposition (including buildings or real property leases, standby power systems, and fuel storage tanks), site cleanup, and restoration. Experts estimate that the costs of decommissioning all possible navigational aids and conducting the needed environmental remediation could total about $300 million. Opportunities may exist for ATO to reuse these sites to reduce or eliminate environmental cleanup costs. For example, sites could be used for cell phone towers, generating about $100,000 per year in revenue per site. Other sites could be leased as warehouses. Together, these efforts could potentially save FAA up to $14 million per year. However, without a transparent and verifiable process for determining both the costs and benefits, it remains difficult to accurately determine financial savings. As ATO proceeds with these efforts, stakeholders caution that decommissioning navigational aids and closing facilities should entail comprehensive risk mitigation to ensure that ATO retains adequate safety levels. This includes risk prevention, which focuses on elements that the agency can prevent, and risk recovery, which recognizes that some events cannot be prevented and the system must recover from them. It is important that facility closures happen within the context of a logical, well-documented, and reasoned process in consultation with congressional oversight committees. 
Any process to determine closures or consolidations should use consistent, accurate data collection and a common analytical framework to ensure the integrity of the process. ATO is also attempting to examine existing service contracts to better control costs. For example, it has saved about $2 million by renegotiating task orders and modifying contracts for technical assistance provided by contractors that manage facilities and equipment projects. According to ATO, these renegotiations did not affect the associated programs. In addition, ATO has saved about $1 million to date by negotiating cell phone contracts with four large service providers. Formerly, ATO employees arranged individual plans for their work cell phones. ATO also entered into a new contract with natural gas and electricity providers at its Technical Center that has saved about $358,000 to date. Lastly, through a strategic sourcing initiative, it has newly negotiated purchasing deals for support services, including printing and mail services, office equipment and supplies, and information technology hardware and software. As another cost-saving measure, ATO is exploring opportunities for outsourcing work that is now performed by the government. Under the Office of Management and Budget’s Circular A-76 (Revised), federal agencies can compete with and rely on the private sector to enhance productivity. Recently, FAA contracted with Lockheed Martin to operate its flight service stations. According to FAA, this contract will cost approximately $2.2 billion less over 10 years than FAA would have had to pay to operate the stations itself. FAA’s estimate includes the savings it expects to realize as the contractor assumes the costs of providing the services and paying their utility and maintenance costs. FAA is currently working to identify other opportunities to reduce costs through the A-76 process. 
Some experts have suggested that the time may be right for FAA to examine opportunities to contract out the ground portion of its FTI program, through which FAA is replacing air-ground telecommunications networks. According to these experts, this approach could save FAA up to $130 million a year beginning in fiscal year 2008. The FTI program is not expected to provide financial savings until fiscal year 2010; however, the savings might take longer to be realized because the program is falling behind schedule. ATO is working to control personnel costs through both attrition and improved productivity. According to ATO, these efforts have saved about $67 million from the beginning of fiscal year 2005 to date. For example, ATO has saved about $44 million from the attrition of both nonsafety and Flight Service staff. ATO further expects efficiencies and lower training costs to allow a 10 percent reduction in the controller workforce over the next decade. These efficiencies include relying on part-time employees and job-sharing arrangements, implementing split shifts, and improving the management of overtime through an optimal mix of increased staffing and overtime hours to meet workload demands. Through gains in air traffic controllers’ productivity, ATO has reduced its hiring requirements by about 460 controller positions, thereby avoiding salary costs of about $23 million, according to ATO. In addition, ATO is considering the feasibility of saving air traffic controller training costs by allowing graduates of its Air Traffic Collegiate Training Initiative (CTI) to bypass the FAA Academy, where FAA provides initial qualification training to new hires. According to an FAA Academy official, the proposal to allow these graduates to bypass the academy is being considered as part of a comprehensive review of the Collegiate Training Initiative that will be completed this fall. We had previously identified this effort as offering potential savings. 
ATO Faces Challenges in Funding NGATS As the organization primarily responsible for implementing NGATS, ATO will face substantial funding requirements beyond those needed to maintain the current system. Funding constraints have required ATO to carefully scrutinize capital projects and defer or eliminate funding for systems that could support NGATS, such as a precision-landing system augmented by satellites (LAAS), a digital e-mail-type communication system between controllers and pilots (CPDLC), and the next generation air/ground communication system (NEXCOM). Although the cost of NGATS is not yet known, JPDO and ATO are collaborating in developing rough near-term funding requirements for NGATS’s concept definition and development for major categories of air traffic control functions such as automation, communications, navigation, surveillance, and weather. While these funding requirements are not in FAA’s current 5-year spending plan, they could be included once JPDO presents, and FAA accepts, business cases, according to an FAA official. JPDO has identified some key factors that will drive NGATS costs. One of the drivers is the technologies expected to be included in NGATS. Some of these are more complex and thus more expensive to implement than others. A second driver is the sequence in which NGATS technologies will replace the technologies now in use. A third driver is the length of time required to transition to NGATS, since a longer transition period would impose higher costs. JPDO held the first in a series of investment analysis workshops to determine the basis for developing future NGATS cost estimates on April 25 and 26, 2006. This first workshop focused on recommendations from commercial and business aviation, equipment manufacturers, and systems developers. 
Resources Available to Support NGATS Could Be Enhanced through Leveraging and Funding Flexibility Resources available to support NGATS could be enhanced to the extent that JPDO leverages other partner agency resources. JPDO has already moved in this direction by conducting a review of its partner agencies’ research and development programs to identify ongoing work that could support NGATS and the potential for more effective interagency collaboration. Through this process, for example, JPDO successfully requested that FAA pursue funding to accelerate development of ADS-B and SWIM, which are two key systems identified for NGATS. However, JPDO officials told us that, while FAA did receive a funding increase for those systems, FAA did not receive the full amount it had requested. As noted, our past work on FAA’s national airspace modernization program has shown that receiving fewer resources than planned was one factor that contributed to delays in implementing technologies and significant cost increases. To further leverage resources for NGATS, JPDO has issued guidance to its partner agencies identifying areas that JPDO would like to see emphasized in the agencies’ fiscal year 2008 budget requests. JPDO is also working with the Office of Management and Budget to develop a systematic means of reviewing partner agency budget requests so that the NGATS-related funding in each request is easily identified. This includes a review of budgets submitted by the Department of Homeland Security for efforts by the Transportation Security Administration, and the Department of Commerce for efforts by the National Oceanic and Atmospheric Association. Such a process would help the Office of Management and Budget consider NGATS as a unified program rather than as disparate line items distributed across several agencies’ budget requests. Further enhancement to NGATS funding could be achieved by ATO utilizing its existing funding flexibility. 
Under existing law, ATO has a 3- year spending authority for Facilities and Equipment funds. It also has discretion to shift as much as 10 percent of a given program’s funds over a fiscal year. This is important, since annual expenditures for several large capital projects will soon be trending downward. Concurrently, FAA is working to conduct business case reviews of existing capital projects on an annual basis. These combined efforts could potentially yield hundreds of millions of dollars to pursue initial NGATS projects. Concluding Observations ATO has put mechanisms in place to change the culture and business processes that have plagued the past modernization efforts of FAA. ATO’s new cost accounting system and management practices are important steps toward improved accountability. Similarly, it has taken steps, in response to our recommendations, to improve its acquisition processes. However, as I mentioned, ATO faces challenges in sustaining and furthering its transformation to a results-oriented culture, and in many cases, it is still too early to judge the long-term success of these attempts at fundamental organizational change. ATO must continue to measure its progress and work to change the culture at all levels of the organization, as our work has shown that these types of transformations can sometimes take close to a decade to truly become entrenched within the organization. We believe that, overall, ATO is moving in the right direction, and we will continue to monitor its progress. We also believe that JPDO is moving in the right direction in creating an organizational structure that facilitates the federal interagency collaboration that must occur for the office to be successful in its mission. JPDO is working to leverage the various human, technological, and financial resources of its partner agencies. This is key given the coordinating role of JPDO and its lack of authority over key resources needed to continue developing the NGATS plan. 
However, because of this lack of authority, JPDO could be challenged to maintain partner agency and stakeholder commitment to the NGATS effort in the long term. Also, much of the NGATS planning and implementation depends on the development of the NGATS enterprise architecture. Although JPDO has said that a version of the enterprise architecture will be completed later this year, the architecture will require further refinement and commitment from the partner agencies into the future. Transforming the national airspace system to accommodate what is expected to be three times the current amount of traffic by 2025, providing adequate security and environmental safeguards, and doing these things seamlessly while the current system continues to operate, will be an enormously complex undertaking. Both ATO and JPDO have been given difficult tasks in a difficult budgetary environment. Going forward, efforts to control costs and leverage resources will become even more critical. Success also depends on the ability of ATO and JPDO to define their roles and form a collaborative environment for planning and implementing the next generation system. Mr. Chairman, this concludes my statement for the record. Contact and Staff Acknowledgments For further information on this statement for the record, please contact Gerald Dillingham at (202) 512-2834 or dillinghamg@gao.gov. Individuals making key contributions to this statement include Nabajyoti Barkakati, Christine Bonham, Jay Cherlow, Elizabeth Eisenstadt, Colin Fallon, David Hooper, Heather Krause, Elizabeth Marchak, Maren McAvoy, Edmond Menoche, Faye Morrison, Richard Scott, Sarah Veale, and Matthew Zisman. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Over a decade ago, GAO listed the Federal Aviation Administration's (FAA) effort to modernize the nation's air traffic control (ATC) system as a high-risk program because of systemic management and acquisition problems. Two relatively new offices housed within FAA--the Air Traffic Organization (ATO) and the Joint Planning and Development Office (JPDO)--are now primarily responsible for planning and implementing these modernization efforts. Congress created ATO to be a performance-based organization that would improve both the agency's culture, structure, and processes, and the ATC modernization program's performance and accountability. Congress created JPDO, made up of seven partner agencies, to coordinate the federal and nonfederal stakeholders necessary to plan a transition from the current air transportation system to the "next generation air transportation system" (NGATS). This statement is based on GAO's recently completed and ongoing studies of the ATC modernization program. GAO provides information on (1) the status of ATO's efforts to improve the ATC modernization program, (2) the status of JPDO's planning efforts for NGATS, and (3) actions to control costs and leverage resources for ATC modernization and the transformation to NGATS. ATO has taken a number of steps as a performance-based organization to improve the ATC modernization program, but continued management attention will be required to institutionalize these initiatives. ATO has adopted core values, streamlined its management, and begun to revise its acquisition processes to become more businesslike and accountable. For the past 2 years, ATO has met its major acquisition performance goals. ATO still faces challenges, including sustaining its transformation to a results-oriented culture, hiring and training thousands of air traffic controllers, and ensuring stakeholder involvement in major system acquisitions. 
JPDO has made progress in planning for NGATS by facilitating collaboration among federal agencies, ensuring the participation of federal and nonfederal stakeholders, addressing technical planning, and factoring global harmonization into its planning, but JPDO faces challenges in continuing to leverage the partner agencies' resources and in defining the roles and responsibilities of the various agencies involved. JPDO could find it difficult to sustain the support of stakeholders over the longer term and to generate participation from some key stakeholders, such as current air traffic controllers. JPDO has taken steps to develop an enterprise architecture (the blueprint for NGATS) and will have an early version later this year. The robustness and timeliness of this enterprise architecture are critical to many of JPDO's future NGATS planning activities. ATO has taken a number of actions to control costs and maximize capital funds, which will become increasingly important during the transition to NGATS. ATO has established cost control as one of its key performance metrics, developed a cost accounting system, and is using its performance management system to hold its managers accountable for controlling costs. ATO has developed a formal cost control program that includes, among other things, (1) conducting annual business case reviews for its capital programs, (2) decommissioning and consolidating ATC facilities, and (3) pursuing cost reduction opportunities through outsourcing. These cost control initiatives represent an important first step to improved performance but will require review and monitoring.
Background Unpaid Federal Tax Debt Owed As of September 30, 2012, the tax debt of individuals and businesses that owed the U.S. government was about $364 billion, according to the IRS. The tax-debt inventory is the sum of all taxes owed to the IRS at a particular point in time, including debts from the current year and debts from previous years that fall within the 10-year statute of limitations on collections. The inventory of tax debts comprises tax assessments that are not collected along with related penalty and interest charges. Federal taxes that are owed become tax debts when the tax is assessed but not paid. Millions of individual and business taxpayers owe billions of dollars in unpaid federal tax debts, and the IRS expends substantial resources trying to collect these debts. GAO, High-Risk Series: An Update, GAO-13-283 (Washington, D.C.: February 2013). workforce (approximately 312,000 individuals).contain a comparison of the delinquency rates of federal employees with the general population. Laws and Regulations Governing the Security- Clearance Process Passed in 2004, the Intelligence Reform and Terrorism Prevention Act (IRTPA) mandates the President to identify a single entity responsible for, among other things, directing the day-to-day oversight of investigations and adjudications for personnel security clearances throughout the U.S. government. Additionally, agencies may not establish additional investigative or adjudicative requirements without approval from the selected entity, nor may they conduct an investigation where an investigation or adjudicative determination of equal level exists. 
Executive Order 13467 (June 30, 2008) designates ODNI as the Security Executive Agent and assigns responsibility for developing uniform and consistent policies and procedures to ensure the effective, efficient, and timely completion of investigations and adjudications related to determinations of eligibility for access to classified information or eligibility to hold a sensitive position. Executive Order 13467 also designates the Director of OPM as the Suitability Executive Agent responsible for developing and implementing uniform and consistent policies and procedures for investigations and adjudications related to determinations of suitability for federal employment, as well as eligibility for electronic and physical access to secure facilities.outlines a process for continuous evaluation of individuals that are determined to be eligible or currently have access to classified information. Continuous evaluation means reviewing the background of an individual who has been determined to be eligible for access to classified information (including additional or new checks of commercial databases, government databases, and other information lawfully available to security officials) at any time during the period of eligibility to determine whether that individual continues to meet the requirements for eligibility for access to classified information. Executive Order 12968 (August 4, 1995) authorized establishment of uniform security policies, procedures, and practices, including the Federal Investigative Standards used by investigators conducting security-clearance investigations. In December 2012, the Security and Suitability Executive Agents (ODNI and OPM) jointly issued a revised version of Federal Investigative Standards for the conduct of background investigations for individuals that work for or on behalf of the federal government. 
Security-Clearance Process Personnel security clearances are required for access to classified national-security information, which may be classified at one of three levels: confidential, secret, or top secret. A top-secret clearance is generally also required for access to Sensitive Compartmented Information or Special Access Programs. The level of classification denotes the degree of protection required for information and the amount of damage that unauthorized disclosure could reasonably be expected to cause to national security. Unauthorized disclosure could reasonably be expected to cause (1) “damage,” in the case of confidential information; (2) “serious damage,” in the case of secret information; and (3) “exceptionally grave damage,” in the case of top-secret information. As shown in figure 1, to ensure the trustworthiness and reliability of personnel in positions with access to classified information, government agencies rely on a personnel security-clearance process that includes multiple phases: application, investigation, adjudication, and reinvestigation (where applicable, for renewal or upgrade of an existing clearance). The application phase. To determine whether an investigation would be required, the agency requesting a security-clearance investigation is to first conduct a check of existing personnel-security databases to determine whether there is an existing security-clearance investigation underway or whether the individual has already been favorably adjudicated for a clearance in accordance with current standards. 
If such a security clearance does not exist for that individual, a security officer from an agency is to (1) request an investigation of an individual requiring a clearance; (2) forward a personnel-security questionnaire (SF-86) to the individual to complete using OPM’s Electronic Questionnaires for Investigations Processing (e-QIP) system; (3) review the completed questionnaire; and (4) send the questionnaire and supporting documentation, such as fingerprints, to OPM or another designated investigative service provider. The investigation phase. OPM conducts a majority of the government’s background investigations; however, some agencies, such as State, are delegated to conduct their own background investigations. Federal investigative standards and agencies’ internal guidance are used to conduct and document the investigation of the applicant (see app. II). The scope of information gathered during an investigation depends on the level of clearance needed. For example, federal standards require that investigators collect information from national agencies, such as the Federal Bureau of Investigation, for all initial and renewal clearances. For an investigation for a confidential or secret clearance, investigators gather much of the information electronically. For an investigation for a top- secret clearance, investigators gather additional information through more-time-consuming efforts, such as traveling to conduct in-person interviews to corroborate information about an applicant’s employment and education. After the investigation is complete, the resulting investigative report is provided to the agency. The adjudication phase. Adjudicators from an agency use the information from the investigative report to determine whether an applicant is eligible for a security clearance. 
To make clearance-eligibility decisions, national policy requires adjudicators to consider the information against the 2005 Revised Adjudicative Guidelines for Determining Eligibility for Access to Classified Information. The adjudication process is a careful weighing of a number of variables, to include disqualifying and mitigating factors, known as the “whole-person” concept. When a person’s life history shows evidence of unreliability or untrustworthiness, questions can arise as to whether the person can be relied on and trusted to exercise the responsibility necessary for working in a secure environment where protecting national security is paramount. As part of the adjudication process, the adjudicative guidelines require agencies to determine whether a prospective individual meets the adjudicative criteria for determining eligibility, including personal conduct and financial considerations. If an individual has conditions that raise a security concern or may be disqualifying, the adjudicator must evaluate whether there are other factors that mitigate such risks (such as a good-faith effort to repay a federal tax debt). On the basis of this assessment, the agency may make a risk-management decision to grant the security-clearance eligibility determination, possibly with a warning that future incidents of a similar nature may result in revocation of access. The reinvestigation phase. Personnel cleared for access to classified information may have their clearance renewed or upgraded if determined necessary to the performance of job requirements. Reinvestigation covers the period since the previous investigation. Renewal of a clearance at the same level currently undergoes the above process every 10 years for secret or 5 years for top secret. Applicants for clearance upgrades undergo additional steps necessary to obtain the higher clearance level (such as a subject interview for a top-secret clearance). 
Improvements to the Federal Security-Clearance Process We have previously reported on issues related to the federal security- clearance process. For example, in 2005, we designated DOD’s personnel security-clearance program—which comprises the vast majority of government wide clearances—as a high-risk area. This designation continued through 2011 because of concerns regarding continued delays in the clearance process and security-clearance documentation, among other things.personnel security-clearance program as a high-risk area, DOD, in Since we first identified the DOD conjunction with Congress and executive-agency leadership, took actions that resulted in significant progress toward improving the processing of security clearances. Congress held more than 14 oversight hearings to help oversee key legislation, such as the Intelligence Reform and Terrorism Prevention Act of 2004 (IRTPA), which helped focus attention and sustain momentum of the government-wide reform effort. In 2011, we removed DOD’s personnel-security clearance program from our high-risk list because of the agency’s progress in improving timeliness, development of tools and metrics to assess quality, and commitment to sustaining progress. In 2012, we found that OPM’s reported costs to conduct background investigations increased by about 80 percent, from about $600 million in fiscal year 2005 to almost $1.1 billion in 2011 (in fiscal year 2011 dollars). OPM’s background investigation program has several cost drivers, including investigation fieldwork and personnel compensation for OPM’s background-investigation federal workforce. OPM attributed cost increases to, in part, an increase in the number of top-secret clearance investigations, which involve additional field work and more comprehensive subject interviews and compliance with investigation timeliness requirements. 
More Than 8,000 Individuals Eligible for Security Clearances Owe about $85 Million in Federal Taxes; About Half Are on Payment Plans with the IRS About 240,000 employees and contractors of civilian executive-branch agencies, excluding known employees and contractors of DOD and intelligence agencies, had a federal security clearance or were approved for secret or top-secret clearances due to a favorable adjudication from April 1, 2006, through December 31, 2011. These adjudications include both initial investigations, when an individual applies for a clearance, and reinvestigations, when an individual upgrades to a higher clearance level or renews an existing clearance. About 8,400 of the 240,000 people (approximately 3.5 percent) had unpaid federal tax debt as of June 30, 2012, totaling about $85 million. The characteristics of these 8,400 individuals with tax debt are discussed below. About half of the individuals are in a repayment plan with the IRS. According to IRS data, about 4,200 of these 8,400 individuals with tax debt had a repayment plan with the IRS to pay back their debt as of June 30, 2012. The tax debt owed by those on a repayment plan was approximately $35 million. Repayment plans, or installment agreements, are monthly payments made to the IRS that allow individuals or entities to repay their federal tax debt over an extended period. About half of individuals with tax debt were federal employees. Some of these individuals were favorably adjudicated as eligible for a top-secret clearance during our time frame (April 1, 2006, to December 31, 2011), while the others were favorably adjudicated as eligible for a secret clearance. Most individuals accrued tax debt after clearance adjudication. Approximately 6,300 individuals (about 76 percent) accrued tax debts only after the issuance of the security clearance. Approximately 2,000 individuals (about 25 percent) accrued their tax debt before the approval for the security clearance. Age and amount of tax debt range widely.
About 16 percent of the $85 million in unpaid federal taxes was delinquent for more than 3 years, and approximately 6 percent was delinquent for more than 5 years. Further, the unpaid tax debt of each individual ranged from approximately $100 to over $2 million, and the median tax-debt amount owed by these individuals was approximately $3,800. In addition, we analyzed 13 nongeneralizable case examples—7 federal contractors and 6 federal employees from DOE, DHS, and State—to determine whether existing investigative and adjudication mechanisms detected unpaid tax debt during the security-clearance process. In 8 of these 13 cases, the individual had a top-secret clearance; the remaining 5 had secret clearances. In 5 of the 13 cases, the individual had a reinvestigation of the security clearance after the period of our analysis (April 1, 2006, to December 31, 2011). In 11 of these 13 cases, the individual’s tax debt accrued before the favorable adjudication of the security clearance. For all 11 of these cases, the tax debt was identified either through the initial investigation or through the reinvestigation. In 2 of the 13 cases, the individual’s tax debt accrued after the favorable adjudication of the security clearance, and no indication of the federal tax debt was found in the security-clearance files. On the basis of our review of IRS records, we found that 12 of the 13 individuals filed their tax returns late for at least 1 tax year, and 6 of the 13 individuals did not file at least one annual tax return. These case examples are discussed throughout this report. Federal Agencies Have Mechanisms to Detect Tax Debt, but Opportunities Exist to Strengthen Detection Capabilities To detect federal tax debt for clearance applicants, consistent with federal law, federal investigators rely primarily on two methods: (1) applicants self-reporting tax debts, and (2) validation techniques, such as the use of credit reports or in-person interviews.
Each of these methods has shortcomings in detecting unpaid federal tax debts of clearance applicants. Moreover, federal agencies do not routinely monitor individuals after the security clearance is favorably adjudicated to identify tax debt accrued subsequent to the clearance approval. Additional mechanisms that provide large-scale, routine detection of federal tax debt could improve federal agencies’ ability to detect tax debts owed by security-clearance applicants and current clearance holders. Investigation Mechanisms Include Self-Reporting and Validation Techniques Self-Reporting As part of the application-submission phase of the security-clearance process, applicants must submit various background and biographical information using OPM form SF-86. In addition, new federal employees typically complete the Declaration for Federal Employment form (OF-306). Both of these forms require applicants to disclose whether they are delinquent on any federal debt, including tax debts. The SF-86 is used in conducting background investigations, reinvestigations, and continuous evaluations of federal employees and contractors. The SF-86 requires applicants to declare whether they failed to file or pay any federal taxes within the past 7 years. The SF-86 also requires applicants to disclose whether any liens were placed on their property for failure to pay tax debts and whether they are currently delinquent on any federal debts (including tax debts). Similar to the SF-86, the Declaration for Federal Employment requires applicants to disclose whether they are delinquent on any federal debt, including tax debts. If the applicant is delinquent, the applicant is required to disclose, among other things, the type and amount of debt and any steps taken to repay the debt. An excerpt of the SF-86 where applicants are required to disclose any tax issues is illustrated in figure 2.
Of the 13 individuals we examined from our nongeneralizable sample, 11 had accrued debt prior to the clearance being granted. Our review of the SF-86 documentation for these 11 cases found that 5 individuals did not properly disclose their tax debts. Each of these individuals owed at least $12,000 at the time of our review. As discussed later in this report, our past work has focused on the inadequacies of relying on self-reported information without independent verification and review. During the investigative phase, the investigative agency can perform several activities in an effort to validate applicants’ certifications about the nature and extent of their tax debts, but each of the techniques has limitations, as discussed below. Obtaining credit reports of the applicants. According to OPM officials, credit reports, which contain public records including federal tax liens, are the primary method of identifying federal tax debts that were not self-reported. However, credit reports contain information only on tax debts for which the IRS filed a lien on the debtor’s property, both real and personal, for the amount of the unpaid tax. Circumstances do not warrant a lien being filed in all cases, such as when the amount of the debt is in dispute or when the IRS determines that filing a lien would hamper collection of the debt because the debtor is trying to obtain a loan to pay it off. In addition, the IRS generally does not file liens until after the debt has moved out of notice status and there is property on which a lien can be placed. The amount owed can increase with interest and penalties or can decrease as the debtor makes payments, but neither change is reflected in the recorded tax-lien amount. Our analysis found that about 450 of the approximately 8,400 delinquent taxpayers (about 5 percent) who were favorably adjudicated as eligible for security clearances had a tax lien filed against them.
Of the 13 cases that we reviewed, 4 involved tax liens filed against the individuals that were identified during the investigative process. Conducting in-person interviews. As part of the investigation, investigators may conduct interviews with the applicant and his or her friends, former spouses, neighbors, and other individuals associated with the applicant. According to OPM, tax debt could be disclosed during the course of the in-person interviews, but there is no systematic way to identify tax debt during the interviews. For example, according to State officials, state tax debt is usually an indicator that the individual also owes federal taxes. Thus, during the course of their in-person interviews, investigators will often ask the applicant whether he or she owes federal taxes when a state tax debt is discovered. However, the in-person interviews can be a time-consuming and resource-intensive process, and OPM does not have assurance that it identifies all tax-debt information through the interview process. Agencies Do Not Routinely Monitor Current Clearance Holders for Tax Debt Federal agencies generally do not have routine mechanisms to review federal tax compliance for individuals who hold security clearances. Specifically, there is no process to detect unpaid federal tax debts accrued after an individual has been favorably adjudicated as eligible for a security clearance unless the debt is self-reported, reported by a security manager because of garnishment of wages, or discovered during a clearance reinvestigation (renewal) or upgrade. Given that individuals who hold security clearances are reinvestigated every 10 years for secret clearances and every 5 years for top-secret clearances, if an individual accrues tax debt after a security clearance is granted, the unpaid federal tax debt may not be detected for up to 5 to 10 years.
As previously discussed, in 2 of our 13 case studies, the individuals’ tax debt accrued after the favorable adjudication of the security clearance, and we found no indication that the federal tax debt was identified in the security-clearance file. In addition, if the tax debt is not found in the initial investigation, the federal agency may not detect the tax debt until the next security-clearance reinvestigation. In 5 of the cases that we reviewed, existing federal tax debt was not identified in the original adjudication of the security clearance but was identified through the subsequent reinvestigation, meaning the individuals had tax debt unknown to the federal agency while holding a clearance for some period of time. This gap represents a risk that could be mitigated by a mechanism to routinely obtain tax-debt information, as discussed later in this report. Opportunities Exist to Improve Detection of Tax Debt Owed by Security-Clearance Applicants and Clearance Holders Additional mechanisms that provide large-scale detection of federal tax debt could improve federal agencies’ ability to detect tax debts owed by security-clearance applicants and security-clearance holders, but statutory privacy protections limit access to this information. Specifically, access to the federal tax information needed to obtain the tax-payment status of applicants is restricted under section 6103 of the Internal Revenue Code, which generally prohibits disclosure of taxpayer data to federal agencies and others, including disclosures to help validate an applicant’s certifications about the nature and extent of his or her tax debt. During our interviews, ODNI, DHS, DOE, and State officials expressed interest in establishing additional mechanisms to provide large-scale detection of unpaid tax debt owed by security-clearance applicants.
ODNI officials stated that they formed a working group in 2012, in collaboration with OPM and other federal agencies, to, among other things, explore whether an automated process for reviewing federal tax compliance can be established. However, restrictions on taxpayer information under section 6103 may present challenges to their efforts. For example, in 2011 and 2012, State asked the IRS to provide a listing of State employees who owed federal taxes. According to State officials, the IRS did not formally respond in writing to the State letters but stated that it could not provide this list because of section 6103 restrictions. According to IRS officials, based on their analysis of the applicable tax laws, the IRS cannot disclose tax information of federal employees without the taxpayer’s consent. Federal agencies may obtain information on federal tax debts directly from the IRS if the applicant provides consent. For example, agencies can use IRS form 4506-T, Request for Transcript of Tax Return, to obtain tax transcripts that provide basic taxpayer information, including marital status, type of return filed, adjusted gross income, taxable income, and later adjustments, if any, if the individual provides written consent. However, this form may not be useful in conducting routine checks with the IRS during the initial investigation and reinvestigation processes, for three reasons. First, the use of IRS form 4506-T is a manual process, and thus it is not conducive to the large-scale detection of unpaid federal taxes owed by security-clearance applicants, according to OPM, DHS, and State officials. Instead, this method is typically used when a federal tax debt is disclosed by the applicant or discovered during the investigation.
Second, IRS form 4506-T generally provides limited visibility into an applicant’s overall tax-debt status because the form requires the requesting agency to identify the specific tax modules (generally, time periods) to be disclosed, and, as such, agencies may not obtain the complete tax-debt history of the individual. Finally, IRS form 4506-T has a 120-day time limit, from the date of the applicant’s signature providing consent, to process the form with the IRS. State officials stated that this limited time frame could hinder their ability to obtain the requested tax information if the form was completed at the time of the security-clearance application. As highlighted in our past work, it is important that the establishment of any federal tax-compliance check not delay the timeliness of security-clearance decisions. Specifically, timeliness concerns were one of the reasons that we designated the security-clearance process as high risk from 2005 to 2011. As we concluded in July 2012, delays in the security-clearance process could pose risks to national security, impede the start of classified work and the hiring of the best-qualified workers, and increase the government’s cost of national-security-related contracts. The Treasury Offset Program (TOP), or a similar mechanism, may provide an opportunity for federal agencies to perform an automated check of both security-clearance applicants and current clearance holders to determine whether they have unpaid federal debts, including tax debts, without violating section 6103 requirements. TOP is an automated process administered by the Department of the Treasury in which certain federal payments, such as contractor and federal salary payments, are reduced to collect certain delinquent tax and nontax debts owed to federal agencies, including the IRS.
Each week, the IRS sends the Department of the Treasury’s Bureau of the Fiscal Service (Fiscal Service) an extract of its tax-debt files, which are uploaded into TOP and matched against Fiscal Service payment data (such as federal contractor payments, federal salary payments, and Social Security Administration retirement payments). If there is a match and the IRS has completed all statutory notifications, any federal payment owed to the debtor is reduced (levied) to help satisfy the unpaid federal taxes. As we concluded in our past work, since TOP commingles information regarding tax debt and nontax debt, the existence of an employee’s name in TOP would generally not be considered taxpayer information subject to section 6103 of the tax code. Thus, TOP could be used to identify individuals who may owe federal debts, which include federal taxes, without compromising the privacy protections provided by section 6103. TOP currently reports a federal debt indicator (comprising both federal tax and nontax debts) to the System for Award Management (SAM) on whether a federal contractor or grant recipient has federal debts. SAM is a government-wide database used to track the status of agency procurements. As of September 2012, the IRS had referred approximately $167 billion (approximately 44 percent) of its $373 billion total unpaid-tax-assessment inventory to TOP; thus, this program is an important repository of tax-debt data for federal workers and contractors.
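The weekly match-and-levy cycle described above can be sketched in simplified form. This is an illustrative model only, not Treasury’s actual implementation: the record layouts, the `match_and_levy` function, and the flat 15 percent levy rate are assumptions for the example (actual levy percentages vary by payment type and statute).

```python
# Illustrative sketch of a TOP-style matching pass: debt records are matched
# against outgoing federal payments by taxpayer identification number (TIN),
# and each matched payment is reduced (levied) to help satisfy the debt.
# Record layouts and the 15 percent rate are assumptions for this example.

def match_and_levy(debts, payments, levy_rate=0.15):
    """Return (levies, remaining_debts) after one matching pass.

    debts:    {tin: amount_owed} uploaded from the weekly debt extract
    payments: list of (tin, payment_amount) disbursements
    """
    levies = []
    remaining = dict(debts)
    for tin, amount in payments:
        owed = remaining.get(tin, 0)
        if owed > 0:
            # Withhold a fraction of the payment, never more than is owed.
            levy = min(round(amount * levy_rate, 2), owed)
            levies.append((tin, levy))
            remaining[tin] = round(owed - levy, 2)
    return levies, remaining
```

In this sketch, a payee with no debt record passes through untouched, which mirrors the report’s point that the mere presence or absence of a match, rather than any tax-specific detail, is what a screening mechanism would expose.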
The IRS typically sends tax debts to TOP except in cases where (1) the IRS has not completed its notification process, (2) tax debtors have filed for bankruptcy protection or other litigation, (3) tax debtors have agreed to pay their tax debt through monthly installment payments or have requested to pay less than the full amount owed through an offer in compromise, (4) the IRS has determined that the tax debtors are in financial hardship, (5) tax debtors are filing an amended return, or (6) the IRS has determined that specific circumstances (such as a criminal investigation) exist that warrant special exclusion from the Federal Payment Levy Program (FPLP). Thus, the debts that are typically sent to TOP are those where the taxpayer has not shown a willingness to resolve his or her tax debts. It is important that these individuals be identified, because inability or unwillingness to satisfy debts is a potentially disqualifying factor according to the adjudicative guidelines and can be an important factor in determining whether an individual should be eligible for a security clearance. Our analysis found that 1,600 (approximately 20 percent) of the 8,400 taxpayers who had been granted security clearances during this 5-year period had tax debts that were referred to TOP. A mechanism similar to or using TOP could be useful in identifying individuals who have not shown a willingness to resolve their tax debts and who are applying for a security clearance or already have one. This type of mechanism could be especially advantageous in monitoring clearance holders to identify circumstances (such as the nonpayment of federal taxes) that might warrant a reevaluation of an individual’s security-clearance eligibility. As discussed earlier, our analysis found that 6,375 individuals (approximately 76 percent) accrued their tax debt after the approval for the security clearance.

While ODNI officials reported forming a working group with OPM and other federal agencies to explore an automated process for reviewing federal tax compliance, ODNI, IRS, and Fiscal Service officials stated that they have not explored the use of TOP for identifying individuals who owe federal debts, including tax debts. ODNI officials stated that they would be supportive of processes that automatically checked security-clearance applicants for federal tax debts. Fiscal Service officials stated that they did not foresee any potential operational issues with using TOP more broadly for these purposes. However, IRS and Fiscal Service officials stated that a legal analysis would need to be performed to determine whether the TOP information could be used for the purpose of performing background investigations. Separate from TOP, agencies may determine that a change in law is required to access taxpayer information without having to obtain consent from the individual. If it is determined that a change in law would be required, the IRS and federal agencies could consider various factors in determining whether they should seek legislative action for disclosing taxpayer information as part of the security-clearance process. Specifically, as we concluded in December 2011, it is important that Congress consider both the benefits expected from a disclosure of federal tax information and the expected costs, including reduced taxpayer privacy, risk of inappropriate disclosure, and negative effects on tax compliance and tax-system administration. While knowingly making false statements on federal security-clearance forms is a federal crime and may deter some from lying about their tax debt, much of our prior work has focused on the inadequacies of using voluntary, self-reported information without independent verification and review.
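The referral logic described above—tax debts are sent to TOP only when none of the six exclusion conditions applies—amounts to a simple screening rule. The following sketch is illustrative only; the flag names on the debt record are assumptions for this example, not fields of any actual IRS system.

```python
# Hedged sketch of the six TOP-referral exclusions described in this report.
# A debt is referable only when none of the exclusion conditions applies.
# Flag names are hypothetical and used purely for illustration.

EXCLUSIONS = (
    "notification_incomplete",          # (1) statutory notices not yet sent
    "in_bankruptcy_or_litigation",      # (2) bankruptcy or other litigation
    "installment_or_offer_in_compromise",  # (3) repayment plan or OIC request
    "financial_hardship",               # (4) IRS hardship determination
    "amended_return_pending",           # (5) amended return being filed
    "special_circumstance",             # (6) e.g., a criminal investigation
)

def referable_to_top(debt: dict) -> bool:
    """A debt record is referable when no exclusion flag is set."""
    return not any(debt.get(flag, False) for flag in EXCLUSIONS)

def select_referrals(debts):
    """Filter a batch of debt records down to those sent to TOP."""
    return [d for d in debts if referable_to_top(d)]
```

The design point mirrors the report’s argument: because every exclusion reflects a taxpayer who is resolving (or cannot yet be pursued for) the debt, the records that survive this filter are precisely those where the taxpayer has not shown a willingness to resolve the debt.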
Independent verification of self-reported information is a key detection and monitoring component of our agency’s fraud-prevention framework and is a fraud-control best practice. See, for example, GAO, Recovery Act: Tax Debtors Have Received FHA Mortgage Insurance and First-Time Homebuyer Credits, GAO-12-592 (Washington, D.C.: May 29, 2012); Service-Disabled Veteran-Owned Small Business Program: Governmentwide Fraud Prevention Control Weaknesses Leave Program Vulnerable to Fraud and Abuse, but VA Has Made Progress in Improving Its Verification Process, GAO-12-443T (Washington, D.C.: Feb. 7, 2012); and Energy Star Program: Covert Testing Shows the Energy Star Program Certification Process Is Vulnerable to Fraud and Abuse, GAO-10-470 (Washington, D.C.: Mar. 5, 2010). Routinely obtaining federal debt information from the Department of the Treasury would allow investigative agencies to conduct this independent validation. It would also allow agencies to monitor current clearance holders to identify tax debt accrued after the initial clearance has been approved, without having to wait until the reinvestigation. Additionally, we found that some individuals misrepresented the nature of their tax debt to investigators and adjudicators. Reliance on self-reporting and on the relatively time-consuming and resource-intensive investigative interviewing process presents vulnerabilities that additional detection mechanisms could help mitigate while also expediting the security-clearance process. A mechanism such as TOP may provide an opportunity for federal agencies to improve their identification of federal debts, including tax debts, owed by security-clearance applicants. Enhancing federal agencies’ access to tax-debt information for the purpose of both investigating and adjudicating security-clearance applicants, as well as ongoing monitoring of current clearance holders’ tax-debt status, would better position agencies to make fully informed decisions about eligibility.
This could include further exploration, through the existing working group, of routinely accessing TOP, or otherwise developing a legislative proposal, in consultation with Congress, to authorize access to tax-debt information. Conclusions Complete and accurate information on the tax-debt status of those applying for federal security clearances is important in helping limit potential vulnerabilities associated with granting clearances to those who might represent a security risk. Additional mechanisms to help investigative agencies access this information could help federal agencies apply the adjudicative guidelines, which call for weighing an individual’s federal tax debt as it relates to the individual’s financial and personal conduct when making security-clearance determinations. OPM and ODNI are currently overseeing several efforts to improve the investigative and adjudication process, including the development of a working group to explore options for establishing an automated process for reviewing federal tax compliance. As part of this effort, it would be useful to explore the feasibility of investigative agencies routinely obtaining tax-debt information from the Department of the Treasury, both for investigating and adjudicating clearance applicants and for ongoing monitoring of current clearance holders’ tax-debt status. Doing so could help determine how, if at all, mechanisms such as TOP could be leveraged to gain access to this information, enhance OPM’s ability to conduct investigations, and improve federal agencies’ ability to assess clearance eligibility. If these methods are found to be impractical, developing a legislative proposal, in consultation with Congress, to authorize access to tax-debt information could address existing legal barriers to such information.
Recommendation for Executive Action We recommend that, as part of its working group, the Director of National Intelligence, as the Security Executive Agent, in consultation with OPM and the Department of the Treasury, evaluate the feasibility of federal agencies routinely obtaining federal debt information from the Department of the Treasury’s TOP system, or a similar automated mechanism that includes federal taxes, for the purposes of investigating and adjudicating clearance applicants, as well as for ongoing monitoring of current clearance holders’ tax-debt status. If this is found to be impractical, ODNI should consider whether an exception to section 6103 is advisable and, if so, develop a legislative proposal, in consultation with Congress, to authorize access to tax-debt information. Agency Comments and Our Evaluation We provided a draft copy of this report to DHS, DOE, the IRS, ODNI, OPM, State, and the Department of the Treasury’s Fiscal Service for their review. Letters from DHS, ODNI, and OPM are reprinted in appendixes IV, V, and VI. Both ODNI and OPM concurred with our recommendation. In its response, ODNI stated that it will recommend that the working group consider routine access of TOP for purposes of investigating, adjudicating, and monitoring security-clearance holders and applicants. This action will likely address the recommendation we proposed. If the working group determines this action is not feasible, ODNI may want to consider drafting a legislative proposal to authorize access to tax-debt information. In addition, DHS, the IRS, and OPM provided technical comments on our draft, which we incorporated as appropriate. In e-mails received on August 12, 2013, August 9, 2013, and August 8, 2013, officials from DOE, State, and the Department of the Treasury’s Fiscal Service, respectively, said that they did not have any comments on the draft report. 
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Homeland Security, the Secretary of Energy, the Director of National Intelligence, the Director of OPM, the Secretary of State, and the Secretary of the Treasury. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6722 or LordS@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Appendix I: Federal Investigative Standards Office of Personnel Management (OPM) investigators, other federal investigators, or contract investigators conduct security-clearance investigations using government-wide standards. OPM conducts background investigations for the majority of federal employees to determine their suitability for federal employment. OPM also conducts background investigations as part of the security-clearance process. For all these investigations, information that applicants provide on electronic applications is checked against numerous databases. Many investigation types include credit and criminal-history checks, while top-secret investigations also include citizenship, public-record, and spouse checks, as well as reference interviews and an in-person Enhanced Subject Interview to gain insight into an applicant’s character. Although it is not standard, the Enhanced Subject Interview can also be triggered for lower-level investigations if an investigation contains issues that need to be resolved in accordance with the Federal Investigative Standards. Table 1 highlights the investigative components generally associated with suitability investigations and with the secret and top-secret clearance levels.
Appendix II: Revised Adjudicative Guidelines for Determining Eligibility for Access to Classified Information In making determinations of eligibility for security clearances, the national-security adjudicative guidelines require adjudicators to consider (1) guidelines covering 13 specific areas of concern; (2) adverse conditions or conduct that could raise a security concern and factors that may mitigate (alleviate) the condition for each guideline; and (3) general factors related to the whole person. First, the guidelines state that eligibility determinations require an overall common-sense judgment based upon careful consideration of the following 13 guidelines in the context of the whole person: allegiance to the United States; foreign influence, such as having a family member who is a citizen of a foreign country; foreign preference, such as performing military service for a foreign country; sexual behavior; personal conduct, such as deliberately concealing or falsifying relevant facts when completing a security questionnaire; financial considerations; alcohol consumption; drug involvement; emotional, mental, and personality disorders; criminal conduct; security violations; outside activities, such as providing service to or being employed by a foreign country; and misuse of information-technology systems. Second, for each of these 13 areas of concern, the guidelines specify (1) numerous significant adverse conditions or conduct that could raise a security concern and may disqualify an individual from obtaining a security clearance; and (2) mitigating factors that could allay those security concerns, even when serious, and permit granting a clearance. For example, the financial-considerations guideline states that individuals could be denied security clearances on the basis of having a history of not meeting financial obligations.
However, this security concern could be mitigated if one or more of the following factors were present: the behavior was not recent, resulted from factors largely beyond the person’s control (such as loss of employment), or was addressed through counseling. Third, the adjudicator should evaluate the relevance of an individual’s overall conduct by considering the following general factors: the nature, extent, and seriousness of the conduct; the circumstances surrounding the conduct, to include knowledgeable participation; the frequency and recency of the conduct; the individual’s age and maturity at the time of the conduct; the voluntariness of participation; the presence or absence of rehabilitation and other pertinent behavioral changes; the motivation for the conduct; the potential for pressure, coercion, exploitation, or duress; and the likelihood of continuation or recurrence. When the personnel-security investigation uncovers no adverse security conditions, the adjudicator’s task is fairly straightforward because there is no adverse condition to consider. Appendix III: Scope and Methodology Our objectives were to determine (1) how many individuals with unpaid federal taxes, if any, are in the Office of Personnel Management (OPM) security-clearance database and the magnitude of any unpaid federal tax debt, and (2) the extent to which federal agencies have mechanisms to detect unpaid tax debt during the security-clearance approval process. To determine the magnitude of unpaid federal taxes owed by individuals approved for a security clearance, we obtained and analyzed OPM data on individuals eligible for a secret or top-secret security clearance due to a favorable adjudication, either during an initial investigation or a reinvestigation, from April 1, 2006, to December 31, 2011. Our review did not include confidential clearance holders or public-trust positions. Department of Energy (DOE) and U.S.
Nuclear Regulatory Commission (NRC) “Q” and “L” clearances are equivalent to the top-secret and secret clearances. Thus, for the purposes of our report, we treated “Q” and “L” clearances issued by DOE and NRC as top-secret and secret clearances, respectively. We identified the OPM Central Verification System (CVS) database as the appropriate data source for our analysis after meeting with OPM officials and discussing the types of data available. We used this time frame because, prior to April 1, 2006, recording the date a clearance was granted was not required, and that date was therefore not consistently available for analysis. OPM provided us with an extract of the OPM database that included information only on executive-branch, non-Department of Defense (DOD), and non-intelligence-community employees and contractors who were eligible for a clearance during our time frame. The OPM CVS database does not maintain information on the denial of security clearances on the basis of an individual’s nonpayment of federal taxes. Thus, we were not able to determine the number of individuals who were denied security clearances for this reason. To determine the extent to which individuals eligible for a security clearance had unpaid federal taxes, we used the taxpayer identification number (TIN) as a unique identifier and electronically matched the Internal Revenue Service’s (IRS) tax-debt data to the OPM data of individuals eligible for a security clearance. Specifically, we used the IRS Unpaid Assessment file as of June 30, 2012, to match against the OPM CVS data. The IRS Unpaid Assessment file used for our analysis contains all tax modules that were unpaid as of June 30, 2012. The June 30, 2012, file was used because it contained the most recent unpaid-assessment information at the time we conducted our analysis.
To avoid overestimating the amount owed and to capture only significant unpaid federal taxes, we established a minimum threshold and excluded from our analysis tax debts meeting two criteria: (1) unpaid federal taxes the IRS classified as compliance assessments or memo accounts for financial reporting, and (2) recipients with total unpaid federal taxes of $100 or less. We excluded compliance assessments and memo accounts because these taxes have neither been agreed to by the taxpayers nor affirmed by the court, or could be invalid or duplicative of other taxes already reported. We excluded tax debts of $100 or less because the IRS considers them de minimis amounts. For the purposes of our engagement, we included individuals from the Business Master File (BMF) and Non-Master File (NMF) who were an exact Social Security number (SSN)/TIN match, as well as an exact name match, with the OPM CVS data. We included only exact matches to ensure that IRS records were reliably linked to the correct individuals in the OPM CVS database. We did not use the Spouse Individual Master File (IMF) in the magnitude analysis because we were unable to determine which agency each spouse was affiliated with on the basis of the information available in the IRS and OPM data, and because the debt of a spouse may not have any effect on the security-clearance adjudication determination. As a result, the amounts we report may be understated. Using these criteria, we identified about 8,400 individuals eligible for security clearances who had unpaid federal tax debt as of June 30, 2012. Our final estimate of tax debt does include some debt that is covered under an active IRS installment plan or is beyond normal statutory time limits for debt collection.
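The exact-match and exclusion logic described above can be sketched in a few lines of code. The following is a minimal illustration only; the field names and record layouts are hypothetical and do not reflect the actual structure of the IRS Unpaid Assessment file or the OPM CVS extract.

```python
# Illustrative sketch of the record-matching approach described above.
# Field names ("tin", "name", "amount", etc.) are hypothetical.

def significant_debts(opm_records, irs_records):
    """Match IRS debt records to clearance holders by exact TIN and name,
    then drop debts excluded as insignificant or unreliable."""
    # Index IRS records by (TIN, name) so matching requires both fields
    # to agree exactly, as in the analysis described above.
    irs_by_key = {}
    for rec in irs_records:
        irs_by_key.setdefault((rec["tin"], rec["name"]), []).append(rec)

    matches = {}
    for person in opm_records:
        debts = irs_by_key.get((person["tin"], person["name"]), [])
        # Exclude compliance assessments and memo accounts, which may be
        # invalid or duplicative of other taxes already reported.
        valid = [d for d in debts
                 if not d["compliance_assessment"]
                 and not d["memo_account"]]
        total = sum(d["amount"] for d in valid)
        # Exclude de minimis total debts of $100 or less.
        if total > 100:
            matches[person["tin"]] = total
    return matches
```

A person whose only remaining debt after the exclusions is $100 or less would not appear in the result, mirroring the thresholds described in the methodology.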
Our analysis determined the magnitude of known unpaid federal taxes owed by individuals in the OPM database and cannot be generalized to individuals who were granted eligibility for security clearances by DOD, the legislative branch, or the intelligence community. To determine whether existing investigative and adjudication mechanisms detect unpaid tax debt during the security-clearance process and possible additional improvements to federal tax-debt detection mechanisms, we interviewed knowledgeable officials from the Office of the Director of National Intelligence (ODNI), which serves as Security Executive Agent for the federal government and has authority and responsibility over security-clearance protocols, and from OPM, which conducts security-clearance investigations for most federal agencies. In addition, we conducted interviews with officials from the Department of Homeland Security (DHS), Department of Energy (DOE), and Department of State (State). We selected DHS, DOE, and State because these agencies had the highest number of security clearances adjudicated from April 1, 2006, to December 31, 2011, collectively representing over 50 percent of the clearances granted in OPM’s CVS database, and also represented over 50 percent of the tax debt owed. We also reviewed and analyzed applicable laws, regulations, and ODNI guidance, as well as applicable policies and procedures for OPM, DHS, DOE, and State regarding the investigation and adjudication of security clearances. Finally, we interviewed officials from the Department of the Treasury’s Bureau of the Fiscal Service (Fiscal Service) and the IRS to obtain their views on any initiatives and barriers in sharing tax-debt information. We compared verification mechanisms with the fraud control framework we developed in our past work and other fraud control best practices. We also used Federal Investigative Standards (see app.
I) and the Adjudicative Guidelines for Determining Eligibility for Access to Classified Information (see app. II) to evaluate the current mechanisms used to identify and evaluate unpaid federal tax debt as part of the security-clearance process. To develop case-study examples, we identified from the above analyses a nonprobability sample of 13 security-clearance holders from DHS, DOE, and State who had federal tax debt for detailed review. We stratified our matches using the following characteristics: (1) adjudicating agency; (2) amount of unpaid federal taxes in the IRS Unpaid Assessment database as of June 30, 2012; (3) type of security clearance granted or approved, clearance date, and dollar amount of unpaid tax debt; and (4) whether tax debt was recorded prior to or after the security-clearance grant date. We selected 12 cases from these four strata. Additionally, we randomly selected one case with indications that the IRS was assessing a trust-fund recovery penalty. Once the nonprobability sample was selected, we requested all investigative and adjudicative case-file notes from the adjudicating agency; IRS notes, detailed account transcripts, and other records from the IRS; and security-clearance files from DHS, DOE, and State for these 13 individuals. For 2 of the 13 individuals who had accrued debt only after favorable adjudication, we reviewed the adjudicative files to determine whether the agency was aware of the federal tax debt through its reinvestigation. The clearance files and IRS paperwork were systematically reviewed using a structured data-collection instrument, looking at whether the tax debt was revealed in the investigative or adjudicative processes and, if so, how it was handled in the adjudication. Each case file was independently reviewed by two analysts. After completion of each case review, the files were compared to identify discrepancies.
Potential discrepancies between case-file reviews were resolved by a third-party reviewer. These cases were selected to illustrate individuals with unpaid federal tax debt who held security clearances, but the results cannot be generalized beyond the cases presented. Data-Reliability Assessment To assess the reliability of record-level IRS unpaid assessments data, we used the work we perform during our annual audit of the IRS’s financial statements and interviewed knowledgeable IRS officials about any data-reliability issues. While our financial-statement audits have identified some data-reliability problems associated with tracing the IRS’s tax records to source records, including errors and delays in recording taxpayer information and payments, we determined that the data were sufficiently reliable to address this report’s objectives. To assess the reliability of record-level OPM security-clearance data, we reviewed documentation from OPM, interviewed OPM officials who administer these information systems, and performed electronic testing of required elements. We determined that the data were sufficiently reliable to identify the individuals eligible for clearances with unpaid federal tax debt and to select cases to illustrate potential vulnerabilities. We conducted this performance audit from November 2011 to September 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our audit findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix IV: Comments from the Department of Homeland Security Appendix V: Comments from the Office of the Director of National Intelligence Appendix VI: Comments from the Office of Personnel Management Related GAO Products Personnel Security Clearances: Further Actions Needed to Improve the Process and Realize Efficiencies. GAO-13-728T. Washington, D.C.: June 20, 2013. Medicaid: Providers in Three States with Unpaid Federal Taxes Received over $6 Billion in Medicaid Reimbursements. GAO-12-857. Washington, D.C.: July 27, 2012. Security Clearances: Agencies Need Clearly Defined Policy for Determining Civilian Position Requirements. GAO-12-800. Washington, D.C.: July 12, 2012. Personnel Security Clearances: Continuing Leadership and Attention Can Enhance Momentum Gained from Reform Effort. GAO-12-815T. Washington, D.C.: June 21, 2012. Recovery Act: Tax Debtors Have Received FHA Mortgage Insurance and First-Time Homebuyer Credits. GAO-12-592. Washington, D.C.: May 29, 2012. Recovery Act: Thousands of Recovery Act Contract and Grant Recipients Owe Hundreds of Millions in Federal Taxes. GAO-11-485. Washington, D.C.: April 28, 2011. Federal Tax Collection: Potential for Using Passport Issuance to Increase Collection of Unpaid Taxes. GAO-11-272. Washington, D.C.: March 10, 2011. Medicare: Thousands of Medicare Providers Abuse the Federal Tax System. GAO-08-618. Washington, D.C.: June 13, 2008. Tax Compliance: Federal Grant and Direct Assistance Recipients Who Abuse the Federal Tax System. GAO-08-31. Washington, D.C.: November 16, 2007. Medicaid: Thousands of Medicaid Providers Abuse the Federal Tax System. GAO-08-17. Washington, D.C.: November 14, 2007. Tax Compliance: Thousands of Organizations Exempt from Federal Income Tax Owe Nearly $1 Billion in Payroll and Other Taxes. GAO-07-1090T. Washington, D.C.: July 24, 2007. Tax Compliance: Thousands of Organizations Exempt from Federal Income Tax Owe Nearly $1 Billion in Payroll and Other Taxes. GAO-07-563. 
Washington, D.C.: June 29, 2007. Tax Compliance: Thousands of Federal Contractors Abuse the Federal Tax System. GAO-07-742T. Washington, D.C.: April 19, 2007. Medicare: Thousands of Medicare Part B Providers Abuse the Federal Tax System. GAO-07-587T. Washington, D.C.: March 20, 2007.
As of October 2012, about 4.9 million civilian and military employees and contractors held a security clearance. Federal laws do not prohibit an individual with unpaid federal taxes from holding a security clearance, but tax debt poses a potential vulnerability. GAO was requested to review tax-debt detection during the clearance process. GAO examined (1) the number of individuals with unpaid federal taxes, if any, in the OPM security-clearance database and the magnitude of any federal tax debt, and (2) the extent to which federal agencies have mechanisms to detect unpaid tax debt during the security-clearance-approval process. GAO compared OPM's security-clearance information to the IRS's known tax debts. To provide examples, GAO conducted a detailed review of IRS and security adjudication files of 13 individuals selected, in part, on the basis of tax debt amount and type of security clearance. GAO also reviewed relevant laws and regulations and interviewed officials from the Office of the Director of National Intelligence (ODNI), Treasury, OPM, and three selected federal agencies that represented more than half of the clearance holders in OPM's database. About 8,400 individuals adjudicated as eligible for a security clearance from April 2006 to December 2011 owed approximately $85 million in unpaid federal taxes, as of June 2012. This represents about 3.4 percent of the civilian executive-branch employees and contractors who were favorably adjudicated during that period. GAO found that about 4,700 of the approximately 8,400 individuals were federal employees while the remainder was largely federal contractors. Additionally, about 4,200 of these individuals had a repayment plan with the Internal Revenue Service (IRS) to pay back their debt. For this review, GAO used clearance data from the Office of Personnel Management (OPM) Central Verification System (CVS) database. 
The CVS database does not maintain information on the denial of security clearances on the basis of an individual's nonpayment of federal taxes. Thus, GAO was not able to determine the number of individuals who were denied security clearances for this reason. Federal agencies have established mechanisms aimed at identifying unpaid federal tax debt of security-clearance applicants; however, these mechanisms have limitations. To detect federal tax debt for clearance applicants, federal investigators primarily rely on two methods: (1) applicants self-reporting tax debts; and (2) validation techniques, such as the use of credit reports or in-person interviews. Each of these methods has shortcomings in detecting unpaid federal tax debts of clearance applicants. For example, credit reports are the primary method for identifying tax debt that was not self-reported, but these reports only contain information on tax debts for which the IRS filed a lien on the debtor's property. According to GAO's analysis, 5 percent of the 8,400 delinquent taxpayers who were favorably adjudicated as eligible for security clearances had a tax lien filed on them. Additionally, federal agencies generally do not routinely review federal tax compliance of clearance holders. There is no process to detect unpaid federal tax debts accrued after an individual has been favorably adjudicated unless it is self-reported, reported by a security manager due to garnishment of wages, or discovered during a clearance renewal or upgrade. GAO's analysis found that 6,300 individuals (approximately 75 percent) accrued their tax debt after approval of the security clearance. Additional mechanisms that provide large-scale, routine detection of federal tax debt could improve federal agencies' ability to detect tax debts owed by security-clearance applicants and current clearance holders, but statutory privacy protections limit access to this information. 
Federal agencies may obtain information on federal tax debts directly from the IRS if the applicant provides consent. In addition, federal agencies do not have access to a mechanism such as the one the Department of the Treasury (Treasury) uses to collect delinquent federal debts. Such information could help federal agencies perform routine, automated checks of security-clearance applicants to determine whether they have unpaid federal debts, without compromising statutory privacy protections. Such a mechanism could also be used to help monitor current clearance holders' tax-debt status. Gaining routine access to this federal debt information, if feasible, would better position federal agencies to identify relevant financial and personal-conduct information to make objective assessments of eligibility for security-clearance applicants and continued eligibility of current clearance holders.
Background The IT acquisition process extends from the initial determination of needs to the final implementation of the acquired product. This report addresses three IT acquisition stages—presolicitation, solicitation, and source selection. During the presolicitation phase, contracting personnel develop specifications, prepare the acquisition plan, and apply for and receive a delegation of procurement authority (DPA). In the solicitation phase, agencies prepare and release the solicitation, respond to vendor questions, and close the solicitation. During source selection, agencies evaluate the proposals, may negotiate with vendors, call for best and final offers, and award the contract. Several laws and regulations, including the Brooks Act and the Warner Amendment, the Competition in Contracting Act (CICA), the Federal Acquisition Streamlining Act, the Federal Acquisition Regulation (FAR), and the Federal Information Resources Management Regulation (FIRMR), govern these three phases. The Brooks Act, 40 U.S.C. 759, gives GSA exclusive authority to procure IT and the power to delegate this authority by issuing a DPA to other federal agencies. GSA has given agencies a blanket delegation, usually $2.5 million, below which they can procure IT resources without requesting a specific DPA from GSA. GSA will raise or lower this blanket authority based on an agency’s history of acquiring IT. For any acquisition above the blanket delegation, agencies must obtain a DPA by submitting an agency procurement request (APR) to GSA. The Warner Amendment, 40 U.S.C. Section 759(a)(3)(C), exempts certain Defense IT procurements from the Brooks Act, and thus from the requirement to obtain procurement authority from GSA. These exempted procurements include those that support mission-critical, command and control, and intelligence activities. CICA, Public Law 98-369, establishes a policy of full and open competition for all federal procurements. 
The act requires that contracts with limited competition be formally justified. Such contracts are often awarded as sole source, compatibility limited, or limited to specific make and model. CICA also sets forth mechanisms for vendors to protest the government’s procurement actions. Through CICA, protests may be made to the agency, GSA’s Board of Contract Appeals (GSBCA) for IT resources, the General Accounting Office (GAO), the U.S. District Courts, or the U.S. Court of Federal Claims. Questions or objections having to do with certain small business or labor matters are reviewed by the Small Business Administration or the Department of Labor. The Federal Acquisition Streamlining Act, Public Law 103-355, was passed in 1994 to streamline the way the government buys goods and services, including IT. Among its provisions, the act authorized the simplified acquisition threshold at $100,000, established pilot programs to test alternative and innovative procurement techniques, promoted electronic commerce, encouraged the use of off-the-shelf purchases, and required contracting personnel to conduct more extensive debriefings to losing offerors. The executive branch is currently developing regulations to implement the act. The FAR is the body of procurement regulations that all executive agencies must follow when acquiring different types of supplies and services, including IT. The FIRMR, used in conjunction with the FAR, applies specifically to the acquisition, management, and use of IT resources. The following questions and answers provide information on the average time taken to complete the various steps in the IT acquisition process, as well as on other related factors. How Long Does It Take to Award Different Dollar Value IT Contracts? The average time to award IT contracts increases as the size of the contract increases. In our sample of four dollar thresholds, contract award time frames ranged from 158 to 669 days, as shown in figure 1 below. 
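The award time frames above are calendar-day intervals from the earliest contracting-office action to contract award, averaged within each dollar stratum. That measure amounts to simple date arithmetic, sketched below. The strata bounds are those used in this report; the sample contracts and function name are hypothetical illustrations, not the actual survey data or analysis code.

```python
from datetime import date

# Dollar strata used in the report; None means no upper bound.
STRATA = [(25_000, 250_000), (250_000, 2_500_000),
          (2_500_000, 25_000_000), (25_000_000, None)]

def average_award_days(contracts):
    """contracts: list of (dollar_value, start_date, award_date) tuples,
    where start_date is the earliest contracting action (acquisition plan,
    purchase requisition, or presolicitation notice) and award_date is
    the contract award. Returns the arithmetic mean of calendar days
    per stratum (None for strata with no contracts)."""
    days_by_stratum = {bounds: [] for bounds in STRATA}
    for value, start, award in contracts:
        for low, high in STRATA:
            if value >= low and (high is None or value < high):
                days_by_stratum[(low, high)].append((award - start).days)
    return {bounds: (sum(d) / len(d) if d else None)
            for bounds, d in days_by_stratum.items()}
```

Subtracting two `date` objects yields calendar days directly, which matches the report's convention of presenting time in calendar days rather than business days.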
What Acquisition Steps Are Involved and Did Any One Consistently Take the Most Time? Contracts under $250,000 generally went through five basic steps: purchase requisition, presolicitation notice, release of the solicitation, closing of the solicitation, and contract award. For those between $250,000 and $25 million, one or two additional steps—the acquisition plan and best and final offer—were sometimes added. Those that were $25 million and above generally had the first five steps plus four additional steps: the APR, DPA, acquisition plan, and best and final offer. No one step consistently took the most time. The acquisition steps that took the longest also varied by dollar strata. Specifically, contracts from $25,000 to $250,000 and $2.5 million to $25 million took the longest average time from the presolicitation notice to the release of the solicitation (57 days (36 percent) and 98 days (29 percent), respectively). Procurements of $25 million and above took comparatively more time from the solicitation closing to receipt of the best and final offers (186 days or 28 percent). This latter period would generally include the time required to evaluate proposals and conduct discussions in these more complex procurements. Figure 2 provides details on the average time taken in awarding IT contracts for the four dollar strata. What Are the Two Most Common Types of Procurements and How Long Does It Take to Award Them? As noted earlier, CICA establishes a policy of full and open competition for all procurements unless an exception is specifically justified. In our sample, the two most common types of procurements were (1) sole source and (2) full and open competition. These two types made up 81 percent of the total number of contracts awarded and accounted for 83 percent of the total value of contracts in our sample. Sole source contracts from $25,000 to $250,000 averaged 150 days to award, while such contracts of $25 million and more took an average of 295 days. 
Fully competitive contracts in the lowest dollar strata averaged 184 days, and it took an average of 708 days to award them in the highest dollar strata. Figure 3 lists the time frames for both types of contracts by dollar strata. Are the Number of Contracts and Total Dollar Value Consistent Across All Dollar Strata? No. The smallest dollar strata had the largest number of contracts and the highest strata contained the most contract dollars. Table 1 shows the number of contracts and total dollars across each of the four dollar strata. What Major IT Resources Are Being Acquired and Are There Any Differences in the Time Taken to Acquire Them? Hardware, software, maintenance, and support services are the major types of IT resources being acquired. Although the four types of resources were acquired in about the same amount of time—5 and 8 months respectively for the two lowest dollar strata—some differences occurred at the two highest dollar strata. As shown in figure 4 below, contracts from $2.5 million to $25 million, where software was the primary purchase, took 579 days compared to about 357 and 375 days, respectively, for contracts that were primarily hardware and maintenance, and 284 days for support services. For contracts $25 million and over, hardware took an average of 780 days compared to about 565 days for both software and support services, and 338 days for maintenance. How Do Bid Protests Affect Contract Time? Protested contracts took longer to award than nonprotested contracts in every dollar strata. The increased time was most significant in the lowest and highest dollar strata. Protested contracts from $25,000 to $250,000 took, on average, 50 days longer (31 percent) than nonprotested contracts, and protested contracts of $25 million and more took, on average, 222 days longer (41 percent) than nonprotested contracts. Protested contracts took longer to award for a variety of reasons. 
In addition to the time taken to resolve the protests, other factors, such as competition type and evaluation method, can also increase the contract award time. As shown in figures 3 and 7, full and open contracts take longer to award than sole source contracts and best value contracts take longer to award than lowest cost contracts. Of the protested contracts that were $25 million and more, 85 percent were full and open and none were sole source. Also, almost 70 percent of those protested contracts used the best value evaluation method. Furthermore, large dollar contracts are much more likely to be protested. For example, while 44 percent of contracts $25 million and more were protested, only 3 percent of the small dollar contracts were protested. Figure 5 shows the differences in days to award protested and nonprotested contracts. How Long Does It Take to Award Contracts That Have Amended Solicitations? Contracting officers issue amendments to add, change, or clarify some aspect of the contract solicitation including the requirements, evaluation criteria, or closing date. For all dollar strata, contracts that had amended solicitations took longer to award than contracts without amended solicitations—ranging from an average of 45 days longer at the smallest dollar strata to 406 days longer at the largest dollar strata. Figure 6 shows the average days to award contracts with amended and unamended solicitations for each dollar strata. How Long Does It Take to Award Lowest Cost and Best Value Procurements? Lowest cost contracts are awarded to the offeror with the lowest-priced technically acceptable proposal. In best value contracts, the government may consider other factors, such as technical merit, along with cost in making the award. The time to award both lowest cost and best value contracts increased with the size of the dollar strata. 
Lowest cost contracts from $25,000 to $250,000 averaged 173 days to award, and such contracts $25 million and more took an average of 567 days. Best value contracts from $25,000 to $250,000 averaged 226 days compared to 777 days for contracts that were $25 million and more. Figure 7 shows how long it takes to award both types of contracts in each of the four dollar strata. How Long Does the DPA Process Take? The average time to receive a DPA increased as the amount of the contract increased. It took agencies about 30 to 40 days to receive DPAs for contracts under $2.5 million and 60 to 90 days for contracts $2.5 million and more. We calculated the time to receive a DPA from the day the agency’s contracting office approves the APR to the day GSA issues the DPA. (For some agencies, this period includes the time to send the APR from the bureau through the department level before sending it to GSA. We included this period because the contracting office must wait for GSA approval before proceeding with the contract.) GSA calculates the time to issue a DPA from the time it determines that an agency’s APR is acceptable for review until it issues the DPA. Under this method, GSA’s records show that GSA issued DPAs in an average of 13 days. Figure 8 provides our data showing how many days it takes agencies to receive DPAs by dollar strata. Do Warner Amendment Contracts Take Less Time Than Brooks Act Contracts and How Frequently Does DOD Use Them? Small dollar Warner Amendment contracts showed no appreciable time savings over Brooks Act contracts. However, large dollar Warner Amendment contracts were awarded an average of 6 months faster than Brooks Act contracts. DOD used the Warner exemption from the Brooks Act in over half of its IT procurements, which accounted for 26 percent of the total dollar value of its contracts. Table 5 lists the percent of contracts using the Warner exemption and the amount of time required to award DOD contracts. 
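The DPA interval described above (from the day the contracting office approves the APR to the day GSA issues the DPA) is likewise a calendar-day difference between two milestones. A minimal sketch, with hypothetical dates:

```python
from datetime import date

def days_to_receive_dpa(apr_approved: date, dpa_issued: date) -> int:
    """Calendar days from the contracting office's approval of the APR
    to GSA's issuance of the DPA, as measured in this report. (GSA's own
    records instead start the clock when it accepts the APR for review,
    which is why GSA reports a much shorter average.)"""
    return (dpa_issued - apr_approved).days

# Hypothetical example: an APR approved March 1 and a DPA issued
# April 5 of the same year spans 35 calendar days.
```

The comment notes the measurement difference discussed above: the same procurement can show roughly 30 to 90 days under the report's measure but an average of 13 days under GSA's, purely because the two start the clock at different milestones.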
How Will the Federal Acquisition Streamlining Act Affect IT Acquisitions, and Can Our Survey Data Be Used as a Baseline to Measure Related Progress? While all aspects of the act will affect IT procurements, provisions that have the most potential for significantly expediting IT procurements include (1) establishing the simplified acquisition threshold at $100,000, (2) developing and implementing the Federal Acquisition Computer Network, which will provide a governmentwide electronic commerce capability, (3) revising requirements related to purchases of commercial products and services, and (4) requiring contracting officers to more extensively debrief losing offerors. All four of these provisions can help reduce the time it takes to procure IT. Our data could be used as a baseline to measure improvements in this area. For example, our baseline data show that contracts in the increased threshold range for simplified acquisitions ($100,000) took 149 days to award. Also, once the acquisition computer network is in place, our data could be used as a baseline to measure related improvements at all dollar levels. What Additional Research Is Suggested by Our Data? As addressed below, our data identified several factors that may lengthen acquisition time. These factors warrant further study to determine if they can be streamlined or eliminated without compromising the acquisition, and if they can be applied across all government agencies. What processes lengthen the time for high-dollar acquisitions and can they be reduced or eliminated? Can any acquisition steps be eliminated without sacrificing quality or critical federal objectives such as preferential treatment for small and disadvantaged businesses? Of the four major IT resources—hardware, software, maintenance, and support services—why does it take so much longer to acquire software in the $2.5 million to $25 million strata and hardware in the $25 million or more strata? 
Is the additional time and cost taken to award best value contracts warranted? Do they provide commensurate benefits to the government? Does GSA’s DPA process add value to procurements and do agencies receive the DPAs in a timely manner? Does the extra time taken to award Brooks Act contracts (compared to those that are Warner exempt) result in higher quality products and services? We also grouped our data according to the six agencies that awarded the most IT contracts (see appendix III). This information can be further researched to determine if these agencies have unique processes and procedures that can be adopted by other agencies. We discussed our methodology and the resulting individual agency statistics with agency officials from Army, Navy, Air Force, DOD, HHS, NASA, Treasury, and GSA. These officials agreed with our methodology and told us that the information contained in this report would be useful in helping them identify areas for further research and improvement. As arranged with your office, we are sending copies of this report to the Chairmen of the Senate Committee on Governmental Affairs and that committee’s Subcommittee on Oversight of Government Management, the Chairmen and Ranking Minority Members of the House Committee on Government Reform and Oversight, and the Chairmen and Ranking Minority Members of the Senate and House Committees on Appropriations; the Director of the Office of Management and Budget; the Administrators of the General Services Administration and the National Aeronautics and Space Administration; the Secretaries of Defense, the Army, the Navy, the Air Force, Health and Human Services, and Treasury; and other interested parties. We will also make copies available to others upon request. Should you have any questions about this report, please contact me at (202) 512-6413. Other major contributors are listed in appendix IV.
Objectives, Scope, and Methodology As you requested, we compiled data about the federal IT procurement process. In this report, we agreed to provide statistical data about the IT procurement process, including information about the time taken (1) to acquire IT within various dollar strata, (2) to complete sole source and full and open procurements, and (3) by DOD to acquire Brooks Act and Warner exempt contracts. Our data were limited to statistical information about the IT procurement process. We made no attempt to determine the appropriate amount of time required to complete the procurement steps or identify problems that may have lengthened the procurement time, since these issues were beyond the scope of this report. To obtain the IT procurement information, we developed and mailed questionnaires to the contracting personnel of 35 federal agencies. This mailing was based on a stratified random sample of IT contract award notices published in the Commerce Business Daily from January 1990 through September 1992—the most current data when we drew the sample. To ensure that we obtained factual data, we designed the questionnaire to require data from contract files and requested that the individual most familiar with the precontract award process, such as the procuring contracting officer, complete the questionnaire. The procurement process covered in our questionnaire began with the acquisition plan, purchase requisition, or presolicitation notice, whichever came first, and ended when the contract was awarded. We stratified the sample to include the six agencies with the most contract award notices during our sampling period—Army, Navy, Air Force, the Department of Health and Human Services (HHS), the National Aeronautics and Space Administration (NASA), and the Department of the Treasury. These agencies constituted 75 percent of the contracts in the total sample population. We also included the categories of other DOD agencies and all other civilian agencies.
We also stratified our sample by four dollar ranges: $25,000 to $250,000; $250,000 to $2,500,000; $2,500,000 to $25,000,000; and $25,000,000 and above. Unless noted otherwise, we presented the data as the arithmetic mean and the time as calendar days. To measure the time to conduct the acquisition process, we used the earliest point the contracting office was involved as the starting date and the date of contract award as the closing date. The data reflect discrete procurement events, such as receiving the purchase requisition or issuing the solicitation. We received an 81 percent response rate, which consisted of 2,720 contracts worth almost $16 billion. The sample was conducted at the 95 percent confidence level, with a maximum precision of plus or minus 5 percent at the agency and dollar level. Non-IT procurements, modifications to existing contracts, duplicate submissions, and interagency agreements were excluded from the sample, and joint awards were consolidated and considered as one procurement. To develop the questionnaire and identify pertinent procurement questions, we analyzed the GAO report Information Technology: A Model to Help Managers Decrease Acquisition Risks, as well as government procurement regulations such as the Federal Acquisition Regulation and the Federal Information Resources Management Regulation. To ensure that information for the questionnaire was available, we tested a draft questionnaire at three agencies using agency contract files. In addition, contracting officers, officials from the Office of Management and Budget’s Office of Information and Regulatory Affairs and Office of Federal Procurement Policy, and officials from GSA reviewed and commented on the questionnaire.
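The sampling parameters described above (a 95 percent confidence level with a maximum precision of plus or minus 5 percent) can be illustrated with the standard sample-size formula for a proportion, adjusted with a finite population correction. The report does not publish its actual design calculations, so the function below is a generic textbook sketch, and the stratum population counts in the example are purely hypothetical.

```python
import math

def sample_size(population, precision=0.05, z=1.96, p=0.5):
    """Estimate the sample size needed to achieve the stated precision
    (default +/- 5 percent) at 95 percent confidence (z = 1.96), using
    the conservative p = 0.5 assumption and a finite population
    correction. This is the standard survey-design formula, not the
    report's own calculation."""
    n0 = (z ** 2) * p * (1 - p) / precision ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)           # finite population correction
    return math.ceil(n)

# Hypothetical stratum sizes for illustration only; the report's actual
# sampling frame came from Commerce Business Daily award notices.
for stratum, pop in [("Army", 900), ("Navy", 700), ("All other civilian", 400)]:
    print(stratum, sample_size(pop))
```

The finite population correction shows why smaller strata (such as a single agency within one dollar range) can meet the stated precision with proportionally larger sampling fractions, which is consistent with the report's per-stratum precision caveats.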
We conducted nine pretests at the Department of the Army, Defense Information Systems Agency, National Institutes of Health, Library of Congress, Internal Revenue Service, Army Corps of Engineers, Environmental Protection Agency, Department of the Navy, and Department of Agriculture to ensure that contracting officers could understand and answer the questions and that the questions applied to all types of IT procurements. We used the pretest results to finalize our questionnaire. We verified the data in five ways. First, we reviewed the returned questionnaires and called agency contract personnel in those instances where data in the questionnaire were not provided or the answers were unclear. Second, where the questionnaire data appeared on exception reports produced from the database, we examined the questionnaire and, if appropriate, called agency contract personnel to clarify. Third, we verified that an appropriate official had completed the questionnaire. Fourth, we verified 45 questionnaires selected by random sample by comparing the data given in the questionnaire with substantiating documents from the agency’s contract file. The accuracy rate of this verification was 96.4 percent. Fifth, we verified the protest decision data with attorneys from GAO and GSBCA. We discussed our methodology and the resulting individual agency statistics with officials from each of the six stratified agencies, DOD, and GSA. These officials agreed with our methodology and told us that the information contained in this report would be useful in helping them identify areas for further research and improvement. We conducted our work from April 1993 through January 1995, in accordance with generally accepted government auditing standards. IT Acquisition Statistics for Individual Agencies This appendix contains the individual statistics of the agencies we stratified in our sample.
We have included these statistics to provide agencies with (1) information to identify areas for further research and (2) a baseline from which to measure any improvements. The data in this appendix are statistically valid and are stratified by both agency and dollar amount. All of the data in these charts represent a precision of no more than plus or minus 15 percent, unless noted otherwise. The chart below lists the number of contracts in our sample for each agency and dollar stratum.
Figure II.2: Sole Source Contracts—Days to Award Contract by Agency. The $250K to $2.5M category represents a maximum precision of plus or minus 22 percent.
Figure II.3: Full and Open Competition Contracts—Days to Award Contract by Agency
Figure II.4: IT Hardware Contracts—Days to Award Contract by Agency
Figure II.5: IT Software Contracts—Days to Award Contract by Agency. The $250K to $2.5M dollar stratum represents a maximum precision of plus or minus 16 percent.
Figure II.6: IT Maintenance Contracts—Days to Award Contract by Agency
Figure II.8: Best Value Evaluation Method—Days to Award Contract by Agency
Figure II.9: GSA’s DPA Process—Days to Receive a DPA by Agency
Survey of Federal Information Technology Procurement
Major Contributors to This Report
Accounting and Information Management Division, Washington, D.C.
Program Evaluation and Methodology Division: Stuart Kaufman, Questionnaire Methodologist
Pursuant to a congressional request, GAO reviewed federal information technology (IT) acquisitions to determine how various factors affect the IT acquisition process. GAO found that: (1) the average time taken to complete an IT acquisition varies according to the procurement type, dollar value, and whether a bid protest is filed; (2) hardware, software, maintenance, and support services are the major types of IT resources being acquired; (3) contracts under $250,000 take an average of 158 days to award, while contracts $25 million or more take 669 days to award; (4) most procurements are awarded either as sole source or full and open competition contracts; (5) the average time taken to award IT contracts increases as the contract value increases; (6) protested contracts take longer to award than nonprotested contracts in every dollar stratum; (7) large dollar contracts are much more likely to be protested; and (8) other factors such as competition type and evaluation methods also increase contract award time.
Background Several federal legislative and executive provisions support preparation for and response to emergency situations. The Robert T. Stafford Disaster Relief and Emergency Assistance Act (the Stafford Act) primarily establishes the programs and processes for the federal government to provide major disaster and emergency assistance to state, local, and tribal governments, individuals, and qualified private nonprofit organizations. FEMA, within DHS, has responsibility for administering the provisions of the Stafford Act. Besides using these federal resources, states affected by a catastrophic disaster can also turn to other states for assistance in obtaining surge capacity—the ability to draw on additional resources, such as personnel and equipment, needed to respond to and recover from the incident. One way of sharing personnel and equipment across state lines is through the use of the Emergency Management Assistance Compact, an interstate compact that provides a legal and administrative framework for managing such emergency requests. The compact includes all 50 states and the District of Columbia. We have ongoing work examining how the Emergency Management Assistance Compact has been used in disasters and how its effectiveness could be enhanced and expect to report within a few months. The Homeland Security Act of 2002 required the newly established DHS to develop a comprehensive National Incident Management System (NIMS). NIMS is intended to provide a consistent framework for incident management at all jurisdictional levels regardless of the cause, size, or complexity of the situation and to define the roles and responsibilities of federal, state, and local governments, and various first responder disciplines at each level during an emergency event. It also prescribes interoperable communications systems and preparedness before an incident happens, including planning, training, and exercises. 
The act required DHS to consolidate existing federal government emergency response plans into a single, integrated and coordinated national response plan. DHS issued the National Response Plan (NRP), intended to be an all-discipline, all-hazards plan establishing a single, comprehensive framework for the management of domestic incidents where federal involvement is necessary. The NRP, operating within the framework of NIMS, provides the structure and mechanisms for national-level policy and operational direction for domestic incident management. The NRP also includes a Catastrophic Incident Annex, which describes an accelerated, proactive national response to catastrophic incidents. Developing the capabilities needed for large-scale disasters is part of an overall national preparedness effort that should integrate and define what needs to be done and where, how it should be done, and how well it should be done—that is, according to what standards. The principal national documents designed to address each of these are, respectively, the National Response Plan, the National Incident Management System, and the National Preparedness Goal. The interim National Preparedness Goal, required by Homeland Security Presidential Directive 8, is particularly important for determining what capabilities are needed, especially for a catastrophic disaster. All states and urban areas are to align existing preparedness strategies within the National Preparedness Goal’s eight national priorities. The December 2005 draft National Preparedness Goal defines both the 37 major capabilities that first responders should possess to prevent, protect from, respond to, and recover from a wide range of incidents and the most critical tasks associated with these capabilities. An inability to effectively perform these critical tasks would, by definition, have a detrimental effect on effective protection, prevention, response, and recovery capabilities.
A final National Preparedness Goal is expected to be released soon. As the subcommittee is aware, beginning in February 2006, reports by the House Select Bipartisan Committee to Investigate the Preparation for and Response to Hurricane Katrina, the Senate Homeland Security and Governmental Affairs Committee, the White House Homeland Security Council, the DHS Inspector General, and DHS and FEMA all identified a variety of failures and some strengths in the preparations for, response to, and initial recovery from Hurricane Katrina. Collectively, these reports, along with GAO’s various reports and testimonies, offered a number of specific recommendations for improving the nation’s ability to effectively prepare for and respond to catastrophic disasters. Table 1 contains the resulting reports and a brief description of their findings. Enhanced Leadership, Capabilities, and Accountability Controls Will Improve Emergency Management After FEMA became part of DHS in March 2003, its responsibilities were dispersed and redefined over time. FEMA continues to evolve within DHS as it implements the changes required by the Post-Katrina Reform Act, whose details are discussed later. Hurricane Katrina severely tested disaster management at the federal, state, and local levels and revealed weaknesses in the basic elements of preparing for, responding to, and recovering from any catastrophic disaster. Based on work done during the aftermath of Hurricane Katrina, we previously reported that DHS needs to more effectively coordinate disaster preparedness, response, and recovery efforts, particularly for catastrophic disasters in which the response capabilities of state and local governments are almost immediately overwhelmed.
Our analysis showed the need for (1) clearly defined and understood leadership roles and responsibilities; (2) the development of the necessary disaster capabilities; and (3) accountability systems that effectively balance the need for fast and flexible response against the need to prevent waste, fraud, and abuse. Leadership Is Critical to Prepare for, Respond to, and Recover from Catastrophic Disasters In preparing for, responding to, and recovering from any catastrophic disaster, the legal authorities, roles and responsibilities, and lines of authority at all levels of government must be clearly defined, effectively communicated, and well understood to facilitate rapid and effective decision making. Hurricane Katrina showed the need to improve leadership at all levels of government to better respond to a catastrophic disaster. For example, there were problems with roles and responsibilities under the NRP and ambiguities about both what constituted an incident of national significance to trigger the NRP and what constituted a catastrophic incident to trigger the proactive response of the NRP’s Catastrophic Incident Annex. On May 25, 2006, DHS released changes to the NRP regarding leadership issues, such as which situations require secretarial leadership; the process for declaring incidents of national significance; and the scope of the NRP and its Catastrophic Incident Annex. The revised NRP clearly states that the Secretary of Homeland Security, who reports directly to the President, is responsible for declaring and managing incidents of national significance, including catastrophic ones. At the time of Katrina, the supplement to the catastrophic incident annex, which provides more detail on implementing the annex, was still in draft. Subsequent to Katrina, DHS published the final supplement to the Catastrophic Incident Annex, dated August 2006. 
The White House Homeland Security Council report included 44 recommendations that were intended for quick implementation, of which 18 were focused on improving and clarifying the legal authorities, roles and responsibilities, and lines of authority. DHS has provided limited information on the status of its implementation of the White House recommendations, although it has reported actions taken on some issues raised in the White House Homeland Security Council report and in other reports. For example, DHS has pre-designated Principal Federal Officials and Federal Coordinating Officers for regions and states at risk of hurricanes and described their respective roles in coordinating disaster response—which was a source of some confusion in the federal response to Hurricane Katrina. However, the changes may not have fully resolved the leadership issues regarding the roles of the Principal Federal Official and Federal Coordinating Officer. While the Secretary of Homeland Security may avoid conflicts by appointing a single individual to serve in both positions in nonterrorist incidents, confusion may persist if the Secretary of Homeland Security does not exercise this discretion to do so. Furthermore, this discretion does not exist for terrorist incidents, and the revised NRP does not specifically provide a rationale for this limitation. Congress also raised concerns in 2006 that FEMA’s performance problems during the response to Hurricane Katrina may have stemmed from its organizational placement and its budgetary relationship within DHS. In May 2006, we noted that organizational changes alone, while potentially important, were not likely to adequately address the underlying systemic conditions that resulted in FEMA’s performance problems. We noted that a number of factors other than organizational placement may be more important to FEMA’s success in responding to and recovering from future disasters, including catastrophic ones.
Conditions underlying FEMA’s performance during Hurricane Katrina involved the experience and training of DHS or FEMA leadership; the clarity of FEMA’s mission and related responsibilities and authorities to achieve mission performance expectations; the adequacy of its human, financial, and technological resources; and the effectiveness of planning, exercises, and related partnerships. The Post-Katrina Reform Act includes provisions that address each of these issues. Enhanced Capabilities for Catastrophic Response and Recovery Are Needed Numerous reports and our own work suggest that the substantial resources and capabilities marshaled by state, local, and federal governments and nongovernmental organizations were insufficient to meet the immediate challenges posed by the unprecedented degree of damage and the number of victims caused by Hurricanes Katrina and Rita. Developing the capabilities needed for catastrophic disasters should be part of an overall national preparedness effort that is designed to integrate and define what needs to be done and where, how it should be done, and how well it should be done—that is, according to what standards. The principal national documents designed to address each of these are, respectively, the National Response Plan, the National Incident Management System, and the National Preparedness Goal. The nation’s experience with Hurricanes Katrina and Rita reinforces some of the questions surrounding the adequacy of capabilities in the context of a catastrophic disaster—particularly in the areas of (1) situational assessment and awareness, (2) emergency communications, (3) evacuations, (4) search and rescue, (5) logistics, and (6) mass care and sheltering. Capabilities are built upon the appropriate combination of people, skills, processes, and assets.
Ensuring that needed capabilities are available requires effective planning and coordination in conjunction with training and exercises in which the capabilities are realistically tested and problems identified and subsequently addressed in partnership with other federal, state, and local stakeholders. In recent work on FEMA management of day-to-day operations, we found that although shifting resources caused by its transition to DHS created challenges for FEMA, the agency’s management of existing resources compounded these problems. FEMA lacks some of the basic management tools that help an agency respond to changing circumstances. Most notably, FEMA lacks a strategic workforce plan and related human capital strategies—such as succession planning or a coordinated training effort. Such tools are integral to managing resources, as they enable an agency to define staffing levels, identify the critical skills needed to achieve its mission, and eliminate or mitigate gaps between current and future skills and competencies. FEMA officials have said they are beginning to address these and other basic organizational management issues. To this end, FEMA has commissioned studies of 18 areas, whose final reports and recommendations are due later this spring. In identifying available capabilities, FEMA needs to identify and assess the capabilities that exist across the federal government and outside the federal government. For example, in a recent report on housing assistance, we found that the National Response Plan’s annex covering temporary shelter and housing (Emergency Support Function 6, or ESF-6) clearly described the overall responsibilities of the two primary responsible agencies—FEMA and the Red Cross. However, the responsibilities described for the support agencies—the Departments of Agriculture, Defense, Housing and Urban Development (HUD), and Veterans Affairs—did not, and still do not, fully reflect their capabilities.
Further, these support agencies had not, at the time of our work, developed fact sheets describing their roles and responsibilities, notification and activation procedures, and agency-specific authorities, as called for by ESF-6 operating procedures. We recommended that the support agencies propose revisions to the NRP that fully reflect each respective support agency’s capabilities for providing temporary housing under ESF-6, develop the needed fact sheets, and develop operational plans that provide details on how their respective agencies will meet their temporary housing responsibilities. The Departments of Defense, HUD, Treasury, Veterans Affairs, and Agriculture concurred with our recommendations. The Red Cross did not comment on our report or recommendations. As part of a housing task force, FEMA is currently exploring ways of incorporating housing assistance offered by private sector organizations. Further, recent GAO work found that actions are needed to clarify the responsibilities and increase preparedness for evacuations, especially for transportation-disadvantaged populations. We found that state and local governments are generally not well prepared to evacuate transportation-disadvantaged populations (i.e., in planning, training, and conducting exercises), but some states and localities have begun to address challenges and barriers. For example, in June 2006 DHS reported that only about 10 percent of the state and about 12 percent of the urban area emergency plans it reviewed adequately addressed evacuating these populations. Steps being taken by some such governments include collaboration with social service and transportation providers and transportation planning organizations—some of which are Department of Transportation (DOT) grantees and stakeholders—to determine transportation needs and develop agreements for emergency use of drivers and vehicles.
The federal government provides evacuation assistance to state and local governments, but gaps in this assistance have hindered many of these governments’ ability to sufficiently prepare for evacuations. This includes the lack of any specific requirement to plan, train, and conduct exercises for the evacuation of transportation-disadvantaged populations as well as gaps in the usefulness of DHS’s guidance. We recommended that DHS clarify federal agencies’ roles and responsibilities for providing evacuation assistance when state and local governments are overwhelmed. DHS should require state and local evacuation preparedness for transportation-disadvantaged populations and improve information to assist these governments. DOT should encourage its grant recipients to share information to assist in evacuation preparedness for these populations. DOT and DHS agreed to consider our recommendations, and DHS stated it has partly implemented some of them. Finally, the use of a risk management methodology—integrating systematic concern for risk into the normal cycle of agency decision making and implementation—should be central to assessing the risk for catastrophic disasters, guiding the development of national capabilities and the expertise that can be used to respond effectively to catastrophic disasters. As I stated in my testimony to this subcommittee on applying risk management principles to guide federal investments, risk management should be viewed strategically, that is, with a view that goes beyond assessing what the risks are, to the integration of risk into annual budget and program review cycles. Balance Needed between Quick Provision of Assistance and Ensuring Accountability to Protect against Waste, Fraud, and Abuse Controls and accountability mechanisms help to ensure that resources are used appropriately.
Nevertheless, during a catastrophic disaster, decision makers struggle with the tension between implementing controls and accountability mechanisms and the demand for rapid response and recovery assistance. On one hand, our work uncovered many examples where quick action could not occur due to procedures that required extensive, time-consuming processes, delaying the delivery of vital supplies and other assistance. On the other hand, we also found examples where FEMA’s processes for assisting disaster victims left the federal government vulnerable to fraud and the abuse of expedited assistance payments. We estimated that through February 2006, FEMA made about $600 million to $1.4 billion in improper and potentially fraudulent payments to applicants who used invalid information to apply for expedited cash assistance. DHS and FEMA have reported a number of actions that are to be in effect for the 2007 hurricane season so that federal recovery programs will have more capacity to rapidly handle a catastrophic incident but also provide accountability. Examples include significantly increasing the quantity of prepositioned supplies, such as food, ice, and water; placing global positioning systems on supply trucks to track their location and better manage the delivery of supplies; creating an enhanced phone system for victim assistance applications that can handle up to 200,000 calls per day; and improving computer systems and processes for verifying the eligibility of those applying for assistance. Effective implementation of these and other planned improvements will be critical to achieving their intended outcomes. Finally, catastrophic disasters not only require a different magnitude of capabilities and resources for effective response, they may also require more flexible policies and operating procedures.
In a catastrophe, streamlining, simplifying, and expediting decision making should quickly replace “business as usual” and unquestioned adherence to long-standing policies and operating procedures used in normal situations for providing relief to disaster victims. At the same time, controls and accountability mechanisms must be sufficient to provide the documentation needed for expense reimbursement and reasonable assurance that resources have been used legally and for the purposes intended. The federal government also will be a major partner in the longer-term recovery and rebuilding of communities along the Gulf Coast. Among the areas requiring federal attention are (1) assessing the environmental hazards created by the storms; (2) rebuilding and strengthening the levees; (3) providing assistance to school districts that have enrolled large numbers of evacuee children; and (4) building the capacity to address demand in multiple victims assistance programs such as financial assistance or loans for repair and replacement of housing and the rebuilding of businesses. GAO Recommendations Stress Changes in Leadership, Capabilities, and Accountability In line with a recommendation we made following Hurricane Andrew, the nation's most destructive hurricane prior to Katrina, we recommended that Congress give federal agencies explicit authority to take actions to prepare for all types of catastrophic disasters when there is warning. 
We also recommended that DHS (1) rigorously retest, train, and exercise its recent clarification of the roles, responsibilities, and lines of authority for all levels of leadership, implementing changes needed to remedy identified coordination problems; (2) direct that the NRP base plan and its supporting Catastrophic Incident Annex be supported by more robust and detailed operational implementation plans; (3) provide guidance and direction for federal, state, and local planning, training, and exercises to ensure such activities fully support preparedness, response, and recovery responsibilities on a jurisdictional and regional basis; (4) take a lead in monitoring federal agencies’ efforts to prepare to meet their responsibilities under the NRP and the interim National Preparedness Goal; and (5) use a risk management approach in deciding whether and how to invest finite resources in specific capabilities for a catastrophic disaster. As I mentioned earlier, DHS has made revisions to the NRP and released the final Supplement to the Catastrophic Incident Annex—both designed to further clarify federal roles and responsibilities and relationships among federal, state, and local governments and responders. However, these revisions have not been tested in a major disaster. FEMA and DHS have also announced a number of actions intended to improve readiness and response based on our work and the work of congressional committees and the Administration. DHS is also currently reorganizing FEMA as required by the Post-Katrina Reform Act. However, there is little information available on the extent to which these changes are operational, and they also have not yet been tested in a major disaster.
In its desire to provide assistance quickly following Hurricane Katrina, DHS was unable to keep pace with the need to confirm the eligibility of victims for disaster assistance or to ensure in all cases that contracts for response and recovery services provided for fair and reasonable prices. We recommended that DHS create accountability systems that effectively balance the need for fast and flexible response against the need to prevent waste, fraud, and abuse. We also recommended that DHS provide guidance on advance procurement practices (precontracting) and procedures for those federal agencies with roles and responsibilities under the NRP. These federal agencies could then better manage disaster-related procurement and establish an assessment process to monitor agencies’ continuous planning efforts for their disaster-related procurement needs and the maintenance of capabilities. For example, we identified a number of emergency response practices in the public and private sectors that provide insight into how the federal government can better manage its disaster-related procurements. These practices include developing knowledge of contractor capabilities and prices, establishing vendor relationships before a disaster occurs, and establishing a scalable operations plan to adjust the level of capacity to match the response with the need. Post-Katrina Reform Act Changes The Post-Katrina Reform Act responded to the findings and recommendations in the various reports examining the preparation for and response to Hurricane Katrina. Most of the Act's provisions become effective as of March 31, 2007, while others became effective upon the Act's enactment on October 4, 2006. While keeping FEMA within DHS, the act enhances FEMA's responsibilities and its autonomy within DHS.
Under the act, for example, FEMA’s mission is to reduce the loss of life and property and protect the nation from all hazards, including natural disasters, acts of terrorism, and other man-made disasters. To accomplish this mission, FEMA is to lead and support the nation in a risk-based, comprehensive emergency management system of preparedness, protection, response, recovery, and mitigation. Under the Act, the FEMA Administrator reports directly to the Secretary of DHS; FEMA is now a distinct entity within DHS; and the Secretary of DHS can no longer substantially or significantly reduce the authorities, responsibilities, or functions of FEMA or the capability to perform them unless authorized by subsequent legislation. FEMA will absorb the functions of DHS’s Preparedness Directorate (with some exceptions). The statute establishes 10 regional offices with specified responsibilities. The statute also establishes a National Integration Center responsible for the ongoing management and maintenance of the NIMS and NRP. The Post-Katrina Reform Act also includes provisions for other areas, such as evacuation plans and exercises and addressing the needs of individuals with disabilities. In addition, the act includes several provisions to strengthen the management and capability of FEMA’s workforce. For example, the statute calls for a strategic human capital plan to shape and improve FEMA’s workforce, authorizes recruitment and retention bonuses, and establishes a Surge Capacity Force. Most of the organizational changes become effective as of March 31, 2007. Others, such as the increase in organizational autonomy for FEMA and establishment of the National Integration Center, became effective upon enactment of the Post-Katrina Reform Act on October 4, 2006.
DHS Reports Planned Changes Consistent with the Legislation

On January 18, 2007, DHS provided Congress a notice of implementation of the Post-Katrina Reform Act reorganization requirements and additional organizational changes made under the Homeland Security Act of 2002. All of the changes, according to DHS, will become effective on March 31, 2007. According to DHS, the department completed a thorough assessment of FEMA's internal structure to incorporate lessons learned from Hurricane Katrina and systematically integrate new and existing assets and responsibilities within FEMA. The department's core structural conclusions are described in the letter. DHS will transfer the following DHS offices and divisions to FEMA: the United States Fire Administration, the Office of Grants and Training, the Chemical Stockpile Emergency Preparedness Division, the Radiological Emergency Preparedness Program, the Office of National Capital Region Coordination, and the Office of State and Local Government Coordination. DHS officials say that they will carefully manage all financial, organizational, and personnel actions necessary to transfer these organizations by March 31, 2007. They also said they will establish several other organizational elements, such as a logistics management division, a disaster assistance division, and a disaster operations division. In addition, FEMA will expand its regional office structure, in part by establishing in each region a Regional Advisory Council and at least one Regional Strike Team. With the recent appointment of the director for region III, FEMA officials noted that for the first time in recent memory there will be no acting regional directors and all 10 FEMA regional offices will be headed by experienced professionals. Further, FEMA will include a new national preparedness directorate intended to consolidate FEMA's strategic preparedness assets from existing FEMA programs and certain legacy Preparedness Directorate programs.
The National Preparedness Directorate will contain functions related to preparedness doctrine, policy, and contingency planning. It also will include DHS's exercise coordination and evaluation program, emergency management training, and hazard mitigation associated with the chemical stockpile and radiological emergency preparedness programs.

Effective Implementation of the Post-Katrina Reform Act's Provisions Should Respond to Many Concerns

Effective implementation of the Post-Katrina Reform Act's organizational changes and related roles and responsibilities, in addition to those changes already undertaken by DHS, should address many of our emergency management observations and recommendations. As noted earlier, our analysis in the aftermath of Hurricane Katrina showed the need for (1) clearly defined and understood leadership roles and responsibilities; (2) the development of the necessary disaster capabilities; and (3) accountability systems that effectively balance the need for fast and flexible response against the need to prevent waste, fraud, and abuse. The statute appears to strengthen leadership roles and responsibilities. For example, the statute clarifies that the FEMA Administrator is to act as the principal emergency management adviser to the President, the Homeland Security Council, and the Secretary of DHS and to provide recommendations directly to Congress after informing the Secretary of DHS. The incident management responsibilities and roles of the National Integration Center are now clear. The Secretary of DHS must ensure that the NRP provides for a clear chain of command to lead and coordinate the federal response to any natural disaster, act of terrorism, or other man-made disaster. The law also establishes qualifications that appointees must meet. For example, the FEMA Administrator must have a demonstrated ability in and knowledge of emergency management and homeland security and 5 years of executive leadership and management experience.
Many provisions are designed to enhance preparedness and response. For example, the statute requires the President to establish a national preparedness goal and national preparedness system. The national preparedness system includes a broad range of preparedness activities, including utilizing target capabilities and preparedness priorities, training and exercises, comprehensive assessment systems, and reporting requirements. To illustrate, the FEMA Administrator is to carry out a national training program to implement, and a national exercise program to test and evaluate, the National Preparedness Goal, NIMS, NRP, and other related plans and strategies. In addition, FEMA is to partner with nonfederal entities to build a national emergency management system. States must develop plans that include catastrophic incident annexes modeled after the NRP annex in order to be eligible for FEMA emergency preparedness grants. The state annexes must be developed in consultation with local officials, including regional commissions. FEMA regional administrators are to foster the development of mutual aid agreements between states. FEMA must enter into a memorandum of understanding with certain nonfederal entities to collaborate on developing standards for deployment capabilities, including credentialing of personnel and typing of resources. In addition, FEMA must implement several other capabilities, such as (1) developing a logistics system providing real-time visibility of items at each point throughout the logistics system, (2) establishing a prepositioned equipment program, and (3) establishing emergency support and response teams.

FEMA Taking Steps to Address Logistics Problems

In the wake of Hurricane Katrina, FEMA's performance in the logistics area came under harsh criticism; within days, FEMA became overwhelmed and essentially asked the military to take over much of the logistics mission.
In the Post-Katrina Reform Act, Congress required FEMA to make its logistics system more flexible and responsive. Since the legislation, FEMA has been working to address its provisions, but it is too early to evaluate these efforts. We recently examined FEMA logistics issues, taking a broad approach and identifying five areas necessary for an effective logistics system. Below, we describe these five areas along with FEMA's ongoing actions to address each.

Requirements: FEMA does not yet have operational plans in place to address disaster scenarios, nor does it have detailed information on states' capabilities and resources. As a result, FEMA does not have information from these sources to define what and how much it needs to stock. FEMA is developing a concept of operations to underpin its logistics program and told us that it is working to develop detailed plans and the associated stockage requirements. However, until FEMA has solid requirements based on detailed plans, the agency will be unable to assess its true preparedness.

Inventory management: FEMA's system accounts for the location, quantity, and types of supplies, but the ability to track supplies in transit is limited. FEMA has several efforts under way to improve transportation and tracking of supplies and equipment, such as expanding its new system for in-transit visibility from the two test regions to all FEMA regions.

Facilities: FEMA maintains nine logistics centers and dozens of smaller storage facilities across the country. However, it has little assurance that these are the right number of facilities located in the right places. FEMA officials told us they are in the process of determining the number of storage facilities the agency needs and where they should be located.

Distribution: Problems persist with FEMA's distribution system, including poor transportation planning, unreliable contractors, and lack of distribution sites.
FEMA officials described initiatives under way that should mitigate some of the problems with contractors, and FEMA has been working with the Department of Defense and the Department of Transportation to improve access to transportation when needed.

People: Human capital issues are pervasive in FEMA, including the logistics area. The agency has a small core of permanent staff, supplemented with contract and temporary disaster assistance staff. However, FEMA's recent retirements and losses of staff, and its difficulty in hiring permanent staff and contractors, have created staffing shortfalls and a lack of capability. According to a January 2007 study commissioned by FEMA, there are significant shortfalls in the staffing and skill sets of full-time employees, particularly in the planning, advanced contracting, and relationship management skills needed to fulfill the disaster logistics mission. FEMA has recently hired a logistics coordinator and is making a concerted effort to hire qualified staff for the entire agency, including logistics. In short, FEMA is taking many actions to transition its logistics program to be more proactive, flexible, and responsive. While these and other initiatives hold promise for improving FEMA's logistics capabilities, it will be years before they are fully implemented and operational.

Post-Katrina Reform Act Provisions Also Respond to Accountability Issues

Statutory changes establish more controls and accountability mechanisms. For example, the Post-Katrina Reform Act requires FEMA to develop and implement a contracting system that maximizes the use of advance contracting to the extent practical and cost-effective. The Secretary of DHS is required to promulgate regulations designed to limit the excessive use of subcontractors and subcontracting tiers. The Secretary of DHS is also required to promulgate regulations that limit certain noncompetitive contracts to 150 days, unless exceptional circumstances apply. Oversight funding is specified.
FEMA may dedicate up to one percent of funding for agency mission assignments as oversight funds. The FEMA Administrator must develop and maintain internal management controls of FEMA disaster assistance programs and develop and implement a training program to prevent fraud, waste, and abuse of federal funds in response to or recovery from a disaster. Verification measures must be developed to identify eligible recipients of disaster relief assistance.

Several Disaster Management Issues Should Have Continued Congressional Attention

In November 2006, the Comptroller General wrote to the congressional leadership suggesting areas for congressional oversight. He suggested that one area needing fundamental reform and oversight was preparing for, responding to, recovering from, and rebuilding after catastrophic events. Recent events—notably Hurricane Katrina and the threat of an influenza pandemic—have illustrated the importance of ensuring a strategic and integrated approach to catastrophic disaster management. Disaster preparation and response that is well planned and coordinated can save lives and mitigate damage, and an effectively functioning insurance market can substantially reduce the government's exposure to post-catastrophe payouts. Lessons learned from past national emergencies provide an opportunity for Congress to look at actions that could mitigate the effects of potential catastrophic events. Similarly, the Comptroller General suggested that Congress could also consider how the federal government can work with other nations, other levels of government, and nonprofit and private sector organizations, such as the Red Cross and private insurers, to help ensure the nation is well prepared and recovers effectively. Given the billions of dollars dedicated to preparing for, responding to, recovering from, and rebuilding after catastrophic disasters, congressional oversight is critical. A comprehensive and in-depth oversight agenda would require long-term efforts.
Congress might consider starting with several specific areas for immediate oversight, such as (1) evaluating development and implementation of the National Preparedness System, including preparedness for an influenza pandemic; (2) assessing state and local capabilities and the use of federal grants in building and sustaining those capabilities; (3) examining regional and multistate planning and preparation; (4) determining the status of preparedness exercises; and (5) examining DHS policies regarding oversight assistance.

The National Preparedness System Is Key to Developing Disaster Capabilities

More immediate congressional attention might focus on evaluating the construction and effectiveness of the National Preparedness System, which is mandated under the Post-Katrina Reform Act. Under Homeland Security Presidential Directive-8, issued in December 2003, DHS was to coordinate the development of a national domestic all-hazards preparedness goal "to establish measurable readiness priorities and targets that appropriately balance the potential threat and magnitude of terrorist attacks and large scale natural or accidental disasters with the resources required to prevent, respond to, and recover from them." The goal was also to include readiness metrics and standards for preparedness assessments and strategies and a system for assessing the nation's overall preparedness to respond to major events. To implement the directive, DHS developed the National Preparedness Goal using 15 emergency event scenarios, 12 of which were terrorist related, with the remaining 3 addressing a major hurricane, a major earthquake, and an influenza pandemic. According to DHS's National Preparedness Guidance, the planning scenarios are intended to illustrate the scope and magnitude of large-scale, catastrophic emergency events for which the nation needs to be prepared and to form the basis for identifying the capabilities needed to respond to a wide range of large-scale emergency events.
The scenarios focused on the consequences that first responders would have to address. Some state and local officials and experts have questioned whether the scenarios were appropriate inputs for preparedness planning, particularly in terms of their plausibility and the emphasis on terrorist scenarios. Using the scenarios, and in consultation with federal, state, and local emergency response stakeholders, DHS developed a list of over 1,600 discrete tasks, of which 300 were identified as critical. DHS then identified 36 target capabilities to provide guidance to federal, state, and local first responders on the capabilities they need to develop and maintain. That list has since been refined, and DHS released a revised draft list of 37 capabilities in December 2005. Because no single jurisdiction or agency would be expected to perform every task, possession of a target capability could involve enhancing and maintaining local resources, ensuring access to regional and federal resources, or some combination of the two. However, DHS is still in the process of developing goals, requirements, and metrics for these capabilities and the National Preparedness Goal in light of the Hurricane Katrina experience. Several key components of the National Preparedness System defined in the Post-Katrina Reform Act—the National Preparedness Goal, target capabilities and preparedness priorities, and comprehensive assessment systems—should be closely examined. Prior to Hurricane Katrina, DHS had established seven priorities for enhancing national first responder preparedness, including, for example, implementing the NRP and NIMS; strengthening capabilities in information sharing and collaboration; and strengthening capabilities in medical surge and mass prophylaxis. Those seven priorities were incorporated into DHS’s fiscal year 2006 homeland security grant program (HSGP) guidance, which added an eighth priority that emphasized emergency operations and catastrophic planning. 
In the fiscal year 2007 HSGP guidance, DHS set two overarching priorities. DHS has focused the bulk of its available grant dollars on risk-based investment. In addition, the department has prioritized regional coordination and investment strategies that institutionalize regional security strategy integration. In addition to the two overarching priorities, the guidance also identified several others. These include (1) measuring progress in achieving the National Preparedness Goal, (2) integrating and synchronizing preparedness programs and activities, (3) developing and sustaining a statewide critical infrastructure/key resource protection program, (4) enabling information/intelligence fusion, (5) enhancing statewide communications interoperability, (6) strengthening preventative radiological/nuclear detection capabilities, and (7) enhancing catastrophic planning to address nationwide plan review results. Under the guidance, all fiscal year 2007 HSGP applicants will be required to submit an investment justification that provides background information, strategic objectives and priorities addressed, their funding/implementation plan, and the impact that each proposed investment (project) is anticipated to have.

The Particular Challenge of Preparing for an Influenza Pandemic

The possibility of an influenza pandemic is a real and significant threat to the nation. There is widespread agreement that it is not a question of if but when such a pandemic will occur. The issues associated with the preparation for and response to a pandemic flu are similar to those for any other type of disaster: clear leadership roles and responsibilities, authority, and coordination; risk management; realistic planning, training, and exercises; assessing and building the capacity needed to effectively respond and recover; effective information sharing and communication; and accountability for the effective use of resources. However, a pandemic poses some unique challenges.
Hurricanes, earthquakes, explosions, or bioterrorist incidents occur within a short period of time, perhaps a period of minutes, although such events can have long-term effects, as we have seen in the Gulf region following Hurricane Katrina. The immediate effects of such disasters are likely to affect specific locations or areas within the nation; the immediate damage is not nationwide. In contrast, an influenza pandemic is likely to occur in waves, each lasting 6 to 8 weeks, over a period of weeks or months, and to affect wide areas of the nation, perhaps the entire nation. Depending upon the severity of the pandemic, the number of deaths could be from 200,000 to 2 million. Seasonal influenza in the United States results in about 36,000 deaths annually. Successfully addressing the pandemic is also likely to require international coordination of detection and response. The Department of Health and Human Services estimates that during a severe pandemic, absenteeism may reach as much as 40 percent in an affected community because individuals are ill, caring for family members, or fear infection. Such absenteeism could affect our nation's economy, as businesses and governments face the challenge of continuing to provide essential services with reduced numbers of healthy workers. In addition, our nation's ability to respond effectively to hurricanes or other major disasters during a pandemic may also be diminished as first responders, health care workers, and others are infected or otherwise unable to perform their normal duties. Thus, the consequences of a pandemic are potentially widespread, and effective planning and response for such a disaster will require particularly close cooperation among all levels of government, the private sector, and individuals within the United States, as well as international cooperation.
We have engagements under way examining such issues as barriers to implementing the Department of Health and Human Services' National Pandemic Influenza Plan, the national strategy and framework for pandemic influenza, the Department of Defense's and Department of Agriculture's preparedness efforts and plans, public health and hospital preparedness, and U.S. efforts to improve global disease surveillance. We expect most of these reports to be issued by late summer 2007.

Our Knowledge of State and Local Efforts to Improve Their Capabilities Is Limited

Possible congressional oversight in the short term also might focus on state and local capabilities. As I testified before this subcommittee last month on applying risk management principles to guide federal investments, over the past 4 years DHS has provided about $14 billion in federal funding to states, localities, and territories through its HSGP grants. Remarkably, however, we know little about how states and localities finance their efforts in this area, how they have used their federal funds, and how they are assessing the effectiveness with which they spend those funds. Essentially, all levels of government are still struggling to define and act on the answers to basic, but hardly simple, questions about emergency preparedness and response: What is important (that is, what are our priorities)? How do we know what is important (e.g., risk assessments, performance standards)? How do we measure, attain, and sustain success? On what basis do we make necessary trade-offs, given finite resources? There are no simple, easy answers to these questions. The data available for answering them are incomplete and imperfect. We have better information and a better sense of what needs to be done for some types of major emergency events than for others.
For some natural disasters, such as regional wildfires and flooding, there is more experience and therefore a better basis on which to assess preparation and response efforts and identify gaps that need to be addressed. California has experience with earthquakes; Florida, with hurricanes. However, no one in the nation has experience with such potential catastrophes as a dirty bomb detonated in a major city. Although both the AIDS epidemic and SARS provide some related experience, there have been no recent pandemics that rapidly spread to thousands of people across the nation. A new feature in the fiscal year 2006 DHS homeland security grant guidance for the Urban Area Security Initiative (UASI) grants was that eligible recipients must provide an "investment justification" with their grant application. States were to use this justification to outline the implementation approaches for specific investments that would be used to achieve the initiatives outlined in their state Program and Capability Enhancement Plan. These plans were multiyear global program management plans for the entire state homeland security program that looked beyond federal homeland security grant programs and funding. The justifications must cover all funding requested through the DHS homeland security grant program. In the guidance, DHS noted that it would use a peer review process to evaluate grant applications on the basis of the effectiveness of a state's plan to address the priorities it has outlined and thereby reduce its overall risk. For fiscal year 2006, DHS implemented a competitive process to evaluate the anticipated effectiveness of proposed homeland security investments. For fiscal year 2007, DHS will continue to use the risk and effectiveness assessments to inform final funding decisions, although changes have been made to make the grant allocation process more transparent and more easily understood.
DHS officials have said that they cannot yet assess how effective the actual investments from grant funds are in enhancing preparedness and mitigating risk because they do not yet have the metrics to do so.

Regional and Multistate Planning and Preparation Should Be Robust

Through its grant guidance, DHS has encouraged regional and multistate planning and preparation. Planning and assistance have largely been focused on single jurisdictions and their immediately adjacent neighbors. However, well-documented problems with the abilities of first responders from multiple jurisdictions to communicate at the site of an incident and the potential for large-scale natural and terrorist disasters have generated a debate on the extent to which first responders should be focusing their planning and preparation on a regional and multigovernmental basis. As I mentioned earlier, an overarching national priority for the National Preparedness Goal is embracing regional approaches to building, sustaining, and sharing capabilities at all levels of government. All HSGP applications are to reflect regional coordination and show an investment strategy that institutionalizes regional security strategy integration. However, it is not known to what extent regional and multistate planning has progressed and is effective. Our limited regional work indicated that there are challenges in planning. Our early work addressing the Office of National Capital Region Coordination (ONCRC) and National Capital Region (NCR) strategic planning reported that the ONCRC and the NCR faced interrelated challenges in managing federal funds in a way that maximizes the increase in first responder capacities and preparedness while minimizing inefficiency and unnecessary duplication of expenditures. One of these challenges included developing a coordinated regionwide plan for establishing first responder performance goals, needs, and priorities, and assessing the benefits of expenditures in enhancing first responder capabilities.
In subsequent work on National Capital Region strategic planning, we highlighted areas that needed strengthening in the Region's planning, specifically improving the substance of the strategic plan to guide decision makers. For example, additional information could have been provided regarding the type, nature, scope, or timing of planned goals, objectives, and initiatives; performance expectations and measures; designation of priority initiatives to meet regional risk and needed capabilities; lead organizations for initiative implementation; resources and investments; and operational commitment.

Exercises Must Be Carefully Planned and Deployed and Capture Lessons Learned

Our work examining the preparation for and response to Hurricane Katrina highlighted the importance of realistic exercises to test and refine assumptions, capabilities, and operational procedures; build on the strengths; and shore up the limitations revealed by objective assessments of the exercises. The Post-Katrina Reform Act mandates a national exercise program, and training and exercises are also included as a component of the National Preparedness System. With almost any skill and capability, experience and practice enhance proficiency. For first responders, exercises—especially of the type or magnitude of events for which there is little actual experience—are essential for developing skills and identifying what works well and what needs further improvement. Major emergency incidents, particularly catastrophic ones, by definition require the coordinated actions of personnel from many first responder disciplines and all levels of government, nonprofit organizations, and the private sector. It is difficult to overemphasize the importance of effective interdisciplinary, intergovernmental planning, training, and exercises in developing the coordination and skills needed for effective response.
For exercises to be effective in identifying both strengths and areas needing attention, it is important that they be realistic, designed to test and stress the system, involve all key persons who would be involved in responding to an actual event, and be followed by honest and realistic assessments that result in action plans that are implemented. In addition to relevant first responders, exercise participants should include, depending upon the scope and nature of the exercise, mayors, governors, and state and local emergency managers who would be responsible for such things as determining if and when to declare a mandatory evacuation or ask for federal assistance.

DHS Has Provided Limited Transparency for Its Management or Operational Decisions

Congressional oversight in the short term might include DHS's policies regarding oversight assistance. The Comptroller General has testified that DHS has not been transparent in its efforts to strengthen its management areas and mission functions. While much of its sensitive work needs to be guarded from improper disclosure, DHS has not been receptive toward oversight. Delays in providing Congress and us with access to various documents and officials have impeded our work. We need to be able to independently assure ourselves and Congress that DHS has implemented many of our past recommendations or has taken other corrective actions to address the challenges we identified. However, DHS has not made its management or operational decisions transparent enough so that Congress can be sure it is effectively, efficiently, and economically using the billions of dollars in funding it receives annually, and is providing the levels of security called for in numerous legislative requirements and presidential directives.
Concluding Observations

Since September 11, 2001, the federal government has awarded billions of dollars in grants and assistance to state and local governments to assist in strengthening emergency management capabilities. DHS has developed several key policy documents, including the NRP, NIMS, and the National Preparedness Goal, to guide federal, state, and local efforts. The aftermath of the 2005 hurricane season resulted in a reassessment of the federal role in preparing for and responding to catastrophic events. The studies and reports of the past year—by Congress, the White House Homeland Security Council, the DHS IG, DHS and FEMA, GAO, and others—have provided a number of insights into the strengths and limitations of the nation's capacity to respond to catastrophic disasters and resulted in a number of recommendations for strengthening that capacity. Collectively, these studies and reports paint a complex mosaic of the challenges that the nation—federal, state, local, and tribal governments; nongovernmental entities; the private sector; and individual citizens—faces in preparing for, responding to, and recovering from catastrophic disasters. The Post-Katrina Reform Act directs many organizational, mission, and policy changes to respond to these findings and challenges. Assessing, developing, attaining, and sustaining needed emergency preparedness, response, and recovery capabilities is a difficult task that requires sustained leadership and the coordinated efforts of many stakeholders from a variety of first responder disciplines, levels of government, and nongovernmental entities. There is no "silver bullet," no easy formula. It is also a task that is never done; it requires continuing commitment, leadership, and trade-offs, because circumstances change and we will never have the funds to do everything we might like to do. That concludes my statement, and I would be pleased to respond to any questions you and subcommittee members may have.
Contacts and Staff Acknowledgments

For further information about this statement, please contact William O. Jenkins Jr., Director, Homeland Security and Justice Issues, at (202) 512-8777 or jenkinswo@gao.gov. In addition to the contact named above, the following individuals from GAO's Homeland Security and Justice Team made major contributions to this testimony: Sharon Caudle, Assistant Director; John Vocino, Analyst-in-Charge; and Richard Ascarate, Communications Analyst. The following individuals from GAO's Defense Capabilities and Management Team also made major contributions to this testimony: John Pendelton, Director; and Ann Borseth, Assistant Director.

Appendix I: Related GAO Products

Disaster Assistance: Better Planning Needed for Housing Victims of Catastrophic Disasters. GAO-07-88. Washington, D.C.: February 28, 2007.

Homeland Security: Management and Programmatic Challenges Facing the Department of Homeland Security. GAO-07-452T. Washington, D.C.: February 7, 2007.

Hurricanes Katrina and Rita Disaster Relief: Prevention Is the Key to Minimizing Fraud, Waste, and Abuse in Recovery Efforts. GAO-07-418T. Washington, D.C.: January 29, 2007.

Homeland Security: Applying Risk Management Principles to Guide Federal Investments. GAO-07-386T. Washington, D.C.: February 7, 2007.

Budget Issues: FEMA Needs Adequate Data, Plans, and Systems to Effectively Manage Resources for Day-to-Day Operations. GAO-07-139. Washington, D.C.: January 19, 2007.

Transportation-Disadvantaged Populations: Actions Needed to Clarify Responsibilities and Increase Preparedness for Evacuations. GAO-07-44. Washington, D.C.: December 22, 2006.

Suggested Areas for Oversight for the 110th Congress. GAO-07-235R. Washington, D.C.: November 17, 2006.

Hurricanes Katrina and Rita: Continued Findings of Fraud, Waste, and Abuse. GAO-07-252T. Washington, D.C.: December 6, 2006.
Hurricanes Katrina and Rita: Unprecedented Challenges Exposed the Individuals and Households Program to Fraud and Abuse; Actions Needed to Reduce Such Problems in Future. GAO-06-1013. Washington, D.C.: September 27, 2006. Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Controls Will Improve the Effectiveness of the Nation’s Preparedness, Response, and Recovery System. GAO-06-618. Washington, D.C.: September 6, 2006. Disaster Relief: Governmentwide Framework Needed to Collect and Consolidate Information to Report on Billions in Federal Funding for the 2005 Gulf Coast Hurricanes. GAO-06-834. Washington, D.C.: September 6, 2006. Hurricanes Katrina and Rita: Coordination between FEMA and the Red Cross Should Be Improved for the 2006 Hurricane Season. GAO-06-712. Washington, D.C.: June 8, 2006. Federal Emergency Management Agency: Factors for Future Success and Issues to Consider for Organizational Placement. GAO-06-746T. Washington, D.C.: May 9, 2006. Hurricane Katrina: GAO’s Preliminary Observations Regarding Preparedness, Response, and Recovery. GAO-06-442T. Washington, D.C.: March 8, 2006. Emergency Preparedness and Response: Some Issues and Challenges Associated with Major Emergency Incidents. GAO-06-467T. Washington, D.C.: February 23, 2006. Homeland Security: DHS’ Efforts to Enhance First Responders’ All-Hazards Capabilities Continue to Evolve. GAO-05-652. Washington, D.C.: July 11, 2005. Continuity of Operations: Agency Plans Have Improved, but Better Oversight Could Assist Agencies in Preparing for Emergencies. GAO-05-577. Washington, D.C.: April 28, 2005. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Post-Katrina Emergency Management Reform Act of 2006 stipulates major changes to the Federal Emergency Management Agency (FEMA) within the Department of Homeland Security (DHS) to improve the agency's preparedness for and response to catastrophic disasters. For example, the act establishes a new mission for and new leadership positions within FEMA. As GAO has reported, DHS faces continued challenges, including clearly defining leadership roles and responsibilities, developing necessary disaster response capabilities, and establishing accountability systems to provide effective response while also protecting against waste, fraud, and abuse. This testimony discusses the extent to which DHS has taken steps to overcome these challenges. This testimony summarizes earlier GAO work on: (1) leadership, response capabilities, and accountability controls; (2) organizational changes provided for in the Post-Katrina Reform Act; and (3) disaster management issues for continued Congressional attention. GAO reported in the aftermath of Hurricane Katrina that DHS needs to more effectively coordinate disaster preparedness, response, and recovery efforts. GAO analysis showed improvements were needed in leadership roles and responsibilities, development of necessary disaster capabilities, and accountability systems that balance the need for fast, flexible response with the need to prevent waste, fraud, and abuse. To facilitate rapid and effective decision making, legal authorities, roles and responsibilities, and lines of authority at all government levels must be clearly defined, effectively communicated, and well understood. Improved capabilities were needed for catastrophic disasters--particularly in the areas of (1) situational assessment and awareness; (2) emergency communications; (3) evacuations; (4) search and rescue; (5) logistics; and (6) mass care and sheltering. 
Effectively implementing the provisions of the Post-Katrina Reform Act will address many of these issues, and FEMA has initiated reviews and some actions in each of these areas. But their operational impact in a major disaster has not yet been tested. As a result of its body of work, GAO's recommendations included that DHS (1) rigorously re-test, train, and exercise its recent clarification of the roles, responsibilities, and lines of authority for all levels of leadership; (2) direct that more robust and detailed operational implementation plans support the National Response Plan (NRP); (3) provide guidance and direction for all planning, training, and exercises to ensure such activities fully support preparedness, response, and recovery responsibilities on a jurisdictional and regional basis; (4) take the lead in monitoring federal agencies' efforts to prepare to meet their responsibilities under the NRP and the interim National Preparedness Goal; and (5) use a risk management approach in making its investment decisions. We also recommended that Congress give federal agencies explicit authority to take action to prepare for all types of catastrophic disasters when there is warning. In his oversight letter to Congress, the Comptroller General suggested that one area needing fundamental reform and oversight is ensuring a strategic and integrated approach to prepare for, respond to, recover from, and rebuild after catastrophic events. Congress may wish to consider several specific areas for immediate oversight. These include (1) evaluating development and implementation of the National Preparedness System, including preparedness for an influenza pandemic; (2) assessing state and local capabilities and the use of federal grants to enhance those capabilities; (3) examining regional and multi-state planning and preparation; (4) determining the status of preparedness exercises; and (5) examining DHS policies regarding oversight assistance.
Background The primary mission of NWS is to help protect life and property by providing weather and flood warnings, public forecasts, and advisories for all of the United States, adjacent waters, and ocean areas. NWS operations also support other agencies’ missions and the nation’s commercial interests. For example, NWS provides weather forecasts and warnings to support aviation and marine safety. To fulfill its mission, NWS uses a variety of systems and manual processes in collecting, processing, and disseminating weather data to and among its network of field offices and regional and national centers. Many of these systems and processes are outdated. NWS began a nationwide modernization program in the 1980s to upgrade observing systems, such as satellites and radars, and design and develop advanced computer workstations for weather forecasters. The goals of the modernization are to achieve more uniform weather services across the nation, improve forecasts, provide better detection and prediction of severe weather and flooding, permit more cost-effective operations through staff and office reductions, and achieve higher productivity. In conjunction with its modernization, NWS plans to restructure its field offices, reducing their number from 256 to 119. However, delays in implementing the Advanced Weather Interactive Processing System have slowed progress in office restructuring. NWS now expects the modernization to be completed before the end of fiscal year 1999. In the past, we and others have identified several weaknesses in NWS’ actions to modernize its operations and manage its information technology resources. In February 1995, we designated the NWS modernization as an area of high risk due to its cost, complexity, past problems, and criticality to the NWS mission. Because of continuing concerns over cost increases and schedule delays associated with the modernization, it remains in the high-risk category today. 
NOAA has also been plagued by financial management problems. For example, Commerce’s Office of the Inspector General noted in February 1997 that NOAA could not provide all the financial information required by the Chief Financial Officers Act of 1990, as expanded by the Government Management Reform Act of 1994, or ensure the accuracy of certain components of financial information in its consolidated financial statements. The independent certified public accounting firm auditing NOAA’s fiscal year 1996 financial statements was unable to express an opinion on these statements due to inadequacies in NOAA’s accounting records and internal controls. The auditors identified 11 material weaknesses in NOAA’s internal controls, including budgetary execution transactions that were not supported or reconciled. The auditors recommended that NOAA focus on communicating the importance of financial management and fiscal responsibility to its program offices. Scope and Methodology To describe key events related to the formulation and execution of NWS’ fiscal year 1997 budget, we examined guidance developed by OMB, including the communication of budget requests and apportionments, as well as congressional guidance provided in appropriation acts and accompanying congressional committee reports on the execution of NWS’ budget. We also obtained and reviewed relevant budget documents for NWS, including its budget request and its requests for the reprogramming of funds. To identify key events regarding NWS’ fiscal year 1997 budget “shortfall” and efforts to address it, we examined records of communication among NWS, NOAA, and Commerce, as well as communications to the Congress. This included memoranda, briefings, electronic messages, and statements to the Congress. We interviewed Commerce, NOAA, and NWS budget and program officials to identify key events related to the NWS budget for fiscal year 1997. 
We also examined relevant reports, such as An Assessment of the Fiscal Requirements to Operate the Modernized National Weather Service During Fiscal Years 1998 and 1999. We also reviewed the audit report prepared by the Department of Commerce’s Office of Inspector General and an independent certified public accounting firm on NOAA’s fiscal year 1996 financial statements, dated February 26, 1997. We performed our work at NOAA headquarters in Washington, D.C., and at NWS headquarters in Silver Spring, Maryland, from July 1997 through January 1998. Our work was performed in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of Commerce or his designee. The Secretary provided written comments, which are discussed in the “Agency Comments” section and are reprinted in appendix I. NOAA’s Budget Formulation and Execution Process In developing their internal budget preparation instructions and formulating budget requests, executive agencies follow guidance contained in OMB Circular A-11, Preparation and Submission of Budget Estimates. This circular provides detailed guidance on the form and content of budget requests and the basis for making budget estimates. At NOAA, this guidance is supplemented by a budget handbook that reinforces many of the circular’s principles. According to the NOAA handbook, the NOAA Comptroller with input from the NOAA components—the National Weather Service; National Ocean Service; National Marine Fisheries Service; Oceanic and Atmospheric Research; and National Environmental Satellite, Data, and Information Service—prepares a draft budget request for departmental review and approval. NOAA’s fiscal year 1997 budget request contained several appropriation accounts, including operations, research and facilities; construction; and fleet modernization, shipbuilding and conversion. 
The operations, research and facilities account, the largest appropriation, includes separate estimates for each of NOAA’s components and provides the majority of their funding. This account contains no-year funding authority—that is, its appropriation remains available for obligation for an indefinite period of time. There is no appropriation account specific to NWS. Table 1 provides a snapshot of the development of the NWS fiscal year 1997 budget. This process began when the department submitted a fiscal year 1997 budget request of nearly $2 billion to OMB for NOAA operations, research, and facilities, including a request of $693 million for NWS. OMB reviews each department and agency budget request based on presidential priorities and other factors and makes an initial proposal on how resources will be allocated. Departments and agencies are notified of OMB’s initial resource allocation decisions about 2 months after making their initial budget submissions. This notification is commonly referred to as a “passback.” Agencies may appeal their passbacks. The OMB passback to the Department of Commerce resulted in overall reductions for NOAA as well as a specific reduction of about $22 million for NWS’ portion of NOAA’s operations, research, and facilities request. As required by OMB Circular A-11, NOAA revised its budget request to bring it into accord with these decisions. Both OMB and NOAA budget preparation guidance instruct staff to support the ultimate request in the President’s budget. In response to the passback, NWS revised its fiscal year 1997 budget to about $671 million, the amount contained in the President’s budget request to the Congress. The final step in the budget formulation process is congressional review of the agency’s budget and the enactment of appropriations. 
After holding hearings and deliberating on NOAA’s budget request, the Congress reduced the amount for NOAA’s operations, research, and facilities request by about $117 million and specified that NWS absorb about $33 million of the cut, resulting in a fiscal year 1997 budget for NWS of $638 million. After an agency’s appropriation is enacted, the agency receives an apportionment from OMB. Apportionment and subsequent allotments within the agency are established, often by calendar quarter and by program or major activity, to prevent obligations in a manner or at a pace that would result in the agency exceeding appropriated levels. As an agency executes its budget, it may determine that changes from planned spending levels are needed. To allow for efficient and effective execution of agency budgets while still maintaining appropriate oversight of executive actions, the Congress has provided reprogramming guidelines that define when changes from planned spending levels require congressional notification. Reprogramming is the shifting of funds within an appropriation for purposes other than those contemplated at the time of appropriation. For example, to help address its budget “shortfall,” NWS requested a $7.9 million reprogramming of NOAA funds in April 1997. There are no governmentwide reprogramming guidelines; guidance varies among appropriation subcommittees. For example, the annual Commerce, Justice, State, the Judiciary and Related Agencies Appropriations Act typically provides that no funds are available for a reprogramming which results in, among other things, the creation of a new program; elimination of a program, project, or activity; or relocation of an office or employees unless the Appropriations Committees of both Houses of Congress are notified 15 days in advance of the reprogramming. Appendix II contains a chronology describing the key events related to the formulation and execution of the NWS fiscal year 1997 budget. 
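The succession of reductions described above can be traced arithmetically. The sketch below is illustrative only, using the rounded figures cited in this report (amounts in millions of dollars); the precise amounts differ slightly (for example, the passback reduction was about $22.3 million).

```python
# Trace the approximate NWS fiscal year 1997 budget figures cited in this
# report. All amounts are in millions of dollars and rounded, as in the
# narrative; this is an illustration, not an official reconciliation.

initial_request = 693      # NWS share of Commerce's submission to OMB
omb_passback_cut = 22      # approximate reduction from the OMB passback
presidents_budget = initial_request - omb_passback_cut

congressional_cut = 33     # approximate NWS share of the congressional cut
enacted = presidents_budget - congressional_cut

print(f"President's budget request: about ${presidents_budget} million")  # ~671
print(f"Enacted appropriation:      about ${enacted} million")            # ~638
```

The same two-step pattern (OMB passback, then congressional reduction) applies to the NOAA-wide account, which fell from nearly $2 billion requested to $1.85 billion enacted.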
Key Events Associated With NWS’ Budget “Shortfall” and Efforts to Address It Due to the reductions that OMB and, subsequently, the Congress made to NWS’ fiscal year 1997 budget request, as well as inflation and other cost increases, NWS believed it had a budget “shortfall.” NWS officials felt that NWS’ fiscal year 1997 budget was not sufficient to provide desired levels of services. NWS appeared to define its “shortfall” generally as the gap between its final 1997 budget (less a one-time program increase of $14 million for modernization and associated restructuring demonstration and implementation) and its 1996 budget adjusted for inflationary and other increased costs, such as pay raises. Varying Amounts of Budget “Shortfall” Were Reported to the Congress NOAA and NWS reported varying amounts to the Congress about the size of NWS’ estimated budget “shortfall.” For example, after receiving its fiscal year 1997 appropriations, NOAA and NWS officials noted spending reductions of $27.5 million in a February 14, 1997, briefing to congressional staff. Two months later, in an April 4, 1997, briefing to congressional staff, NOAA noted spending reductions of $42.2 million. By May 1997, the reported “shortfall” had increased to $47.4 million. Table 2 shows the varying estimates of “shortfall” reported to the Congress and the changing components associated with each estimate. 
In a May 15, 1997, hearing before the Subcommittee on Science, Space and Technology, Senate Committee on Commerce, Science and Transportation, some Members of Congress indicated that they were confused by the varying amounts of reported budget “shortfall.” According to NOAA and NWS officials, the varying amounts provided to the Congress responded to specific questions asked at particular points in time and did not necessarily include all known elements of the “shortfall.” For example, although a January 3, 1997, NWS memorandum to NOAA mentioned the withholding of inflationary increases and a March 18, 1997, NWS memorandum to NOAA mentioned the increased NWS contribution for NOAA-wide support services, these elements of the “shortfall” were not provided to the Congress in the February 1997 and April 1997 briefings, respectively. A Variety of Measures Were Planned to Address the “Shortfall” To address its budget “shortfall,” NWS planned a number of measures, both temporary and permanent. Table 3 describes a set of planned actions to address the “shortfall” reported in May 1997. NWS ultimately succeeded in staying within its fiscal year 1997 budget level by implementing a number of actions. According to a December 1997 NWS analysis, NWS reduced (1) contracts and services by $15.2 million, (2) rent, communications, and utilities by $9.4 million, (3) salaries and benefits by $4.4 million, (4) employee relocations by $3.6 million, and (5) travel by $3 million. It also deobligated prior year funds, primarily permanent change of station costs, by $6.6 million. In commenting on a draft of this report, the Department of Commerce stated that NWS did not fill field office operational vacancies as they occurred throughout fiscal year 1997. This was done to enable NWS to place employees in available positions, thereby mitigating the negative impact of a planned major reduction-in-force. 
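The individual reductions in NWS’ December 1997 analysis, together with the deobligated prior-year funds, sum to $42.2 million, the same figure reported as the “shortfall” in the April 1997 briefing. A quick check of that arithmetic (amounts in millions of dollars):

```python
# Sum the actions from NWS's December 1997 analysis ($ millions) and
# compare the total with the $42.2 million reported in April 1997.

actions = {
    "contracts and services": 15.2,
    "rent, communications, and utilities": 9.4,
    "salaries and benefits": 4.4,
    "employee relocations": 3.6,
    "travel": 3.0,
    "deobligated prior-year funds": 6.6,
}

total = round(sum(actions.values()), 1)  # round to avoid float noise
print(f"Total actions: ${total} million")  # 42.2
```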
It further stated, however, that the longer these operational vacancies remained unfilled, the more critical the need to fill the positions became. As we will discuss in the next section, NWS also received congressional concurrence on a reprogramming request to NOAA, giving NWS $5.4 million that enabled it to restore funding to maintain its operational equipment and fund some personnel separation costs. Reprogramming Request and Certification Process Contributed to Confusion on “Shortfall” Two key events to address the NWS budget “shortfall” appeared to cause confusion among department officials. The first event centers on an NWS reprogramming request to NOAA. According to an April 18, 1997, NWS schedule, NWS requested a reprogramming of $7.9 million to address its “shortfall.” This request was forwarded to the Department’s Acting Chief Financial Officer by the NOAA Under Secretary on May 29, 1997. Our review of e-mail correspondence between the Counselor to the NOAA Under Secretary and an NWS Budget Analyst showed that several inquiries were made about the status of the reprogramming request between May 6, 1997, and June 6, 1997, but there was no official communication that the request had been approved. However, the NWS Deputy Assistant Administrator for Operations assumed that the reprogramming request would be approved by Commerce and that funds were available to hire personnel for field vacancies that had gone unfilled because of the “shortfall.” This official told NOAA officials on June 13, 1997, that NWS intended to start filling critical field vacancies. 
On June 16, 1997, the NOAA Deputy Under Secretary informed the NWS Deputy Assistant Administrator for Operations that NWS could not fill the vacancies because it had a “deficit on the books.” This was primarily because the reprogramming request had not yet been approved, which meant that funds NWS believed were available for new hires would need to be retained to pay for possible personnel separations. Indeed, the department’s Acting Chief Financial Officer did not notify the Congress of the reprogramming request until July 3, 1997, nearly 3 months after the original request. By this time, the request for $7.9 million had been revised to $5.4 million because the department, based on input from NWS, determined that less money was needed to address personnel reductions associated with the streamlining of NWS. The Subcommittee on the Departments of Commerce, Justice, and State, the Judiciary and Related Agencies, House Committee on Appropriations, informed the department on July 16, 1997, that it had no objections to the reprogramming proposal. The second event involves NWS’ effort to obtain certification approval from NOAA to consolidate, automate, and/or close weather service offices. On April 22, 1997, 4 days after NWS submitted the reprogramming request for $7.9 million, NWS forwarded 83 certification packages to the NOAA Under Secretary for approval. Section 706(b) of Public Law 102-567 specifies that the Secretary of Commerce must certify that consolidating, automating, and/or closing of field offices will not result in any degradation of service to the affected service areas. The NWS Deputy Assistant Administrator for Modernization told us that he briefed the Secretary of Commerce on this certification process on June 10, 1997. 
Upon learning that NWS would not be able to fill vacancies in the field because the aforementioned reprogramming request had not yet been approved, the NWS Deputy Assistant Administrator for Modernization recommended to the NOAA Deputy Under Secretary on June 20, 1997, that 27 of the 83 certification packages be held back. This was because those packages covered about 27 affected areas whose responsible weather field offices had vacancies. The NWS Deputy Assistant Administrator for Modernization stated that the vacancies, if left unfilled, would result in a degradation of services for the areas served by those offices. The NWS Deputy Assistant Administrator for Modernization told us that he had assumed all along that these vacancies would be filled when the certification packages were forwarded to the NOAA Under Secretary for approval. Subsequent to the NWS Deputy Assistant Administrator for Modernization’s recommendation, the NOAA Under Secretary did not take action on the 83 certification packages sent to him in 1997. The NWS Deputy Assistant Administrator for Modernization told us that NWS plans to forward about 80 certification packages for consolidation, automation, and/or closure of offices in 1998. Additional Developments Relating to the “Shortfall” Only 5 days after the NWS Deputy Assistant Administrator for Modernization recommended that the 27 certification packages be held back, the NOAA Under Secretary announced on June 25, 1997, the reassignment of the Assistant Administrator for Weather Services. The NOAA Under Secretary claimed that NOAA had been receiving conflicting information from NWS on how it would provide essential weather services while recognizing that the public expects government agencies to reduce their costs. At the same time, the Under Secretary announced his intention to appoint an outside Special Advisor on Weather Services to conduct a rigorous evaluation of the NWS budget and operations. 
According to the Special Advisor’s subsequent October 14, 1997, report to the Secretary of Commerce, Commerce and NOAA financial management information systems, coupled with NWS’ complex budget structure, budget formulation/execution policies and management processes, limit visibility of operational and overhead costs and the traceability of these costs to products and services. The Secretary of Commerce later announced on October 23, 1997, that he plans to hire a chief financial officer for NWS to address the need for management and budget reforms. NOAA’s Chief of Audit and Internal Control told us that the department is currently reviewing the position description for the NWS Chief Financial Officer and intends to advertise for it soon. Lastly, the Special Advisor felt that the fiscal year 1998 proposed budget and the 1999 department submission to OMB contained inadequate base funding. He recommended to the Secretary that the fiscal year 1998 budget for NWS be $680 million to provide essential public services and complete modernization activities. The Congress enacted appropriations that provided about $668 million in November 1997. Agency Comments In commenting on a draft of this report, the Department of Commerce stated that the report accurately reflected the events surrounding the fiscal year 1997 budget and acknowledged that we had conducted thorough work in researching and documenting the complex events and issues included in the report. For additional clarity in the report, the department provided several technical comments and changes that have been incorporated into the report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 15 days from the date of the report. 
At that time, we will send copies to the Chairmen and Ranking Minority Members of the Senate Committee on Commerce, Science, and Transportation; the Senate and House Committees on Appropriations; the Senate Committee on Governmental Affairs; the House Committee on Government Reform and Oversight; and the Director, Office of Management and Budget. We will also send copies to the Secretary of Commerce and to the Administrator of the National Oceanic and Atmospheric Administration. Copies will also be made available to other parties upon request. Please contact me at (202) 512-6253, or by e-mail at willemssenj.aimd@gao.gov, if you have any questions concerning this report. Major contributors to this report are listed in appendix III. Comments From the Department of Commerce The following are GAO’s comments on the Department of Commerce’s letter dated February 20, 1998. GAO’s Comments 1. Report has been modified to reflect agency comments. 2. The Department of Commerce noted that the $27.5 million figure should include $5 million in personnel separation costs, which would bring the total to $32.5 million. We did not change the table because the $32.5 million figure is not contained in the February 1997 briefing. Chronology of Key Events Related to the Formulation and Execution of the NWS Fiscal Year 1997 Budget Department of Commerce’s fiscal year 1997 budget request submitted to OMB included $2 billion for NOAA’s operations, research, and facilities, which included $693 million for NWS. OMB passback and subsequent discussions resulted in reductions of $16.8 million and $22.3 million to the NOAA and NWS budgets, respectively. President’s budget for fiscal year 1997 submitted to Congress included a request of about $2 billion for NOAA’s operations, research, and facilities. The President’s backup book included about $670.7 million for NWS—about $471.7 million for operations and research and $199 million for systems acquisition. 
It also showed that the request included increases in the operations, research and facilities account over fiscal year 1996 for pay raises and within-grade step increases. House and Senate Appropriations Subcommittees on Commerce, Justice, and State, the Judiciary, and Related Agencies held hearings on Commerce’s budget request. In testimony, NOAA’s Under Secretary noted that NWS’ fiscal year 1997 request for operations and research funds to provide public weather and flood warnings and forecasts and applied research in support of the weather service modernization was reduced by $18.2 million from the base. The Omnibus Consolidated Appropriations Act for fiscal year 1997 appropriated $1.85 billion for NOAA operations, research and facilities. The Conference Report provided $638 million for NWS, of which $460.8 million was for operations and research. The appropriations represented a reduction of $117.1 million for NOAA and $32.7 million for NWS. Of NWS’ reductions, $10.9 million was cut from NWS’ operations and research request and $21.8 million was cut from NWS’ systems request. Memorandum from NWS’ Director of Management and Budget Office to NOAA Chief Financial Officer mentioning a $36.3 million NWS base deficit for fiscal year 1997 and expressing deep concern over NOAA’s withholding of within-grade adjustments of $1.9 million. Congressional briefing by NOAA and NWS officials, including the NOAA Deputy Under Secretary, the Assistant Administrator for Weather Services, and the NWS Deputy Assistant Administrator for Operations, noting that fiscal year 1997 Base Operations Budget showed reductions of $27.5 million based on the fiscal year 1997 enacted appropriation versus the fiscal year 1996 appropriation and planned actions of $27.5 million to address the reductions. The Modernization Transition Committee recommended that the Assistant Administrator for Weather Services approve over 80 certifications for consolidation, automation and/or closure of weather service offices. 
Memorandum from Assistant Administrator for Weather Services to NOAA Deputy Under Secretary stating that the “operational deficit” facing NWS in fiscal year 1997 is $36.3 million rather than $27.5 million. The memo also cites another $6.8 million in withheld adjustments to base, which raises the “operating deficit” to $43.1 million. Department of Commerce press release on NWS’ plans to restructure operations to meet lower budget allocations discusses recommended actions, which include (1) accelerating planned reductions in staffing and operations at headquarters, regional offices, central operations and field offices and (2) reengineering certain programs, including accelerating plans for up to 200 reduction-in-force actions and closing the southern region headquarters office. Analysis of NWS “deficit” showing $42.2 million “deficit” and $42.2 million in actions to meet “deficit,” including $15.5 million in permanent and one-time reductions, $6.5 million in temporary reductions, and $9.7 million of reductions to absorb inflationary costs. Congressional briefing by NOAA and NWS officials on Base Operations-Fiscal Year 1998 Funding Restoration showing reductions of $42.2 million, composed of the $27.5 million reduction plus $9.7 million in inflationary costs plus $5 million in estimated personnel separation costs. The briefing also lists $42.2 million in actions taken to meet the reductions. Hearings before the House Committee on Appropriations Subcommittee on Commerce, Justice, and State, the Judiciary, and Related Agencies indicate the NOAA Under Secretary said NWS could fulfill its public safety mission with fiscal year 1997 funding. Assistant Administrator for Weather Services referred to components comprising a $27.5 million reduction and noted that NWS and all agencies must absorb 3-percent pay raises. 
The Assistant Administrator for Weather Services further noted that in an operational organization such as the weather service, the ongoing absorption of pay raises causes ongoing degradation that will eventually lead to a much smaller weather service. Notification to House Appropriations Subcommittee on Commerce, Justice and State for $0.7 million reprogramming to NWS for the National Hurricane Center. NWS schedule on $7.9 million reprogramming request for fiscal year 1997 operations and research. Assistant Administrator for Weather Services forwarded over 80 certifications for consolidation, automation, and/or closure of weather service offices to NOAA Under Secretary for approval. E-mail message from Counselor to the NOAA Under Secretary to NWS Budget Analyst noting that the NWS reprogramming request for $7.9 million for fiscal year 1997 has not been acted upon yet by NOAA. NOAA Under Secretary and the Assistant Administrator for Weather Services testify before the Senate Subcommittee on Science, Technology and Space on the components of the $47.4 million “shortfall,” including NWS funding reductions and increased costs, such as pay-related costs and the cost of NOAA support and centralized services. E-mail message from Counselor to NOAA Under Secretary to NWS Senior Budget Analyst stating that the reprogramming package is with NOAA Deputy Under Secretary and is on its way to NOAA Under Secretary for signature. Memorandum from NOAA Chief Financial Officer to NOAA Under Secretary requesting a reprogramming for NWS of $7.9 million. Memorandum from NOAA Under Secretary to Department of Commerce Acting Chief Financial Officer requesting a reprogramming of $7.9 million for NWS. E-mail message to NWS Senior Budget Analyst stating that the NOAA reprogramming is under review and that hopefully there will be a Department of Commerce decision late the next week. 
E-mail from the NWS Deputy Assistant Administrator for Operations to the NOAA Deputy Under Secretary asking for guidance on how to proceed with vacancies. The e-mail notes that on June 16 the NOAA Deputy Under Secretary indicated that NWS would not be able to fill the vacancies because NWS still had a deficit on the books, and that this was a surprise to NWS because of NOAA’s previous commitment to pay the separation costs.

The NWS Deputy Assistant Administrator for Modernization recommended in an e-mail to the NOAA Deputy Under Secretary that the certification packages for approximately 27 offices be held back because of staffing vacancies.

E-mail message from the Director of NWS’ Management and Budget Office to NWS Senior Budget Analysts stating that the Department has requested that the NOAA fiscal year 1997 reprogramming request be submitted 6/25/97.

Department of Commerce press release notes that the Assistant Administrator for Weather Services is reassigned to other duties within NOAA.

Reprogramming Notification from the Department of Commerce Acting Chief Financial Officer to the Congress on a $5.4 million reprogramming for NWS operations and research.

The House Committee on Appropriations, Subcommittee on Commerce, Justice, and State, the Judiciary, and Related Agencies, responded to the Department of Commerce Acting Chief Financial Officer on the $5.4 million reprogramming notification for NWS that it has no objection to reprogramming $3.6 million from satellite programs for deferred maintenance and related activities, but directs the department to transfer the additional $1.8 million from the Economic Development Fund.

E-mail message forwarded by the Counselor to the NOAA Under Secretary to the NWS Senior Budget Analyst stating that the House and Senate have officially concurred with the $5.4 million reprogramming for NWS.

Major Contributors to This Report

Accounting and Information Management Division, Washington, D.C.: Helen Lew, Assistant Director; Michael J. Curro, Assistant Director; Joan B. Hawkins, Assistant Director; James C. Houtz, Information Systems Analyst-in-Charge; Laura E. Castro, Senior Evaluator; David R. Fisher, Senior Auditor; Paul F. Foderaro, Senior Auditor.
Pursuant to a congressional request, GAO reviewed the key events related to the fiscal year (FY) 1997 budget shortfall of the National Weather Service (NWS), focusing on: (1) the formulation and execution of the NWS' FY 1997 budget; and (2) key events regarding NWS' FY 1997 budget shortfall and efforts to address it. GAO noted that: (1) based on guidance provided by the Department of Commerce and the Office of Management and Budget (OMB), the National Oceanic and Atmospheric Administration (NOAA) prepared a FY 1997 budget proposal for each of its components--including NWS; (2) the Department of Commerce reviewed this proposal and asked OMB to include $693 million for NWS in the President's budget; (3) based on OMB's direction regarding NOAA-wide and NWS-specific reductions, this request was revised to the $671 million that appeared in the President's budget submission to Congress; (4) Congress further reduced this amount, enacting appropriations that included $638 million in FY 1997; (5) although NWS believed it had a budget shortfall because of the reductions that OMB and Congress made to its FY 1997 budget request, as well as inflationary and other cost increases, NOAA and NWS reported varying amounts to Congress about the size of this shortfall; (6) according to NOAA and NWS officials, the information provided to Congress responded to specific questions asked at particular points in time and did not necessarily include all known elements of the shortfall; (7) NWS ultimately succeeded in staying within its FY 1997 budget level by implementing a number of temporary and permanent actions; (8) other events associated with the shortfall raised concerns among department officials and Congress; (9) the first event centered on a NWS reprogramming request to NOAA and NWS' intention to start filling critical field vacancies prior to receiving NOAA authorization; (10) NWS assumed that the reprogramming request would be approved by Commerce and funds would be available to 
fill these vacancies; (11) NOAA informed NWS that the vacancies could not be filled because the reprogramming request had not yet been approved; (12) the second event involved NWS' effort to obtain certification approval from NOAA to consolidate, automate, and close weather service offices; (13) upon learning that NWS would not be able to fill critical field vacancies, NWS recommended to NOAA that selected certification packages be held back because, according to NWS, this would have resulted in a degradation of weather services at locations; (14) however, Commerce noted that the certification packages, as submitted by NWS on April 22, 1997, did not indicate that there were vacancies in these offices that would preclude proceeding with certification; and (15) no link was made during this time between the ability to proceed with certification and the need for reprogramming approval by Congress.
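The successive reductions described in this summary can be verified with simple arithmetic. The sketch below is an illustrative check only, not part of the GAO report; the variable names are ours, and the figures are the dollar amounts cited in the summary and chronology above.

```python
# Illustrative check of the FY 1997 NWS budget figures cited above
# (amounts in millions of dollars; variable names are ours).

request = 693.0            # amount Commerce asked OMB to include for NWS
presidents_budget = 671.0  # amount in the President's budget submission
enacted = 638.0            # amount in the enacted FY 1997 appropriations

omb_reduction = request - presidents_budget            # 22.0
congressional_reduction = presidents_budget - enacted  # 33.0

# One "shortfall" composition briefed to Congress: the $27.5 million
# reduction plus $9.7 million in inflationary costs plus $5 million in
# estimated personnel separation costs.
shortfall = 27.5 + 9.7 + 5.0  # 42.2
```

Reconciling the figures this way also shows why the reported amounts varied: each total answered a different question, combining the base reduction with different cost elements at different points in time.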
Background

Congress established the basic functions of USITC on September 8, 1916, as the U.S. Tariff Commission. In 1975, the name was changed to the U.S. International Trade Commission by section 171 of the Trade Act of 1974. USITC is headed by six Commissioners who are appointed by the President and confirmed by the Senate for terms of 9 years, unless appointed to fill an unexpired term. The terms are set by statute and are staggered so that a different term expires every 18 months. No more than three Commissioners may be members of the same political party. From among the appointed Commissioners in office, the President designates a Chairman and Vice Chairman to each serve for a 2-year term. The Chairman may not be of the same political party as the preceding Chairman, and the Chairman and the Vice Chairman may not be of the same political party. The Chairman is responsible, within statutory limits, for the administrative functions of USITC. The mission of USITC is to (1) administer U.S. trade remedy laws within its mandate; (2) provide the President, the U.S. Trade Representative, and Congress with independent high-quality analysis, information, and support on matters relating to tariffs and international trade and competitiveness; and (3) maintain the Harmonized Tariff Schedule of the United States (HTS). Through the Director of Operations, five offices—Offices of Economics, Industries, Investigations, Tariff Affairs and Trade Agreements, and Unfair Import Investigation—are responsible for USITC’s operations regarding international trade. See appendix II for additional information on USITC operations. The Inspector General Act of 1978 (IG Act), as amended, provides the legal foundation for the federal executive branch IG offices. Currently there are 59 IGs established by the IG Act throughout the executive branch with broad authority to conduct independent audits and investigations.
Of the 59 IGs, the President, with the advice and consent of the Senate, appointed 29. These presidentially appointed IGs may only be removed by the President. The other 30 IG Offices were established by the 1988 amendments to the IG Act in designated federal entities (DFE) named in the legislation. The USITC IG is one of the 30 DFE IGs. Generally, the DFE IGs have the same authorities and responsibilities as those IGs established by the original IG Act of 1978; however, they are appointed and may be removed by their entity heads rather than by the President and are not subject to Senate confirmation. For purposes of the IG Act, the USITC Chairman was the head of USITC during our review period. The act provides the IGs with independence by authorizing them, among other things, to select and employ their own staffs, make such investigations and reports as they deem necessary, and report the results of their work directly to Congress. In addition, the IG Act provides the IGs with a right of access to information, and prohibits interference with IG audits or investigations by agency personnel. The act further provides the IGs with the duty to inform the Attorney General of suspected violations of federal criminal law. Congress passed the IG Reform Act of 2008 (IG Reform Act) to further enhance IG independence and accountability. The act maintains the framework and IG community that existed under the IG Act and adds authorities and requirements to help build a stronger, more independent, professional, and accountable IG community. The act requires that the heads of entities, including USITC, and the President, for those IGs appointed by the President, inform both houses of Congress 30 days before taking actions to remove or transfer an IG. The act also provides a statutory process for handling allegations of wrongdoing by IGs so that such reviews are not done by the same management officials who are subject to IG oversight. 
The IG Reform Act also specifies that DFE IGs, such as the USITC IG, be classified at a grade level or rank designation at or above those of a majority of the senior-level executives of the DFE. It requires the head of each DFE to transmit proposed budgets to the President with an aggregate request for the IG, amounts for IG training, and amounts for the support of the Council of the Inspectors General on Integrity and Efficiency (CIGIE). In addition, the IG is to provide certification that the amount requested satisfies all training requirements for the IG for that fiscal year and any resources necessary to support the activities of CIGIE.

USITC IG Conducted Limited Oversight Activities during Fiscal Years 2005 through 2009

The IG Act requires IGs to provide independent audits and investigations of the programs, offices, and activities of their respective federal entities. However, during fiscal years 2005 through 2009 the USITC IG office did not conduct any audits, and provided no investigative case files or reports to indicate that any investigations had been performed. The IG office’s oversight of USITC consisted primarily of monitoring and reviewing the work of independent public accountants (IPA) who conducted mandatory audits of USITC’s financial statements and information security as required by specific statutes. The IPAs performed these audits under contract with the acting and temporary IGs during this 5-year period with no additional audits conducted by these IGs. The most recent peer review of the IG office’s audit quality, performed by the National Archives and Records Administration IG, concluded in a May 12, 2010, report that an opinion could not be expressed on the audit organization because no audits had been conducted in the past 5 years. The USITC IG office also did not provide audits or perform follow-up in areas with weaknesses identified by the IPAs’ audits.
Specifically, the audit of USITC’s fiscal year 2009 financial statements resulted in a disclaimer of opinion by the IPA due to the lack of sufficient evidence to support the amounts presented in the financial statements. The IPA also noted a number of material and significant issues surrounding internal control and concluded that USITC was not able to comply with the requirements of the Federal Managers’ Financial Integrity Act. The Chairman and the Director of the Office of Finance stated that the audit results primarily stemmed from the implementation of a new financial system at the beginning of fiscal year 2009. However, the IG’s office had not performed audits or other oversight of the new system and its implementation or related internal control. In USITC’s annual Performance and Accountability Report, issued for each year of our review, the IG office identified management challenges related to USITC’s procurement and contract management, financial management, human capital management plan, and budget and performance integration. Issues related to USITC’s financial management were annually audited through mandatory audits performed by IPAs. However, the remaining management challenges were not audited by the IG office, and therefore have not received audit recommendations for corrective actions to address these identified weaknesses. (See table 1.) Performance audits and other IG oversight activities can provide managers, and those charged with governance, with information regarding the economy, efficiency, and effectiveness of the programs, offices, and activities reviewed, and may include assessments of internal control, compliance, and prospective analyses.
The USITC IG office’s limited oversight of the programs, offices, and activities responsible for the fundamental mission of USITC regarding international trade resulted in a lack of valuable audit information for management to help improve program performance and operations, reduce costs, facilitate decision making, oversee or initiate corrective action, and contribute to accountability.

USITC Lacked an Appointed IG and Adequate Staff Resources Prior to Fiscal Year 2010

The IG Act requires designated entity heads to appoint an IG and provide adequate budgetary resources and sufficient staff for the IG’s office to conduct independent audits and investigations. USITC lacked an appointed IG and did not provide the IG office with adequate budget and staff resources for fiscal years 2005 through 2009. This contributed significantly to the IG office’s limited oversight of USITC and the lack of audits and investigations. However, in fiscal year 2010, the USITC Chairman appointed an IG and provided additional resources to the IG office due, in part, to the requirements of the IG Reform Act.

The USITC IG Position Was Filled by Acting and Temporary IGs for an Extended Period before Appointment of the Current IG

For over 4 years, between November 2005 and December 2009, the USITC relied on acting IGs and a temporary IG to provide oversight. In addition, for a period of 17 months during this time—from March 2006 until August 2007—the USITC IG position was vacant. Specifically, when the IG retired on October 31, 2005, the Chairman designated the Assistant IG for Audits (AIGA) to serve as acting IG. When the acting IG position expired in March 2006, the Chairman sought to hire a new IG instead of renewing the acting IG position. Although the USITC continued to rely upon the AIGA to fulfill the requirements and responsibilities of the IG, the IG position was vacant. The Chairman renewed the AIGA’s acting IG position in August 2007—17 months after it had expired.
The AIGA reported the vacant IG position in each of the semiannual reports over this 17-month period. During this period, the AIGA did not have the full statutory protections of independence and stated authorities under the IG Act to provide audits and investigations; promote economy, efficiency, and effectiveness; prevent and detect fraud and abuse; and recommend actions for improvement to USITC. In January 2008, the Chairman selected a former USITC budget officer to serve as a temporary IG not to exceed 6 months of service, which was extended for another 6 months starting in June 2008. In January 2009, USITC extended the temporary IG position for another 6 months while the Chairman and Commissioners studied how to implement the IG Reform Act, which requires that DFE IGs, such as the USITC IG, be classified at a pay level at or above a majority of the senior-level executives of the DFE. In June 2009, USITC published a vacancy announcement for a permanent IG position at the Senior Executive Service (SES) level. The temporary IG was reassigned to another USITC office on August 16, 2009, and the AIGA from the IG Office of the Commodity Futures Trading Commission served as the acting USITC IG from August 17, 2009, to December 5, 2009. On December 6, 2009, the Chairman appointed the first SES-level USITC IG. See table 2 for a listing of the USITC IGs and their periods of service.

USITC IG Received Limited Resources Prior to Fiscal Year 2010

The only specific budget resources provided to the IG office during fiscal years 2005 through 2009 were the amounts for statutorily mandated audits performed by IPAs. Despite increases in the overall USITC budget, the IG office’s budget resources remained relatively flat and its staffing remained below its authorized levels. Between fiscal years 2005 and 2009, the USITC budget increased from $61 million to $75 million—approximately 23 percent—while the IG budget remained relatively constant. (See table 3.)
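As a small illustrative check of the growth comparison above (the variable names are ours, not the report's):

```python
# Illustrative check: USITC's overall budget grew from $61 million in
# FY 2005 to $75 million in FY 2009 while the IG budget stayed flat.
fy2005_budget = 61.0  # $ millions
fy2009_budget = 75.0  # $ millions
percent_increase = (fy2009_budget - fy2005_budget) / fy2005_budget * 100
# approximately 23 percent, matching the figure cited in the text
```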
A former acting IG stated that in order to perform any additional functions, including travel or training, she had to seek USITC’s permission for each additional expense. Although the acting and temporary IGs were authorized to have between three and four full-time equivalent (FTE) staff members (including themselves) during 4 of the 5 years reviewed, their offices did not receive the necessary funding to hire these authorized staff. The former acting and temporary IGs we contacted also explained that their oversight of USITC was limited because they did not have sufficient resources to audit areas other than those required by specific statutes. Ensuring the adequacy of audit resources is ultimately a responsibility of the USITC Chairman. However, the USITC Commissioners told us that in the past they were not made fully aware of the IG office’s need for additional resources. The acting and temporary IGs had not prepared comprehensive audit plans over the 5-year period with a staffing analysis to justify additional budget and staffing resources and effectively communicate their resource needs. As part of a comprehensive audit plan, a staffing analysis provides the basis for determining the number and experience level of the audit staff needed, external service providers, financial support, technology-based audit techniques, and other resource needs such as training and travel. Consistent with provisions of the IG Reform Act, the USITC budget request for fiscal year 2010 included IG budget information and the required IG certification that the amounts are sufficient for training and support of CIGIE activities. On the basis of discussions with the current IG, USITC approved an IG budget of $816,837 with a total of 5 FTEs, including a legal counsel who is also expected to conduct investigations. With the assistance of the additional staff, the IG issued a series of audit reports with recommendations regarding information security and internal control. 
The current IG stated that future oversight may require additional resources. During our review, the current IG completed a strategic plan and an annual audit plan for fiscal year 2011. These plans define the USITC audit universe, provide goals for oversight, and specify the objectives and anticipated starting dates for individual audits including mandatory audits, audits of management challenges, and audits of program economy and efficiency. While these plans are an important first step, neither the high-level strategic plan nor the annual audit plan for the coming year provides a staffing analysis to identify the number of staff and other resources necessary for a comprehensive audit plan that communicates and justifies the IG budgets and staffing needed for USITC oversight.

Improvements Needed to Strengthen the IG’s Role within USITC Governance

The IG Act provides each IG with protections of independence including the authority for access to all entity documents and records, and does not allow the entity head to prevent or prohibit the IG from initiating, carrying out, or completing any audit or investigation. In addition, the IG is required to refer cases with potential violations of federal criminal law to the Attorney General. These protections and responsibilities are necessary in large part because of the unique reporting requirements of the IGs, who are both subject to the general supervision of the heads of the agencies they audit while at the same time expected to provide reports of their work externally to Congress. During our review period, we found instances where USITC’s governance structure did not fully support the acting and temporary IGs’ responsibilities due to USITC’s lack of clear policies surrounding IG access to information and the lack of coordination with the IG office when referring an investigative case to the Department of Justice (DOJ).
The effectiveness and independence of IGs are closely related to the governance and accountability structure of the organization and the role that the IG plays within that structure. IGs must be able to operate independently within the governance framework at their respective entities in order to be effective.

USITC Needs Clarifying Guidance on IG Access to Documents

The IG Act provides the authority for IG access to all USITC documents and records, and also prohibits the agency head from preventing or prohibiting the IG from initiating, carrying out, or completing any audit or investigation. However, in 2009, the USITC IG was unable to obtain prompt access to original USITC contract documents during an inquiry into USITC procurement procedures because of the Chairman’s uncertainty about the IG’s authority to have access. In the April 2009 semiannual report to Congress, the temporary USITC IG stated that on March 5, 2009, a USITC employee removed certain procurement files from the possession of the IG without the IG’s permission. The employee had requested clarification from USITC management regarding the IG’s access to the documents that the procurement office was responsible for safeguarding, but received no clear answer. Due to the lack of guidance from USITC management and the lack of a clear policy on IG access to documents, the IG’s review was delayed until the issue was resolved, and the IG’s inquiry ended without any record of an investigation. Although USITC policies and procedures provide the IG with full access to all USITC documents, they do not specify the process to be followed to grant the IG access to original sensitive USITC documents that must be safeguarded. In the example cited above, after deliberations with the General Counsel, the Chairman provided the temporary IG access to the documents after a delay of almost 2 months.
However, the Chairman included a written qualification that the IG’s full access to USITC documents only applied to the specific files under the IG’s immediate review. Consequently, future disagreements regarding the IG’s access to USITC documents may occur until the IG’s authority is specifically addressed by USITC policies and procedures. The USITC program directors that we interviewed expressed their concerns for the safety and security of the business and trade information used during their international trade investigations and in the preparation of their reports. However, IG access to both information and individuals is essential for effective oversight and the IG Act specifically authorizes the IG to have access to all records, reports, audits, reviews, documents, papers, recommendations, or other material related to the programs and operations of an entity. In addition, the Commissioners told us that although USITC provides an orientation book to inform newly appointed Commissioners about USITC’s operations, this orientation information does not include a section on the authority and responsibilities of the IG. Because Commissioners are not always appointed from prior federal positions and may not be aware of the important statutory independence of IGs, an orientation book could include information to facilitate interactions with the IG by minimizing uncertainties regarding the unique authorities and responsibilities of the IG.

The USITC Chairman Did Not Always Coordinate Investigations with the IG Office

The IG Act requires the IG to report to the Attorney General whenever the IG has reasonable grounds to believe federal criminal law has been violated. The USITC Chairman also reports potential criminal violations to DOJ. On June 15, 2009, the USITC Chairman referred a possible criminal violation by a USITC employee to the Criminal Division of DOJ based on an investigation conducted by USITC’s Chief Information Security Officer.
The USITC Chairman informed the temporary IG neither of the investigation performed by USITC nor of the referral of the case to DOJ. The lack of coordination could result in the duplication of efforts if both the Chairman and the IG were to investigate the same subject. In order to avoid duplication of investigative efforts, other federal entities utilize a memorandum of understanding (MOU) or similar mechanism to require the sharing of investigative information between the IG and the entity. For example, the Treasury IG for Tax Administration (TIGTA) at the Internal Revenue Service (IRS) within the Department of the Treasury and the Chief of IRS Criminal Investigation have established an MOU that specifies the areas to be investigated by each office to ensure coordination while preventing duplication of efforts. A similar agreement between the USITC Chairman and the IG could decrease the potential risk of duplicative investigations.

Conclusions

Considering the need to enhance oversight of USITC, it is important that an independent, objective, and reliable IG structure be in place to provide adequate audit and investigative coverage of its programs, offices, and activities. Effective USITC governance and accountability require policies and procedures to help ensure that the activities of the IG are independent and the results are viewed as independent by Congress and other users of the IG’s work. USITC has recently made progress towards improving governance and accountability; however, notwithstanding these advances, we believe it is important to build on and sustain the progress made in fiscal year 2010. Increased attention to USITC governance and accountability through the design and implementation of policies and procedures, and ongoing attention to the resource needs of the IG’s office, would help to ensure that the activities of the IG are effective and independent.
Recommendations for Executive Action

We recommend that the USITC IG prepare a staffing analysis to determine the level of budget and staff resources needed to conduct the audits identified in audit plans, including audits required by statutes; audits of management challenges identified by the IG; and performance audits of economy, efficiency, and effectiveness of USITC’s programs, offices, and activities.

We recommend that the Chairman of USITC

revise the policies and procedures for all offices and programs to recognize the authorities and responsibilities of the IG under the IG Act, including procedures for recognizing the IG’s authority for access to USITC documents, records, and information;

revise the formal written orientation information provided to the Commissioners to include sections on the overall authorities and responsibilities of the IG; the IG’s authority and USITC’s policies for IG access to USITC documents, records, and information; and the responsibilities of the Chairman to maintain an appointed IG; and

work with the IG to establish a memorandum of understanding (MOU) or similar mechanism to ensure that all USITC investigative matters that may cover areas also investigated by the IG are coordinated with the IG’s office.

Agency Comments and Our Evaluation

We received written comments on a draft of this report from the USITC Chairman, which are reprinted in appendix III. The USITC Chairman stated that the agency is dedicated to ensuring the proper level of IG oversight and looked forward to working with the IG to achieve enhanced performance and accountability throughout USITC. The Chairman concurred with our recommendations and identified actions taken to implement them.
We agree that USITC has fully addressed our recommendation to establish an MOU to ensure the agency and IG coordination of investigations of possible criminal violations; however, while USITC has taken steps to address the remaining recommendations, further corrective actions are necessary. Specifically, the USITC IG prepared a staffing analysis for fiscal year 2010 audits that allowed him to hire additional staff; however, we continue to recommend that the IG develop a staffing analysis to support the audits in future years identified by the IG’s strategic plan. Also, although USITC prepared an MOU addressing the IG’s authorities and responsibilities regarding access and custody of USITC records, USITC has yet to include this information in the policies and procedures for the offices subject to the IG’s review. Further, USITC prepared an overview of the IG’s authority and responsibilities to be included in the orientation of the USITC Commissioners, but has not yet provided a place for it in the formal orientation of the Commissioners. As agreed with your office, unless you publicly announce the contents of this report earlier, we will not distribute it until 30 days from its date. At that time we will send copies of the report to the USITC Chairman; USITC IG; Deputy Director for Management, Office of Management and Budget; Chairman of the Senate Committee on Finance; and other parties. This report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions or would like to discuss this report, please contact me at (202) 512-9095 or raglands@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix IV.

Appendix I: Scope and Methodology

To determine the extent of oversight provided by the U.S.
International Trade Commission (USITC) Inspectors General (IG) during fiscal years 2005 through 2009, we obtained and reviewed the results of the IGs’ audit reports and investigative activity as reported in the IGs’ semiannual reports to Congress for the 5-year period. We also identified the management challenges identified by the USITC IGs over the 5-year period. In addition, we compared IG audit activity to these reports over the same 5-year period to determine the nature and extent of oversight provided in areas of identified weaknesses. We further identified the programs and offices responsible for carrying out USITC’s mission from relevant performance and accountability reports and compared them with the areas covered by the IG’s audits. We analyzed the budgets and staffing resources for the USITC IGs for fiscal years 2005 through 2009 by reviewing IG planning documents and budget requests to USITC management as well as internal entity budget documents. We also obtained USITC’s overall budget and staffing from the President’s Budget of the United States Government and compared USITC’s budgets for the 5-year period to the IG’s budgets. We also interviewed the current and former acting and temporary IGs who were in office from November 1, 2005, to the present time to gain an understanding of the conditions of their employment, obtained Office of Personnel Management documents to verify their employment status, and gained an understanding of the level of audit oversight provided based on available resources. To determine how the role of the IG is addressed in the governance and management of USITC, we reviewed existing policies and procedures regarding the governance and management of USITC for accountability and regarding the IG; interviewed the Commissioners; and obtained information from the former acting and temporary IGs as well as the current IG. 
We also reviewed the statutory roles and responsibilities of the IG for independent audits and investigations as provided by the IG Act and noted where the activities of USITC governance were not consistent with the independence principles of the act. We reviewed the activities of USITC management regarding requirements for IG access to information by analyzing information obtained through interviews with USITC program directors, the former IGs, the USITC Chairman, former Chairmen, and the Commissioners. We also reviewed internal USITC documents related to the deliberation of the roles and responsibilities of the IGs. To obtain information about the investigative case referred to the Department of Justice (DOJ), we interviewed the IG Counsel at the Treasury IG for Tax Administration, Internal Revenue Service, who provided legal counsel to the USITC IG office. We conducted this performance audit from November 2009 to October 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Major USITC Operations, Offices, and Roles and Responsibilities Information about the structure and activities of the U.S. International Trade Commission (USITC) is shown in table 4. Additional USITC offices support the work of the five major operations shown in table 4: The Office of the Administrative Law Judges holds hearings and makes initial determinations in investigations of intellectual property–based imports. The Office of the General Counsel serves as the chief legal advisor. The Office of the Director of Operations provides supervision of USITC offices that provide the five major operations. 
The Office of External Relations develops and maintains a liaison between USITC and its diverse external customers. The Office of the Chief Information Officer provides information technology leadership including information security. The Office of the Director of Administration compiles and monitors the annual budget, and provides services for human capital, procurement, facilities management, and physical security. The Office of Equal Employment Opportunity administers the affirmative action program and advises the Chairman and the Commissioners on equal employment issues. The Office of the Secretary coordinates hearings and meetings and is responsible for official record keeping.
Appendix III: Comments from the Chairman, U.S. International Trade Commission
Appendix IV: GAO Contact and Staff Acknowledgments
Staff Acknowledgments
In addition to the contact named above, Jackson Hufnagle, Assistant Director; Clarence Whitt; Francis Dymond; Jacquelyn Hamilton; and Cynthia Jackson made key contributions to this report.
Inspectors general (IG) are to provide independent and objective oversight; however, the United States International Trade Commission (USITC) has relied on acting and temporary IGs for an extended period of time. GAO was asked to determine (1) the extent of oversight provided by the USITC IG, (2) the budget and staffing resources available for oversight, and (3) how the role of the IG is addressed in the governance of USITC. To accomplish these objectives, GAO reviewed USITC IG reports and budgets for fiscal years 2005 through 2009, and relevant policies and procedures regarding governance and accountability. GAO also interviewed the USITC Chairman, Commissioners, current and former acting and temporary IGs, and office directors. The IG Act of 1978, as amended (IG Act), requires IGs to provide independent audits and investigations of the programs, offices, and activities in their respective federal entities. However, during the 5-year period reviewed, the USITC IG office conducted no audits and had no investigative case files or investigative reports of USITC. The IG office's oversight activities consisted primarily of monitoring and reviewing the work of independent public accountants (IPA) who conducted annual mandatory audits of USITC's financial statements and information security programs and practices. The most recent peer review of the USITC IG office's audit quality concluded that an opinion could not be rendered on the audit organization because no audits had been conducted by the IG in the past 5 years. The IG Act requires the designated federal entity heads to appoint an IG and provide adequate budgetary resources and sufficient staff. Both the lack of an appointed IG and constrained IG office budgets and staffing resources contributed to the limited oversight of USITC. From November 1, 2005, through December 5, 2009, USITC relied on acting IGs and a temporary IG to provide oversight. 
During this period the IG position was vacant for 17 months with no acting or temporary IG while USITC relied on the Assistant IG for Audits to provide oversight. Between fiscal years 2005 and 2009, the USITC budget increased about 23 percent, but the IG office budget resources remained relatively flat with funds only available for IPA-conducted audits and two staff during the last 4 years reviewed. The lack of comprehensive audit plans by the acting and temporary IGs to fully communicate their resource needs to USITC contributed to inadequate IG office resources and resulted in limited oversight. In fiscal year 2010, USITC appointed a Senior Executive Service-level IG to address requirements of the IG Reform Act of 2008. Also, consistent with the act, USITC provided a fiscal year 2010 IG office budget based on discussions with the current IG, which increased staffing and was certified by the IG as adequate. The IG stated that future oversight may require additional resources, which we believe can be communicated and justified by a staffing analysis as part of IG audit planning. The IG Act provides each IG with protections of independence including the authority for access to all entity documents and records. In addition, the IG is required to refer cases with potential violations of federal criminal law to the Attorney General. We found instances where the governance structure did not fully support the temporary USITC IG's responsibilities. Specifically, during 2009, the temporary IG was unable to obtain timely access to sensitive contract documents because USITC's policies and procedures did not clearly provide for IG access to such documents. The orientation book for the Commissioners, who may not have prior federal service, does not contain information about the USITC IG's authorities and responsibilities. 
In another instance, due to the lack of a formal policy or other agreement with the IG office, the Chairman referred the results of a possible criminal investigation to the Department of Justice (DOJ) without coordinating with the temporary IG, resulting in the potential for duplication of investigative efforts.
Background
Wetlands include swamps, marshes, bogs, and similar areas. They are characterized by three factors: (1) frequent or prolonged presence of water at or near the soil surface, (2) hydric soils that form under flooded or saturated conditions, and (3) plants that are adapted to live in these types of soils. Wetlands are found throughout the United States. They may differ greatly in their physical characteristics; for example, water may not be present on the wetland for part of the year or it may be present year-round. Figures 1 and 2 show two different types of wetlands—a marsh and a bayou. Wetlands provide many important functions for the environment and for society. For example, wetlands improve water quality by removing excess nutrients from sources such as fertilizer applied to agricultural land and municipal sewage and by trapping other pollutants in soil particles; reduce the harmful effects of weather events by storing flood waters and buffering roads and houses from the storm surges caused by hurricanes; and provide important habitat for plants and wildlife—more than one-third of threatened and endangered species, such as the whooping crane and Florida panther, live in wetlands. Over half of the estimated 220 million acres of wetlands in the contiguous United States during colonial times have disappeared, and many of the remaining wetlands have been degraded. This loss of wetlands was primarily caused by agricultural activities and development; significant wetland loss continued through the mid-1970s. While the economic pressure to develop wetlands continues today, according to the U.S. Fish and Wildlife Service, the rate of wetland loss has decreased significantly over the past 30 years. The decrease in the rate of wetlands loss stems from executive actions and legislation, prompted by an increased recognition of the benefits of wetlands.
In 1977, the first executive order for the protection of wetlands directed federal agencies to take action to minimize the destruction of wetlands and to preserve and enhance wetlands’ benefits when carrying out responsibilities such as managing federal lands and facilities or providing federally financed construction. Subsequently, in 1989, the administration set a national goal of no net loss of wetlands to ensure that these valuable resources are protected. The Clean Water Act provides the primary legislative authority for federal efforts to regulate wetlands and other waters of the United States. The act’s objective is to restore and maintain the chemical, physical, and biological integrity of the nation’s waters. The section 404 program under the Clean Water Act is the principal federal program that provides regulatory protection for wetlands. Section 404 generally prohibits the discharge of dredged or fill material in waters of the United States, which include certain wetlands, without a permit from the Corps. Responsibility for issuing these permits is delegated to 38 Corps district offices. The Corps requires the permittee to first avoid discharges of dredged or fill materials into wetlands and then to minimize discharges that cannot be avoided. To replace lost wetland functions, the Corps can require compensatory mitigation as a condition of issuing a permit when damage or degradation of wetlands is unavoidable. Compensatory mitigation can consist of creating a new wetland, restoring a former wetland, enhancing a degraded wetland, or preserving an existing wetland. According to Corps guidance, compensatory mitigation should generally provide, at a minimum, one-to-one functional replacement for a lost wetland. 
When determining the type, size, and nature of compensatory mitigation to be performed, district officials may consider factors such as the wetland’s location, the rarity of the ecosystem, water levels, vegetation, wildlife usage, and the presence of endangered species. In some cases, the loss of the functions of a certain wetland area may be offset by either a larger or a smaller wetland area. For example, on an acreage basis, the ratio should be greater than one-to-one when the lost wetland functions are high and the replacement wetlands provide lower functions. In the absence of information about the functions of a certain site, acreage may be used instead to determine the amount of compensatory mitigation to help achieve the national goal of no net loss. Figure 3 shows land before and after a wetland restoration project. Compensatory mitigation may be performed by permittees or third parties. Third-party mitigation is typically performed by mitigation banks, which are generally private for-profit entities that establish wetlands under agreements with the Corps, or under in-lieu-fee arrangements, which are often sponsored by public or nonprofit entities. Under mitigation banking guidance issued in 1995 and in-lieu-fee guidance issued in 2000, mitigation bank and in-lieu-fee sponsors should have formal, written agreements with the Corps, developed in consultation with EPA and other resource agencies such as the U.S. Fish and Wildlife Service, to provide frameworks for how the mitigation bank or in-lieu-fee arrangement will operate. 
According to Corps guidance, these written agreements should include information on the mitigation site, including the types of wetlands to be developed, the conditions of any existing wetlands, and the geographic area; and site management, such as monitoring plans and reporting protocols on the progress of the mitigation, remedial actions and the parties responsible for performing them if the mitigation is not successful, accounting procedures for tracking payments received from permittees, performance standards for determining ecological success of the mitigation, and provisions for long-term management and maintenance. The Corps and EPA, which have joint enforcement authorities for the section 404 program, established a memorandum of agreement allocating enforcement responsibilities between the two agencies. According to this agreement, the Corps is the lead enforcement agency for all violations of Corps-issued permits, while EPA is the lead enforcement agency when unpermitted activities occur in wetlands. Historically, the Corps has not emphasized enforcement activities. In 1988, we reported that many Corps permits were not monitored for compliance with permit conditions, the Corps districts we visited at that time did not place a high priority on detecting unauthorized impacts to wetlands, and the frequent lack of monitoring could result in the loss of valuable wetland resources. Subsequently, in 1993, we reported that the Corps continued to emphasize permit processing over compliance and enforcement and that funding and staffing shortfalls had inhibited the Corps’ and EPA’s compliance and enforcement activities. More recently, the National Research Council, environmental groups, and others have noted the same lack of emphasis on monitoring and enforcement.
Corps Guidance for Oversight of Compensatory Mitigation Is Sometimes Vague or Internally Inconsistent
The Corps has developed guidance that establishes two primary activities for oversight of compensatory mitigation performed by permittees or third parties. The guidance directs Corps districts to require that permittees performing compensatory mitigation periodically submit monitoring reports that provide information on the status of their mitigation efforts. For mitigation banks and in-lieu-fee arrangements, the guidance directs Corps districts to require sponsors to submit annual monitoring reports. The guidance also suggests that district staff conduct annual on-site inspections of mitigation bank activities but does not specify a frequency for inspections of mitigation activities performed by permittees and in-lieu-fee sponsors. However, we found that parts of the guidance are vague or internally inconsistent, thus limiting their usefulness.
Corps Guidance Establishes Two Primary Oversight Activities for Compensatory Mitigation
The Corps has three primary guidance documents that establish requirements for overseeing compensatory mitigation performed by permittees, mitigation banks, or in-lieu-fee arrangements: (1) The 1999 Army Corps of Engineers Standard Operating Procedures for the Regulatory Program; (2) The Federal Guidance for the Establishment, Use and Operation of Mitigation Banks; and (3) The Federal Guidance on the Use of In-Lieu-Fee Arrangements for Compensatory Mitigation Under Section 404 of the Clean Water Act and Section 10 of the Rivers and Harbors Act. The two primary oversight activities these guidance documents establish are (1) Corps review of monitoring reports submitted by permittees or third parties and (2) the conduct of compliance inspections (field visits) that provide firsthand knowledge of the status of the mitigation.
The guidance documents lay out the following requirements: 1999 Standard Operating Procedures for the Regulatory Program. This document, which highlights current Corps policies and procedures and provides guidance to the districts for setting priorities for their regulatory program activities, calls for Corps districts to require permittees to submit periodic monitoring reports and states that the districts should review all monitoring reports. It also states that compliance inspections are essential to ensure that compensatory mitigation is performed and directs Corps districts to inspect a relatively high percentage of compensatory mitigation performed by permittees to ensure compliance with permit conditions. Districts are to inspect all mitigation banks to ensure compliance with the banking agreement. Federal Guidance for the Establishment, Use and Operation of Mitigation Banks. Developed to provide guidance for establishing, using, and operating mitigation banks, this federal guidance directs the Corps to require that mitigation bank sponsors submit annual monitoring reports to the Corps and other authorizing agencies, which can include the EPA and the U.S. Fish and Wildlife Service, among others. Typically, mitigation banks are to be monitored for 5 years; however, according to the guidance, it may be necessary to extend this period for mitigation banks that require more time to reach a stable condition or that have undertaken remedial activities. In addition, the guidance encourages members of the mitigation banking review team, which the Corps chairs, to conduct regular (e.g., annual) on-site inspections, as appropriate, to monitor bank performance. Federal Guidance on the Use of In-Lieu-Fee Arrangements for Compensatory Mitigation Under Section 404 of the Clean Water Act and Section 10 of the Rivers and Harbors Act. This federal guidance was developed to ensure that in-lieu-fee arrangements can serve as an effective and useful mitigation approach. 
The guidance specifies that there should be appropriate schedules established for regular (e.g., annual) monitoring reports to document funds received, impacts permitted, funds disbursed, types of projects funded, and the success of projects conducted. Furthermore, the guidance calls for the Corps in conjunction with other federal and state agencies to evaluate these reports and conduct regular reviews to ensure that the arrangement is operating effectively and is consistent with agency policy and the specific agreement.
Corps Guidance Is Sometimes Vague or Internally Inconsistent
Although Corps guidance documents establish monitoring reports and compliance inspections as the two primary oversight activities for compensatory mitigation, these guidance documents are sometimes vague or internally inconsistent. Specifically, the guidance is vague on the following key points: The circumstances under which monitoring reports should be required. Although the Corps’ standard operating procedures call for district officials to require and review monitoring reports for mitigation banks and “other substantial mitigation,” they do not define substantial mitigation. We found that Corps districts differed in how they defined “substantial mitigation.” For example, two districts require mitigation reports when the mitigation involves restoring, enhancing, or creating a wetland but not when the mitigation involves preserving a wetland. Another district interpreted “substantial” mitigation to include mitigation projects that generally involved more than one-half acre. The actions district officials should take if reports are not submitted as required. Corps guidance does not address the issue of noncompliance if monitoring reports are not submitted for review. For the files that we reviewed, we found that monitoring reports were provided for 44 percent, or 68 of the 155 cases in which these reports were required.
District officials told us that, because of budget constraints, little time is spent on compliance activities, including following up on the submission of monitoring reports. While three districts that we visited have established a process for tracking due dates for monitoring reports from either permittees or third parties, none of the districts had a system for tracking reports from both. Without such tracking systems, a district official told us that Corps officials may not realize when monitoring reports are due or that the reports were not submitted as required. The information that should be included in a monitoring report. The guidance does not specify what information should be included in monitoring reports submitted by permittees and mitigation banks, despite the importance of these reports as a primary means of overseeing compliance with mitigation requirements. We found that some monitoring reports were only a few pages in length and provided limited information about the site, while other reports were over 50 pages in length, were more comprehensive, and included data on the water levels at the mitigation site, the plants growing at the site, methods for monitoring both the water levels and plant growth, documentation of animals present at the site, and photographs of the site. The Chief of the Regulatory Branch acknowledged that the information submitted in monitoring reports varies significantly and may not always provide the details needed to assess the status of the compensatory mitigation. Furthermore, the guidance is internally inconsistent about the emphasis districts should place on compliance inspections. The Corps’ standard operating procedures state that compliance inspections are essential, and districts should inspect a relatively high percentage of compensatory mitigation sites to ensure compliance with permit conditions, although they do not define what high means. 
The mitigation banking guidance states that districts should inspect all mitigation performed by banks annually to ensure compliance with the banking agreement. The in-lieu-fee guidance does not specify how often compliance inspections should be conducted. However, the standard operating procedures also designate all compliance inspections as a low-priority activity, to be performed only if the goals for other higher-priority work, such as issuing permits, have been achieved. Furthermore, the guidance states that the degree to which districts perform lower-priority work would affect whether districts received additional resources. District officials told us that in the past they were instructed that if they spent too many resources on low-priority activities, their budget would be reduced. Consequently, a number of district officials told us that they are unsure of how much time to spend on compliance inspections. According to officials in one district we visited, for instance, the number of sites they were inspecting was based on a target set in the 1991 guidance because the current guidance is not as specific. Other districts do not have a specific goal for the number of inspections that district officials will conduct for mitigation activities. The Corps is revising its standard operating procedures to include specific performance goals for compliance inspections. Corps officials told us they expect to finalize the revised standard operating procedures by fall of 2005.
Corps Districts Perform Limited Oversight of Compensatory Mitigation
Overall, the Corps districts we visited have performed only limited oversight of compensatory mitigation undertaken by permittees and third parties. For the 152 individual permit files that we reviewed, we frequently found little evidence that the required monitoring reports were submitted or that the Corps conducted compliance inspections.
Although Corps districts provided somewhat more oversight for mitigation performed by the 85 mitigation banks and 12 in-lieu-fee arrangements that we reviewed, we found that oversight was still limited even in these cases. Detailed results of our file review by district are presented in appendix III.
Corps Districts Provide Little Oversight of Mitigation Performed by Permittees
According to our review of 152 permit files where the permittee was responsible for performing the compensatory mitigation, the Corps districts generally provided little oversight either through a monitoring report or a compliance inspection. The Corps required permittees to submit monitoring reports for 89 of the 152 permit files that we reviewed. This ranged from a low of zero in Charleston to a high of 100 percent in Seattle. However, we found that only 21 of the 89 files contained evidence that the Corps actually received these required reports, ranging from a low of zero in two districts to a high of 69 percent in Jacksonville. Furthermore, only 15 percent, or 23 of the 152 permit files, showed that the Corps had conducted a compliance inspection. The actual proportion of permits receiving oversight may be less because several districts could not locate some of the permit files that we requested for review. The following cases illustrate situations in which the Corps required compensatory mitigation as a condition of permit issuance, but the files contained no evidence that the Corps had conducted oversight: In November 1999, the Corps issued a permit authorizing a permittee to install two boat slips and dredge approximately 5,270 feet of a canal in Louisiana, which would affect marsh and other wetland areas. As a condition of issuing this permit, the Corps required the permittee to use the dredge material and establish wetland plants to create a 710-acre intertidal marsh. The Corps also required the permittee to submit annual monitoring reports for 5 years.
The file contained no evidence that the Corps had received any monitoring reports or conducted compliance inspections to determine the status of the required mitigation. In May 2000, the Corps issued a permit authorizing a developer to fill over 430 acres of wetlands to build a residential golf community in Florida. As a condition of issuing this permit, the Corps required the permittee to enhance over 1,000 acres of wetlands and to create 13 acres of wetlands. The Corps also required the permittee to submit annual monitoring reports for 5 years. The file contained no evidence that the Corps had conducted any compliance inspections or received any monitoring reports to determine the status of the required mitigation. In May 2000, the Corps issued a permit authorizing a permittee to fill 77 acres for a landfill in Texas. As a condition of issuing this permit, the Corps required the permittee to create 122 acres of prairie wetlands and to preserve 58 acres of wetlands on-site. The preservation area also included lakes and uplands that were to be managed for wildlife habitat. The Corps required the permittee to submit monitoring reports after 6 months and annually for 5 years. The file contained no evidence that the Corps had conducted any compliance inspections or received any monitoring reports to determine the status of the required mitigation. Moreover, even when Corps officials conducted oversight, they did not always perform suggested follow-up. For example, in one permit file we reviewed, the Corps issued a permit in December 1999 that authorized the excavation of an approximately 15-acre sand and gravel mining project in a wetland area. The Corps required the permittee to restore the mining area to a wetland plant community as the excavation occurred and to submit annual monitoring reports on the progress of this restoration effort. 
The permittee submitted one report to the Corps in March 2000, which stated that the work authorized by the permit had begun but that compensatory mitigation activities could not be completed until excavation was completed. No other monitoring reports were in the file, and the file did not contain any evidence that Corps officials had followed up to determine if the compensatory mitigation was performed. Another file indicated that, in December 2000, a Corps official had inspected a project site to assess the status of the required compensatory mitigation for a permit issued in August 2000. This permit authorized filling about 6 acres of wetlands to build a retail facility. The official’s inspection indicated that construction was almost finished, but the mitigation to enhance 4 acres of wetlands was still under way. The official recommended that the site be revisited at a later date. However, the file contained no evidence that the Corps conducted a follow-up compliance inspection or contacted the permittee to determine the status of the mitigation.
Corps Districts Perform Somewhat Greater Oversight of Mitigation Performed by Mitigation Banks and In-Lieu-Fee Sponsors
Corps districts provided somewhat more oversight for mitigation conducted by third parties, although even in these cases oversight was limited. Of the 85 mitigation banks that we reviewed, the Corps required 60 of the 85 mitigation bank sponsors (71 percent) to submit monitoring reports, and 42 of those 60 files (70 percent) contained evidence that at least one monitoring report had been received. However, only 31 of the 85 mitigation bank files contained evidence that the Corps conducted a compliance inspection. This ranged from a low of 13 percent in the St. Paul district to a high of 78 percent in the Wilmington district.
The following cases illustrate situations where files contained no evidence that the Corps had conducted oversight of the mitigation bank: In February 1999, the Corps approved a mitigation bank in Texas that preserved and protected about 540 acres of swamp. The agreement between the Corps and the mitigation bank sponsor included a requirement that the sponsor submit an annual report on the mitigation bank’s status of operation and maintenance. The file contained no evidence of any monitoring reports submitted by the sponsor or compliance inspections conducted by the Corps. In August 1999, the Corps approved an approximately 360-acre mitigation bank in Louisiana to reestablish a productive, coastal, forested wetland ecosystem on previously converted agricultural lands. The agreement between the Corps and the mitigation bank sponsor included a requirement that the sponsor provide the Corps with annual monitoring reports for at least 5 years and then reports once every 5 years. The file contained no evidence of any monitoring reports submitted by the sponsor or compliance inspections conducted by the Corps. In December 2001, the Corps approved a 2,100-acre mitigation bank in Florida to restore native tree species and enhance the site’s hydrology. The agreement between the Corps and the mitigation bank sponsor required the sponsor to submit annual monitoring reports to the Corps for 4 years. The file contained no evidence of any monitoring reports submitted by the sponsor or compliance inspections conducted by the Corps. For in-lieu-fee arrangements, the Corps required the sponsors of 6 of the 12 in-lieu-fee arrangements that we reviewed to submit monitoring reports. We found that five of the six files contained evidence that the sponsor had submitted at least one report. We also found that the Corps had received monitoring reports from one in-lieu-fee sponsor who was not required to submit a report. 
In addition, the files contained evidence that the Corps had conducted at least one compliance inspection for 5 of the 12 arrangements.
Conflicting Guidance and Limited Resources Contribute to the Corps’ Low Level of Oversight of Compensatory Mitigation
District officials told us that the Corps’ conflicting guidance, which notes that compliance inspections are crucial but makes them a low priority, and limited resources contribute to their low level of oversight of compensatory mitigation activities. According to the Chief of the Regulatory Branch, historically, districts were to issue permits within specified time frames. If those time frames were not met, work in other areas, including compliance, was not to be performed. In addition, funds were allocated primarily for permit processing, with little remaining for other activities. However, Corps headquarters and district officials recognize the importance of oversight. They stated that without a comprehensive oversight program the Corps cannot ensure that compensatory mitigation will occur. In the absence of additional national guidance and resources, some of the districts we visited have decided to take their own steps to improve oversight. For example, Jacksonville district officials increased their compliance inspections of compensatory mitigation performed by permittees; the number of inspections more than tripled from 2003 to 2004 after several years of decline. In addition, New Orleans district officials told us that, in 2003, they began tracking monitoring reports and compliance inspections for mitigation banks, more aggressively followed up to ensure that the mitigation banks submitted required monitoring reports, and increased the number of compliance inspections of the mitigation banks.
Corps Districts Can Take a Variety of Enforcement Actions to Resolve Violations but Rely Primarily on Negotiation

The Corps can take a variety of enforcement actions if required compensatory mitigation is not performed. Possible enforcement actions include issuing compliance orders and assessing administrative penalties, requiring the permittee to forfeit a bond, suspending or revoking a permit, and implementing the enforcement provisions of agreements with third parties to perform mitigation on permittees’ behalf. In addition, the Corps may refer a case to the Department of Justice to bring legal action in federal district court. However, district officials rarely use these enforcement actions, relying primarily on negotiation with permittees or third parties as a first step in the enforcement process to resolve any noncompliance cases they detect. In some cases, district officials want to pursue enforcement actions after detecting instances of noncompliance, but they may not be able to do so because they have limited their enforcement capabilities by not including specific requirements in the permits or third-party agreements.

A Variety of Enforcement Actions Are Available to Corps Districts

When the Corps determines that required compensatory mitigation has not been performed, the type of enforcement action taken would depend on, among other things, whether mitigation is to be carried out by the permittee or by a third party. In cases where the permittee was to perform the mitigation, the Corps may issue a compliance order, assess administrative penalties, require the permittee to forfeit a bond, suspend or revoke a permit, and/or refer the case to the Department of Justice for legal action. Under section 404 of the Clean Water Act and Corps regulations, the Corps may take the following actions:

Issue compliance orders to permittees who violate any condition of their permits.
Each order must specify the nature of the violation, which could include failure to implement mitigation requirements, and specify a time by which the permittee must come into compliance.

Assess administrative penalties in an amount of up to $27,500.

Require the permittee to forfeit a bond, if such a bond was a condition of the permit. The Corps has the authority to require permittees to post a financial bond to assure that they will fulfill all obligations required by the permit, which could include compensatory mitigation.

Suspend a permit for, among other things, a permittee’s failure to comply with the terms and conditions of the permit. A suspension requires the permittee to stop the activities previously authorized by the suspended permit. Following the suspension, the Corps may take action to reinstate, modify, or revoke the permit.

Refer the case to the Department of Justice to bring an action in federal district court seeking an injunction and civil penalties. Cases that are appropriate for judicial actions include violations that are willful, repeated, flagrant, or of substantial impact. Civil penalties may be awarded by the court in an amount of up to $25,000 per day for each violation.

The enforcement actions available to the Corps for a mitigation bank or in-lieu-fee sponsor's failure to carry out mitigation would depend on the provisions that are incorporated into each permit (if applicable) and mitigation bank agreement or in-lieu-fee agreement and would be governed by the terms of the agreement with the Corps. For example, once the Corps has agreed that a permittee’s mitigation requirements will be satisfied by a mitigation bank or in-lieu-fee arrangement, the permittee satisfies these mitigation requirements by submitting the required payment to the third-party sponsor.
Federal guidance for mitigation banks states, “it is extremely important that an enforceable mechanism be adopted establishing the responsibility of the bank sponsor to develop and operate the bank properly.” The guidance states that the bank sponsor is responsible for securing sufficient funds or other financial assurances in the form of, among other things, performance bonds, irrevocable trusts, escrow accounts, and letters of credit. In addition, “the banking agreement should stipulate the general procedures for identifying and implementing remedial measures at a bank.” Similarly, federal guidance states that an in-lieu-fee agreement should contain, among other things, “financial, technical and legal provisions for remedial actions and responsibilities (e.g., contingency fund)”; “financial, technical and legal provisions for long-term management and maintenance (e.g., trust)”; and a “provision that clearly states that the legal responsibility for ensuring mitigation terms are fully satisfied rests with the organization accepting the fee.”

Corps Districts Rely Primarily on Negotiation

While the Corps may take a variety of enforcement actions, the seven districts did not take any enforcement actions in fiscal year 2003, the latest year for which data were available. Instead, district officials rely primarily on negotiation, the first step in the Corps’ enforcement process, to resolve noncompliance issues. In keeping with Corps regulations, district officials told us that, when they find that required compensatory mitigation has not been performed, they first notify the responsible party and gather relevant information to better understand the noncompliance case. They then attempt to negotiate by discussing with the permittees or third parties available corrective actions and time frames for voluntarily bringing the work into compliance.
For example, at one district, officials told us that corrective actions by responsible parties could include working with an environmental organization such as The Nature Conservancy to improve wetlands or developing a traveling exhibit for local schools to educate children about the value of protecting wetlands. According to Corps officials, additional action is generally not needed because responsible parties are willing to work with the Corps to get back into compliance. If district officials do not succeed in voluntarily bringing the responsible party into compliance, they notify the responsible party in writing, laying out the potential enforcement actions available to the Corps and time frames for the party to respond to the letter—the next step toward achieving compliance. District officials told us they generally resort to such actions to achieve compliance only after negotiation has failed because such actions usually take more time to implement. For example, one district official estimated that when the Corps refers a noncompliance case to the Department of Justice, district officials may be occupied for several months. Similarly, according to Corps officials, developers prefer to negotiate with the Corps because it is less time-consuming than pursuing legal solutions. In addition, use of enforcement actions does not always ensure that the required compensatory mitigation will be completed. For instance, Corps district officials told us that, while monetary penalties are an effective tool for drawing attention to compliance and enforcement, the funds collected from these penalties are required by law to go into the general fund of the federal Treasury.
Corps Sometimes Limits Its Own Enforcement Ability

On occasion, district officials who want to pursue enforcement actions after detecting instances of noncompliance may not be able to do so because they have limited their enforcement capabilities; that is, they have not specified the requirements for compensatory mitigation in permits or have failed to establish agreements with third parties. In our file review, we identified several permits that lacked this crucial information about required mitigation. Both the Chief of the Regulatory Branch and district officials stress the importance of including specific mitigation information in permits so that the Corps can take the actions necessary to ensure that required compensatory mitigation occurs. However, officials at some of the districts we visited acknowledged that the lack of enforceable conditions in permits has been a problem, and they have efforts under way, such as permit reviews and standardized permit conditions, to ensure that future permits are issued with the conditions needed to ensure enforceability. Although a review process for permit conditions may be a good idea, we found that some of the districts we visited had issued permits with unenforceable conditions even when a review process was in place. In addition, we found that three districts had not established formal agreements with third parties to document the objectives and implementation of mitigation banks or in-lieu-fee arrangements, as called for in federal guidance. Of the 85 mitigation bank files we reviewed, 21 did not have agreements with the Corps. These mitigation banks were all located in Minnesota, one of two states with mitigation banks that fall under the jurisdiction of the St. Paul District Office. According to district officials, Minnesota had developed state mitigation banking guidelines before the federal guidelines.
Many of the banks in Minnesota were approved by the state program and partially developed before requesting Corps approval. Corps officials told us they had decided not to take additional steps to develop agreements with these mitigation banks. Currently, district officials issue a letter approving all or a portion of the state bank for use in the Corps compensatory mitigation program but do not develop a banking agreement with the bank sponsor. At the time of our review, district officials realized that the lack of mitigation banking agreements limited their enforcement ability and, therefore, were developing banking guidelines to provide more structure for the establishment of mitigation banks in Minnesota. However, they had not yet begun to consistently develop such agreements. For the in-lieu-fee arrangements we reviewed, the Galveston and New Orleans districts have not established formal agreements with in-lieu-fee sponsors. Without such agreements, district officials may not know how many permittees are using these arrangements to fulfill their compensatory mitigation requirements. For example, for the arrangements that he was responsible for monitoring, a Galveston district official could not provide us with information about the number of permittees using the arrangement to perform compensatory mitigation, the total amount of payments the in-lieu-fee sponsor had received, or any oversight activities conducted by the Corps to ensure that the sponsor was performing the required compensatory mitigation. Before our visit, Galveston district officials were unaware that their four in-lieu-fee arrangements were not in compliance with federal guidance and are now attempting to restructure these arrangements. In addition, a Galveston district official told us the district will develop such agreements with the sponsors of future arrangements.
With regard to the in-lieu-fee arrangement in New Orleans that did not have an agreement, officials told us that resource constraints and other priorities had prevented them from establishing a formal agreement with the in-lieu-fee sponsor. This arrangement has collected approximately $1 million since its inception in 1994, but district officials could provide no other information regarding oversight of the arrangement. Until the districts establish formal agreements with third-party sponsors, the Corps does not have sufficient legal recourse if third parties do not perform required compensatory mitigation because the sponsors have not reached agreement with the Corps on what penalties and/or corrective actions will be required to address any problems if the mitigation efforts are not performed. The Corps’ Chief of the Regulatory Branch noted that he would encourage the districts to cease using these in-lieu-fee arrangements to provide compensatory mitigation until such agreements are established.

Conclusions

The Corps’ section 404 program is crucial to the nation’s efforts to protect wetlands and achieve the national goal of no net loss. Although Corps officials acknowledge that compensatory mitigation is a key component of this program, the Corps has consistently neglected to ensure that the mitigation it has required as a condition of obtaining a permit has been completed. The Corps’ priority has been and continues to be processing permit applications. In 1988 and 1993, we reported that the Corps was placing little emphasis on its compliance efforts, including compensatory mitigation, and little has changed. The Corps continues to provide limited oversight of compensatory mitigation, largely relying on the good faith of permittees to comply with compensatory mitigation requirements.
The Corps’ oversight efforts have been further hampered by vague and inconsistent guidance that does not (1) define key terms, (2) specify the actions Corps staff should take if required monitoring reports are not received, or (3) set clear expectations for oversight of compensatory mitigation. Furthermore, district officials have failed to establish agreements with third-party sponsors that would ensure the agency has legal recourse if compensatory mitigation is not performed. Until the Corps takes its oversight responsibilities more seriously, it will not know if thousands of acres of compensatory mitigation have been performed and will be unable to ensure that the section 404 program is contributing to the national goal of no net loss of wetlands.

Recommendations for Executive Action

Given the importance of compensatory mitigation to the section 404 program and its contribution to achieving the national goal of no net loss of wetlands, we recommend that the Secretary of the Army direct the Corps of Engineers to establish an oversight approach that ensures required mitigation is being performed throughout the nation.
As part of this oversight approach, the Corps should:

develop more specific guidance for overseeing compensatory mitigation performed by permittees, mitigation banks, and in-lieu-fee sponsors; in particular, the guidance should define key terms such as “substantial mitigation” and specify the actions Corps officials should take if required monitoring reports are not received;

clarify expectations for oversight of mitigation, including establishing goals for the number of monitoring reports that should be reviewed and the number of compliance inspections that should be conducted; and

review existing mitigation banks and in-lieu-fee arrangements to ensure that the sponsor has an approved agreement with the Corps, as called for in federal guidance; if such agreements are not in place, they should be developed, and the Corps should ensure that future mitigation banks and in-lieu-fee arrangements have these approved agreements.

Agency Comments and Our Evaluation

We provided a draft of this report to the Secretary of Defense for review and comment. The Department of Defense concurred with the report’s findings and recommendations. In its written comments, the Department of Defense stated that the Corps is currently revising its standard operating procedures. According to the department, the revised guidance will provide details on mitigation requirements as well as compliance and enforcement procedures. The department also indicated that the Corps will issue a Regulatory Guidance Letter that will clarify monitoring requirements for compensatory mitigation and include an outline for standardized monitoring reports. In addition, the Department of Defense provided technical comments and clarifications that we incorporated, as appropriate. The Department of Defense’s written comments are presented in appendix IV.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees and Members of Congress; the Secretary of Defense; the Secretary of the U.S. Army; and the Chief of Engineers and Commander, U.S. Army Corps of Engineers. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-3841 or mittala@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

Scope and Methodology

Our review focused on the compensatory mitigation activities at 7 of the U.S. Army Corps of Engineers’ (Corps) 38 districts that implement the section 404 program: Charleston, South Carolina; Galveston, Texas; Jacksonville, Florida; New Orleans, Louisiana; St. Paul, Minnesota; Seattle, Washington; and Wilmington, North Carolina. We selected these districts because they represent different geographic areas of the United States and accounted for over two-thirds of the compensatory mitigation required by individual permits issued in fiscal year 2003. The Charleston, Galveston, Jacksonville, New Orleans, St. Paul, and Wilmington districts were the top districts nationwide in terms of mitigation required by individual permits. While the Seattle district is not one of the top 7 districts nationwide, it is one of the top districts in the western region in terms of required individual permit mitigation, and we included it to provide geographic coverage.
To determine how much compensatory mitigation was required by permits issued by each of the 38 districts, we used data from the Quarterly Permit Data System, which we examined and determined to be sufficiently reliable for selecting the districts to be included in our review. To identify the guidance the Corps has established for overseeing compensatory mitigation, we examined legislation, federal guidance on mitigation banks and in-lieu-fee arrangements, Corps regulations, Corps guidance, and supplemental guidance developed by the districts. We also met with responsible Corps headquarters and district officials to discuss the Corps’ guidance on oversight of compensatory mitigation. To determine the extent to which the Corps oversees compensatory mitigation, we reviewed a total of 249 files. We reviewed 152 permit files issued in fiscal year 2000 where the permittee was responsible for the mitigation. We selected this time frame because sufficient time would have passed for the permittee to begin work on the permitted project, as most of the permits we reviewed were valid for 5 years or less, and for the Corps to have received a monitoring report or conducted a compliance inspection. We also reviewed files for 95 mitigation banks (including 10 mitigation banks approved in 2004) and 12 in-lieu-fee arrangements. The mitigation banks we reviewed had been approved since the mitigation banking guidance was established on November 28, 1995. For in-lieu-fee arrangements, we reviewed the arrangements operating at the seven districts at the time of our site visits. These mitigation bank and in-lieu-fee arrangement files usually provided data on the mitigation activities for multiple permittees, and the mitigation conducted can encompass thousands of acres. Owing to the large number of permits and mitigation banks at some of the districts, we selected a random sample of permit files at Jacksonville and of mitigation banks at the Jacksonville, New Orleans, and St.
Paul districts. These samples were drawn so that the estimates from the samples would have a precision margin of about plus or minus 15 percentage points at the 95 percent confidence level. However, we decided not to project our estimates to the population of permits in the seven districts because some districts were unable to find information for our sampled units, and another district was unable to provide a list of permits within the scope of our sample. Because we had no information on the missing permits, we present results only for the permits and banks that we reviewed. While our results are not representative of the activities of the 38 district offices nationwide, the Corps’ Chief of the Regulatory Branch told us that our findings would likely be indicative of program implementation at the other districts not included in the scope of our review. Tables 1 through 3 detail the permit files, mitigation banks, and in-lieu-fee arrangements reviewed at each of the districts. As listed in table 1, we reviewed all permits that met our criteria (individual permits issued in fiscal year 2000 where the permittee was responsible for performing mitigation) with the following exceptions:

Charleston. The district could not locate three permit files.

Jacksonville. We selected a random sample of 55 of the 167 individual permits identified by Jacksonville officials. The district could not locate complete permit files for 13 of the permits we requested. In addition, 18 of the permits we requested did not meet our criteria; for example, some permits were modified in fiscal year 2000 but were not issued in that year.

New Orleans. District officials could not identify the permits that met our criteria from the district database. Therefore, we asked district officials to select the permits issued in fiscal year 2000 where the permittee was responsible for performing compensatory mitigation and reviewed all of the permits they identified.

St. Paul.
The district could not locate one permit file.

In addition to the file reviews, we spoke with district officials and reviewed relevant documentation to gain a better understanding of the districts’ oversight programs and to gather any information that may not have been available during our file reviews. To identify the enforcement actions the Corps can take if it determines that compensatory mitigation requirements are not being met and the extent to which it takes these actions, we analyzed Corps data on how the district offices resolved instances of noncompliance during fiscal year 2003. In addition, we reviewed relevant regulations and documentation obtained either from Corps officials or identified during our file reviews. We also discussed with headquarters and district officials the enforcement actions available to the Corps and the frequency with which the districts used these actions. In addition, we met with several sponsors of mitigation banks and in-lieu-fee arrangements, as well as subject area experts, such as members of the National Research Council, to gain their views on the Corps’ mitigation program. We conducted our review from June 2004 through September 2005 in accordance with generally accepted government auditing standards.

Corps of Engineers Federal Guidance for Oversight of Compensatory Mitigation

As noted earlier, the Corps has three primary guidance documents for overseeing compensatory mitigation performed by permittees, mitigation banks, and in-lieu-fee arrangements: (1) The 1999 Army Corps of Engineers Standard Operating Procedures for the Regulatory Program (Parts I and II); (2) The Federal Guidance for the Establishment, Use and Operation of Mitigation Banks; and (3) The Federal Guidance on the Use of In-Lieu-Fee Arrangements for Compensatory Mitigation Under Section 404 of the Clean Water Act and Section 10 of the Rivers and Harbors Act.
These documents provide guidance for overseeing compensatory mitigation as described in this appendix.

Standard Operating Procedures for the Regulatory Program (Parts I and II)

The Corps’ 1999 Standard Operating Procedures for the Regulatory Program (Part I) highlights critical policies and procedures that are major factors in administering a consistent program nationwide. It specifies the following:

For all compensatory mitigation: Compliance inspections are essential. Districts will inspect a relatively high percentage of compensatory mitigation to ensure compliance with permit conditions. This is important because many of the Corps permit decisions require compensatory mitigation to offset project impacts. To minimize field visits and the associated expenditures of resources, permits with compensatory mitigation requirements should require applicants to provide periodic monitoring reports and certify that the mitigation is in accordance with permit conditions. Districts should review all monitoring reports. Districts will require all permittees to submit a self-certification statement of compliance. Districts should not be expending funds on surveillance as a discrete activity; surveillance should be performed in conjunction with other activities such as permit or enforcement actions.

For mitigation banks: Districts will inspect all mitigation banks to ensure compliance with the banking agreement.

For in-lieu-fee arrangements: These are not mentioned in Part I of the standard operating procedures.

The Corps’ Standard Operating Procedures for the Regulatory Program (Part II) lists the work that should be prioritized. Part II states that it is not intended to dissuade districts from doing lower priority work; however, all districts should perform the high priority work before expending resources on the lower priority work.
Part II specifies the following for mitigation:

High priority work consists of requiring and reviewing monitoring reports on mitigation banks and other substantial mitigation, including in-lieu-fee approaches, to assure success.

Low priority work consists of compliance inspections for all mitigation and multiple site visits to a mitigation site.

Federal Guidance for the Establishment, Use and Operation of Mitigation Banks

The Federal Guidance for the Establishment, Use and Operation of Mitigation Banks, issued in November 1995, provides policy guidance for the establishment, use, and operation of mitigation banks for the purpose of providing compensatory mitigation. Oversight guidance in this document is as follows:

Members of the mitigation banking review team, which the Corps chairs, are encouraged to conduct regular (e.g., annual) on-site inspections, as appropriate, to monitor bank performance.

Annual monitoring reports should be submitted to the authorizing agencies, which include the Corps. The period for monitoring will typically be 5 years; however, it may be necessary to extend this period for projects requiring more time to reach a stable condition or where remedial activities were undertaken.

Federal Guidance on the Use of In-Lieu-Fee Arrangements for Compensatory Mitigation Under Section 404 of the Clean Water Act and Section 10 of the Rivers and Harbors Act

The Federal Guidance on the Use of In-Lieu-Fee Arrangements for Compensatory Mitigation Under Section 404 of the Clean Water Act and Section 10 of the Rivers and Harbors Act, issued in November 2000, clarifies the manner in which in-lieu-fee mitigation may serve as an effective and useful approach to satisfying compensatory mitigation requirements and meeting the administration's goal of no net loss of wetlands.
Related to oversight, it specifies the following:

There should be appropriate schedules for regular (e.g., annual) monitoring reports to document funds received, impacts permitted, how funds were disbursed, types of projects funded, and the success of projects conducted, among other aspects of the arrangement.

The Corps should evaluate the reports and conduct regular reviews to ensure that the arrangement is operating effectively and is consistent with agency policy and the specific agreement.

File Review Results by Corps District

This appendix presents the results of our file review at seven Corps districts—Charleston, South Carolina; Galveston, Texas; Jacksonville, Florida; New Orleans, Louisiana; St. Paul, Minnesota; Seattle, Washington; and Wilmington, North Carolina. Results of our review for individual permits issued in fiscal year 2000 where permittees were responsible for the mitigation are presented in table 4. Results for mitigation banks approved between the date of the mitigation bank federal guidance (November 28, 1995) and December 31, 2003, are in table 5, and results for in-lieu-fee arrangements operating at the districts at the time of our site visits are in table 6.

Comments from the Department of Defense

GAO Contact and Staff Acknowledgments

GAO Contact
Anu K. Mittal (202) 512-3841 (mittala@gao.gov)

Staff Acknowledgments
In addition to the individual named above, Sherry McDonald, Assistant Director; Diane Caves; Jonathan Dent; Doreen Feldman; Janet Frisch; Natalie Herzog; Cynthia Norris; Karen O’Conor; Anne Rhodes-Kline; Jerry Sandau; Carol Herrnstadt Shulman; Lisa Vojta; and Daniel Wade Zeno made key contributions to this report.
Because wetlands provide valuable functions, the administration set a national goal of no net loss of wetlands in 1989. Section 404 of the Clean Water Act generally prohibits the discharge of dredged or fill material into waters of the United States, which include certain wetlands, without a permit from the U.S. Army Corps of Engineers (Corps). To help achieve the goal of no net loss, the Corps can require compensatory mitigation, such as restoring a former wetland, as a condition of a permit when the loss of wetlands is unavoidable. Permittees can perform the mitigation or pay a third party--a mitigation bank or an in-lieu-fee arrangement--to perform the mitigation. GAO was asked to review the (1) guidance the Corps has issued for overseeing compensatory mitigation, (2) extent to which the Corps oversees compensatory mitigation, and (3) enforcement actions the Corps can take if required mitigation is not performed and the extent to which it takes these actions. The Corps has developed guidance that establishes two primary oversight activities for compensatory mitigation: requiring the parties performing mitigation to periodically submit monitoring reports to the Corps and conducting compliance inspections of the mitigation. However, parts of the guidance are vague or internally inconsistent. For example, the guidance suggests that the Corps place a high priority on requiring and reviewing monitoring reports when "substantial mitigation" is required, but it does not define substantial mitigation. Furthermore, one section of the guidance directs district officials to conduct compliance inspections of a relatively high percentage of compensatory mitigation sites, while another section designates these inspections as a low priority, leading to confusion by Corps officials. Overall, the seven Corps districts GAO visited performed limited oversight to determine the status of compensatory mitigation. 
The Corps required monitoring reports for 89 of the 152 permit files reviewed where the permittee was required to perform compensatory mitigation. However, only 21 of these files contained evidence that the Corps received these reports. Moreover, only 15 percent of the 152 permit files contained evidence that the Corps had conducted a compliance inspection. The Corps districts provided somewhat more oversight for mitigation performed by the 85 mitigation banks and 12 in-lieu-fee arrangements that GAO reviewed. For the 60 mitigation banks that were required to submit monitoring reports, 70 percent of the files contained evidence that the Corps had received at least one monitoring report. However, only 36 percent of the mitigation bank files that GAO reviewed contained evidence that the Corps conducted an inspection. For the 6 in-lieu-fee arrangements that were required to submit monitoring reports to the Corps, 5 had submitted at least one report. In addition, the Corps had conducted inspections of 5 of the 12 arrangements. The Corps can take a variety of enforcement actions if required compensatory mitigation is not performed. These actions include issuing compliance orders, assessing administrative penalties of up to $27,500, requiring the permittee to forfeit a bond, suspending or revoking a permit, implementing the enforcement provisions of agreements with third parties, and recommending legal actions. District officials rarely use these actions and rely primarily on negotiation to resolve any violations. In some cases, GAO found district officials may not be able to use enforcement actions after detecting instances of noncompliance because they have limited their enforcement capabilities. For example, because they did not always specify the requirements of compensatory mitigation in the permits, they had no legal recourse for noncompliance.
Background EPAA Capabilities and Time Frames In September 2009, the President announced a revised approach to missile defense in Europe called EPAA, which consists of phases of increasing capability to be deployed in the 2011, 2015, and 2018 time frames. EPAA serves as the U.S. contribution to the North Atlantic Treaty Organization’s (NATO) mission to protect alliance populations, territory, and forces against ballistic missile threats. As originally announced, EPAA included a fourth phase that was expected to add U.S. homeland defense and expanded regional defense in the 2020 time frame. In March 2013, the Secretary of Defense canceled Phase Four, due, in part, to development delays with a key element of this phase. In 2011, DOD deployed BMD elements to meet the President’s announced time frame for the first phase of EPAA. This provided capability against short- and medium-range threats and included: Aegis BMD-capable ships with the Standard Missile-3 Block IA interceptor stationed in the Mediterranean; an Army Navy/Transportable Radar that is forward-based in Turkey; and a Command, Control, Battle Management and Communications system deployed to an Air Force base in Germany. DOD is in the process of preparing for the second phase of EPAA scheduled for implementation in December 2015. The second phase will include Aegis Ashore based in Romania to provide additional capability against short- and medium-range threats with a more advanced interceptor. The third phase of EPAA is scheduled for late 2018 and will include Aegis Ashore based in Poland to provide capability against medium- and intermediate-range threats. 
Additionally, although Patriot and Terminal High Altitude Area Defense (THAAD) batteries were not BMD elements originally announced as part of the revised approach to missile defense in Europe, DOD officials stated that both elements could deploy to support EPAA as needed, independent of the EPAA phases. Figure 1 summarizes DOD’s proposed time frames and BMD elements for the three phases of EPAA. Figure 2 graphically displays increasing U.S. BMD capabilities introduced in each EPAA phase. BMD and EPAA Roles and Responsibilities A number of stakeholders within DOD have roles and responsibilities in developing, building, deploying, and managing resources for BMD, including MDA, combatant commands, the services, and other organizations. MDA is responsible for the development, acquisition, and testing of BMD system elements in close collaboration with the warfighter community and testing organizations. The combatant commands mainly involved in EPAA implementation are U.S. Strategic Command and U.S. European Command. U.S. Strategic Command’s responsibilities include synchronizing planning for global missile defense in coordination with other combatant commands, the services, MDA, and appropriate agencies, while U.S. European Command has operational control over BMD elements located within its area of responsibility and collaborates with the services that would employ the BMD elements during combat. See appendix III for a summary of key stakeholders across DOD that are involved in the implementation of EPAA. Our Prior Work on BMD In previous reports on BMD, we have identified challenges associated with MDA’s BMD efforts and DOD’s broader approach to BMD planning, implementation, and oversight. 
In an April 2013 report, we found that MDA’s cost baselines were not useful for decision makers to gauge progress because they did not include operating and support costs from the military services and thus were not sufficiently comprehensive. Although MDA reports some operating and support costs in its annual accountability report, we have found that this report does not include services’ costs. DOD partially agreed with our recommendation to include in its resource baseline cost estimates all life-cycle costs including operating and support costs. Subsequently, as we found during this review, MDA is working with the services to jointly develop estimates of operating and support costs for two BMD elements. Further, we reported in 2011 that DOD had not developed a life-cycle cost estimate for BMD in Europe because the department considers EPAA an approach—not a program—that is flexible and would change over time. At that time, we recommended that DOD develop an EPAA life-cycle cost estimate which would allow the department to assess whether its plans were affordable. DOD responded that a more-effective approach would be to prepare element-specific cost estimates. In a January 2011 report, we reported that, though DOD initiated multiple simultaneous efforts to implement EPAA, it faced key management challenges that could result in inefficient planning and execution, limited oversight, and increased cost and performance risks. We also reported that DOD faced planning challenges because the BMD system’s desired performance was not defined using operationally relevant quantifiable metrics—such as how long and how well it can defend—that would provide the combatant commands with needed visibility into the operational capabilities and limitations of the BMD system they intended to employ. 
As noted earlier, DOD generally agreed with our recommendations to provide guidance on EPAA that describes desired end states, develop an integrated EPAA schedule, and adopt BMD performance metrics for durability and effectiveness but to date has not taken any action. In a September 2009 report, DOD generally agreed with our recommendations to perform a comprehensive analysis identifying its requirements for BMD elements and require the establishment of operational units before making elements available for use. In response, DOD completed an analysis of BMD requirements which, according to DOD officials, informed the Army’s process for fielding BMD elements with operational units. For additional GAO reports on BMD, see the Related GAO Products section at the end of this report. DOD Met EPAA Phase One Deployment Time Frame, but Its Warfighter Acceptance Process Does Not Fully Identify and Plan to Resolve Implementation Issues DOD met the presidentially announced time frame to deploy EPAA Phase One capabilities to Europe when DOD positioned EPAA elements in the region, and MDA declared EPAA Phase One architecture to be technically capable in December 2011. According to DOD officials, the BMD capabilities were in place and could have been used if needed. U.S. Strategic Command, through its warfighter operational readiness and acceptance process, used an established set of criteria to assess EPAA Phase One capabilities and formally accepted the EPAA Phase One architecture into the global BMD system in April 2012. However, DOD experienced implementation issues deploying BMD capabilities in Europe, such as incomplete construction of infrastructure, including housing and dining facilities, for soldiers arriving at the EPAA forward-based radar site and incomplete implementing arrangements defining how DOD would operate with allies when certain BMD elements arrived in the host country. 
DOD’s existing warfighter acceptance process does not explicitly require the combatant commands, the services, and MDA to comprehensively identify and develop a plan to resolve such issues before deploying BMD capabilities. Without taking steps to resolve implementation issues prior to deployment, DOD risks encountering similar challenges as it deploys additional BMD capabilities to Europe. DOD Used Its Warfighter Acceptance Process and Criteria to Assess EPAA Phase One Capabilities DOD’s warfighter acceptance process and criteria were used to accept EPAA Phase One capabilities. The manual guiding the process for warfighter acceptance of BMD capabilities indicates that the end state of acceptance is crew knowledge and doctrine, tactics, techniques, and procedures that reflect the reality of the fielded system or ensure that the warfighter can fight with and optimize MDA-delivered BMD capabilities. In essence, the goal of the warfighter acceptance process is to ensure that capabilities can be used as intended when they are delivered. This process—separate from but a companion to MDA’s process for technical capability declaration—informs MDA’s testing so that the warfighter understands the elements’ capabilities and limitations and can more effectively employ BMD capabilities. In addition, the U.S. Strategic Command, in coordination with other combatant commands, develops criteria to assist in the determination of whether to officially accept an element for operational use by the combatant commands. The criteria used during the warfighter acceptance process focus primarily on areas such as effectiveness, suitability, and interoperability. For example, one of the acceptance criteria used to assess initial EPAA capabilities was the extent to which the forward-based radar and Aegis BMD ship were capable of searching for and tracking ballistic missile threats. By comparing these acceptance criteria against BMD test results, U.S. 
European Command and the services were able to better understand the capabilities, limitations, and risks of initial EPAA BMD elements and developed their plans, tactics, and procedures accordingly. In addition to using acceptance criteria, U.S. European Command conducted a separate BMD exercise in Europe with servicemembers operating actual BMD elements to demonstrate the performance of initial EPAA capabilities within the region. Using the results, U.S. European Command and U.S. Strategic Command coordinated to identify technical improvements that could be made, and U.S. Strategic Command accepted the EPAA Phase One architecture into the global BMD system in April 2012. After acceptance, U.S. European Command also conducted a subsequent BMD exercise in May 2013 with U.S. and NATO servicemembers to demonstrate interoperability of initial EPAA capabilities with NATO BMD capabilities. DOD’s Warfighter Acceptance Process Did Not Fully Identify and Resolve Warfighter Implementation Issues before Deploying BMD Elements As discussed above, DOD used its warfighter acceptance process to assess BMD elements dedicated to Phase One of EPAA. However, though the goal of the warfighter acceptance process is, in essence, to ensure that capabilities can be used as intended when they are delivered, this process did not explicitly require the combatant commands, the services, and MDA to comprehensively identify and develop plans for resolving various implementation issues prior to deploying these and other supporting elements to Europe. As a result, DOD experienced three implementation issues related to deploying BMD capabilities to Europe. 
These included: (1) incomplete infrastructure, such as housing and dining facilities, for soldiers arriving at the forward-based radar site in Turkey; (2) lack of defined policies and procedures for sharing BMD radar data across geographic combatant commands; and (3) incomplete implementing arrangements and tactics, techniques, and procedures with allies. Incomplete facilities in Turkey: DOD deployed the forward-based radar to Turkey in December 2011 before completing construction of infrastructure, such as permanent housing, dining, and other facilities for soldiers arriving on the site. According to officials, construction could not be completed prior to deploying the forward-based radar due to compressed deadlines in order to meet the presidentially announced time frame. As a result, Army officials stated that soldiers arrived at the remote mountain-top radar site in winter conditions, and their tent-based expeditionary facilities—though climate controlled and equipped with latrines, showers, and other basic facilities—were initially unable to withstand the conditions. Also, at the time, roads leading to the nearest town were not well-maintained, which created safety challenges and made access to nearby services less efficient. The Army made some improvements after the 2011-2012 winter season, such as replacing the expeditionary facilities with those typically used in Alaska in order to better suit the wintery conditions, but construction of longer-term infrastructure will not begin until mid-2014. Until the permanent facilities are completed, soldiers deployed to the site may continue to face difficult conditions. Further, without a process that accounts for implementation issues such as this, DOD may encounter similar challenges as it deploys additional capabilities to the region. 
Lack of defined policies and procedures for sharing BMD radar data across geographic combatant commands: Sharing BMD element data, such as radar data, can improve missile defense performance, but DOD accepted its most-recently deployed forward-based radar before finalizing policies and procedures that address potential overlapping operational priorities across geographic combatant commands. Subsequent to its deployment of a forward-based radar for EPAA in 2011, DOD deployed another forward-based radar in the operational area of U.S. Central Command in 2013. DOD had begun discussions on the benefits and drawbacks of sharing radar data, but the most-recent deployment proceeded without a decision for how to address these issues, even though both regions face a common threat. According to officials, the first priority for deploying each radar was to support separate missions in their respective areas of responsibility, and a decision to use one radar to support the other radar was a secondary priority and thus did not require resolution prior to deployment. However, officials also stated that sharing radar data between the recently deployed radar with the EPAA forward-based radar could benefit missile defense in Europe and potentially increase operational effectiveness across both geographic combatant commands. DOD guidance states that U.S. Strategic Command is responsible for synchronizing global missile defense planning in coordination with the combatant commands, services, MDA, and appropriate agencies. Guidance further indicates that U.S. Strategic Command, working with the geographic combatant commands, integrates and synchronizes various BMD elements, such as radars. However, the warfighter acceptance process did not explicitly require a comprehensive assessment of whether policies and procedures for sharing BMD radar data are defined. The combatant commands, including U.S. European Command, have made progress on addressing this implementation issue. 
For example, since deployment, U.S. European Command, in coordination with U.S. Strategic Command, has requested technical analysis from MDA in order to determine the extent to which the radars can share information. In addition to the technical analysis, U.S. European Command officials stated that DOD has held several senior-level meetings to discuss policies and procedures for addressing potential overlapping operational priorities and to discuss possible consequences that might occur if the radars are integrated. As a result of not completing such policies and procedures prior to accepting BMD capabilities, DOD continues to operate these radars separately and may face difficulty in sharing the radar data across geographic combatant commands, thus affecting efficient BMD operations in Europe. Incomplete implementing arrangements and procedures for working with allies: DOD’s experience delivering Patriot batteries to Turkey in early 2013 demonstrates some of the difficulties the warfighter could encounter by not finalizing implementing arrangements and tactics, techniques, and procedures with allies prior to deployment. DOD deployed Patriot batteries to Turkey as part of a NATO mission to support the country’s air defense, but this action was not part of EPAA’s first phase. However, U.S. European Command officials indicated that the command shaped this deployment to be similar to future U.S. deployments of Patriot batteries to Europe, and interoperability with NATO is a key aspect of EPAA. According to Army officials, however, host-nation implementing arrangements had not been finalized before the Patriot batteries arrived in Turkey, resulting in the equipment remaining at an airfield for several weeks before it could be deployed for operations. 
In addition, according to Army officials, foreign disclosure issues were not resolved by the time Patriot batteries arrived in Turkey, and initially there were limitations on what intelligence information could be shared with non-U.S. forces. Further, according to Army officials, soldiers had to receive supplemental training to perform the NATO mission, including using NATO tactics, techniques, and procedures, which can differ from those of the United States. According to officials, DOD was aware of these issues but could not address them prior to deploying Patriot batteries to Turkey due to the need to address threats there. Further, officials stated they must also adhere to certain political and host-nation decisions that can affect their ability to address all implementation issues before deployment. Nonetheless, the warfighter acceptance process did not explicitly require a comprehensive assessment of whether these implementing arrangements and procedures were completed prior to deployment. Because implementing arrangements and procedures for working with allies were not completed before deployment, Army officials stated that they spent extensive time working with allies to resolve these implementation issues, which put a strain on the Army’s limited existing resources. DOD’s Process for Accepting New BMD Capabilities Could Result in Future Implementation Challenges DOD recognizes that it has encountered previous implementation challenges related to deploying BMD capabilities to Europe and is taking steps to address them, but these efforts may not prevent future problems. According to U.S. European Command officials, one step they have taken is to establish a synchronization board that tracks EPAA implementation, but this board has focused more on Aegis Ashore than on potential Patriot or THAAD battery deployments. Additionally, the Navy, in coordination with MDA and U.S. 
European Command, is tracking the development and deployment of the Aegis Ashore weapon systems and facilities. However, these efforts are not part of DOD’s warfighter acceptance process, which means that issues raised through these efforts would not necessarily be addressed prior to accepting or deploying additional EPAA capabilities. Also, the acceptance criteria used to assess BMD elements in areas such as effectiveness, suitability, and interoperability do not include a detailed identification of potential implementation issues that may affect operational performance. Further, DOD officials said that they plan to use the existing acceptance process to accept and deploy future EPAA capabilities, but may not use it for other BMD elements that could support BMD operations in Europe, such as THAAD. In using the existing process, which does not explicitly require a comprehensive assessment of various implementation issues prior to deployment, DOD may deploy future BMD capabilities without identifying or developing a plan to resolve implementation issues, such as incomplete host-nation implementing arrangements for Aegis Ashore radar operations. One of the more-difficult challenges facing DOD is completing implementing arrangements for access to frequencies that Aegis Ashore is designed to use. We have previously reported on issues related to frequency access for Aegis Ashore. The two Aegis Ashore elements dedicated to EPAA Phases Two and Three—which are expected to operate in Romania and Poland by 2015 and 2018 respectively—have radars that DOD has designed to use a certain range of frequencies for full operations, including maintenance, periodic testing of equipment, and training of crews. However, according to U.S. European Command officials, some of the frequencies Aegis Ashore is designed to use are reserved for civil use, such as commercial and cell phone services. Accordingly, U.S. 
European Command officials stated that resolving frequency access issues and completing the implementing arrangements for U.S. radars takes time and must be initiated early in the planning process to allow time for completion before DOD deploys Aegis Ashore in Romania. According to U.S. European Command officials, in 2013, DOD and Romanian officials worked together to agree on frequencies available for Aegis Ashore operations so that both the radar and the commercial and cell phone services can coexist, with restrictions, by early 2015. In Poland, however, resolving frequency range access issues is more complex, according to DOD officials. Specifically, the frequency range is more congested in central Europe, which increases the potential for cross-border interference with neighboring countries. In addition, according to U.S. European Command officials, Poland is in the process of issuing new commercial licenses for frequencies within its civil frequency range that overlap with those Aegis Ashore is designed to use. This process may affect the time frame for resolving Aegis Ashore’s access to these frequencies. DOD officials stated that they plan to work closely with their Polish counterparts to resolve these issues prior to the planned deployment of Aegis Ashore in 2018. According to DOD officials, construction of Aegis Ashore can proceed without these issues being resolved. However, the extent to which the radar could be used to train, maintain, and test the capabilities may be limited. As a result, the current warfighter acceptance process, with its focus on meeting operational needs based on criteria that do not comprehensively include potential implementation issues, may not ensure that radar capabilities can be fully used once deployed. In addition, DOD may choose to forward station or deploy Patriot and THAAD batteries to supplement EPAA or NATO operations. U.S. 
Strategic Command officials stated that the warfighter acceptance process will not be applied to Patriot batteries, and they have not yet decided whether the process will be applied to THAAD batteries. Nonetheless, it is important that the warfighter be prepared to operate the batteries and that implementing arrangements be in place. As with the Aegis Ashore radar, if DOD forward-stationed a THAAD battery to Europe, it may need to negotiate implementing arrangements for the THAAD radar to access frequency ranges for periodic testing, maintenance, and training to support BMD operations. Also, if Patriot batteries were sent to Europe, DOD may need to negotiate implementing arrangements and coordinate tactics, techniques, and procedures with allies as it did for the Patriot deployment to Turkey. Because DOD’s experience has shown that developing necessary implementing arrangements may require considerable time, it is important that these types of issues be identified as soon as possible. Unless DOD comprehensively identifies and develops a plan to resolve implementation issues for elements that may deploy to support BMD operations in Europe, DOD risks experiencing challenges that may affect the warfighter’s ability to fully utilize the systems as designed. DOD has encountered various implementation issues when deploying BMD capabilities in Europe and risks encountering similar issues in the future, because there is no explicit requirement within the warfighter acceptance process to ensure that these types of issues are comprehensively identified before the capabilities are deployed. The current warfighter acceptance process does not produce an integrated, holistic identification of implementation issues and, as a result, DOD does not identify and develop a plan to resolve them before BMD capabilities are deployed. Instead, responsibilities are diffused across several organizations. For example, U.S. 
Strategic Command officials view their role as ensuring that EPAA capabilities function within the BMD system worldwide, which includes BMD elements that are not among those dedicated to EPAA. U.S. European Command is responsible for conducting BMD operations in its area of responsibility. The services operate individual BMD elements and provide the manpower and training necessary to do so. Although U.S. Strategic Command considers input from U.S. European Command and the services when defining acceptance criteria, the criteria used to date do not fully assess the extent to which implementation issues may affect operational performance, for instance by limiting the available frequencies for radar use in a particular country or region. As a result, DOD will likely continue to face implementation issues unless a more holistic, integrated view is taken to identify and plan to resolve these issues before BMD capabilities are deployed in Europe, which may result in less-efficient BMD operations. DOD Lacks a Complete Understanding of the Long-Term Operating and Support Costs for BMD Elements in Europe DOD has estimated the long-term operating and support costs for some, but not all, BMD elements in Europe. Initial estimates indicate that these costs could total several billion dollars over the elements’ lifetime, but these estimates do not provide a complete picture of the likely costs. For example, key decisions that have not yet been made—such as what long-term support strategies to adopt and where to forward-station some BMD elements—are likely to change the estimates for THAAD and the forward-based radar. In addition, DOD has not developed a comprehensive, joint estimate of operating and support costs for the two planned Aegis Ashore sites. The lack of complete, long-term operating and support cost estimates for the BMD elements could hinder DOD’s ability to develop budgets and allocate resources for BMD operations in Europe. 
Initial Operating and Support Cost Estimates for THAAD and the Forward-Based Radar Are Likely to Change DOD developed initial estimates of operating and support costs for THAAD and the forward-based radar—both of which are ultimately to be managed by the Army—but these estimates are likely to change as these programs mature and DOD completes business-case analyses and makes key decisions, such as what their long-term support strategies will be and where to forward-station these elements. The Army and MDA have signed a memorandum of agreement and several annexes since 2009 outlining how the two organizations are to manage responsibilities for BMD elements, which includes jointly estimating operating and support costs. In addition, the element-specific annexes direct the development of business-case analyses as part of determining the long-term support strategy for these elements. Further, Army guidance, which is referenced in the annexes, similarly directs the use of business-case analyses as part of selecting the product-support strategy. In January 2012, the Army and MDA estimated that the EPAA forward-based radar would cost $61 million in fiscal year 2014 and $1.2 billion in then-year dollars over its 20-year life. However, this estimate assumes continued contractor support throughout the life of the forward-based radar. Even though forward-based radars have been deployed since 2006, DOD has not yet completed a business-case analysis as part of determining the long-term support strategy as described in an Army regulation and in the forward-based radar annex, which is to include an assessment of alternatives to contractor-provided support over the lifetime of this element. In addition, the Army has made changes to reduce operating and support costs for the forward-based radar, but these changes are not reflected in the $1.2 billion lifetime cost estimate previously cited. 
Army officials stated that the Army and MDA met in November 2013 to begin developing the business-case analysis for the radar, which they intend to complete in fiscal year 2015. However, the annex does not include an explicit requirement that this analysis be completed by a specific time. Also, MDA and Army officials said that completion of this analysis to inform a decision on a long-term support strategy will, in turn, provide information for updating the operating and support cost estimates for the forward-based radar. In December 2012, the Army and MDA estimated operating and support costs for six THAAD batteries for 20 years, totaling $6.5 billion in then-year dollars. This estimate also assumes continued contractor support throughout the life of THAAD. Even though the first two THAAD batteries have been available since early 2012, DOD has not yet completed a business-case analysis as part of determining the long-term support strategy, as provided for in the annex, which is to include an assessment of alternatives to contractor-provided support over the lifetime of THAAD. Specifically, MDA conducted an initial THAAD business-case analysis, which it provided to the Army for comment. The Army did not agree with the analysis because it was not done in accordance with Army regulations. As the Army and MDA work through these disagreements, the THAAD business-case analysis remains incomplete as of December 2013, and there is no firm deadline to complete the analysis. Completion of this analysis to inform a decision on a long-term support strategy will, in turn, provide information for updating the operating and support cost estimates for THAAD. In addition, the estimate of operating and support costs for THAAD assumed that all six batteries would be located in the United States. However, DOD officials stated that they are examining options for forward-stationing some THAAD batteries overseas. 
Doing so would likely increase operating and support costs due to higher operational tempo, contractors that are deployed with the system, additional needed security, life-support facilities such as barracks and a mess hall, and site preparation for the equipment. For example, MDA recently estimated that operating and support costs for one THAAD battery in Guam could be $11 million higher annually than if the battery was located in the continental United States. However, this estimate does not include costs for military personnel, fuel, site activation, transportation, or some contractor costs. Further, costs could be even higher if an element is located at an austere location due to additional costs for site preparation, security, transportation, and some contractor costs. DOD Has Not Developed a Comprehensive Joint Estimate of Operating and Support Costs for Aegis Ashore MDA and the Navy have not developed a comprehensive, joint estimate of the operating and support costs for the two European Aegis Ashore sites over their expected 25-year life span, and it is unclear when such an estimate will be completed. The Navy and MDA completed an annex to a memorandum of agreement in August 2012 describing how they are to jointly manage Aegis Ashore, which notes that the two organizations will collaborate on cost estimating and budget planning. Under the annex, MDA responsibilities include providing funding for construction of certain mission-essential facilities and the operations and support of aspects of the Aegis weapon system through fiscal year 2017. The Navy’s responsibilities include providing funding for construction and operations and sustainment of housing and quality-of-life facilities, as well as the training facility, which is located in the United States. The Navy will be responsible for all Aegis Ashore operating and support costs at the two planned sites beginning in fiscal year 2018. 
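The scale of these operating and support figures can be illustrated with simple arithmetic. The sketch below is illustrative only: the escalation function, the zero-inflation case, and the assumption of a flat annual forward-stationing premium are simplifications introduced here, not DOD's or MDA's estimating methodology; only the input figures ($61 million per year for the forward-based radar and an $11 million annual premium for a THAAD battery in Guam) come from the estimates discussed above.

```python
def then_year_total(base_annual_cost, years, inflation=0.0):
    """Sum an annual cost over a number of years, escalating each
    year by a fixed rate (a crude stand-in for then-year dollars)."""
    return sum(base_annual_cost * (1 + inflation) ** y for y in range(years))

# With no escalation, $61M per year over a 20-year life totals
# $1.22B, close to the $1.2B then-year figure cited for the radar.
flat_total = then_year_total(61e6, 20)

# An assumed flat $11M annual forward-stationing premium adds
# $220M over the same 20-year horizon.
premium_total = then_year_total(11e6, 20)
```

Even this crude model shows why decisions such as forward-stationing matter for budgets: a seemingly modest annual premium compounds into hundreds of millions of dollars over an element's life, which is one reason estimates need updating after key program decisions are made.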
Although the Navy and MDA have agreed to jointly develop cost estimates, and officials from the Navy and MDA have stated these estimates will focus on operating and support costs, their August 2012 memorandum of agreement does not include a clear deadline for first completing a joint cost estimate. This estimate would enable MDA and the Navy to more accurately budget for their respective share of the costs. Although MDA and the Navy have not developed a comprehensive joint estimate, they have individually begun to identify some costs. Specifically, the Navy has estimated that $155 million will be required for manning, operating, and supporting the base facilities from fiscal year 2014 through fiscal year 2018. MDA has reported in its 2013 Ballistic Missile Defense System Accountability Report that operating and support costs for the Aegis Ashore test facility and the two European sites may total $82 million through fiscal year 2018, but this does not include operating and support costs for the entire expected 25-year life. In addition, MDA officials stated that their estimate does not include costs for base facilities, military personnel, or other Navy costs and, therefore, cautioned against combining both Navy and MDA’s individual estimates in order to approximate total Aegis Ashore operating and support costs. By fiscal year 2018, the Navy will assume responsibility for all operating and support costs for the Aegis Ashore sites in Romania and Poland. However, without a comprehensive, joint estimate of the lifetime operating and support costs for the two Aegis Ashore sites that is updated as key program decisions are made, it will be difficult for the Navy to develop accurate budgets for operating and supporting this element of EPAA. More-Comprehensive Cost Estimates Can Aid Budget Development We and the Office of Management and Budget have reported that cost estimates are important to support budget development. 
Specifically, cost estimates can assist decision makers in budget development and are necessary for evaluating resource requirements at key decision points and effectively allocating resources. In addition, Office of Management and Budget guidance containing principles for capital asset acquisitions emphasizes that government agencies should understand all costs in advance of proposing acquisitions in the budget, and notes that agencies should plan for operations and maintenance of capital assets. Further, it is important to fully identify operating and support costs since these costs can be up to 70 percent of a weapon system’s lifetime costs. Major defense acquisition programs within DOD generally follow an acquisition process that includes steps in which cost estimates are developed, including operating and support costs. Due to the acquisition flexibilities MDA has been granted, application of this process has been deferred and MDA follows a separate process for development and acquisition. Nonetheless, DOD has not required completed operating and support cost estimates prior to introducing BMD capabilities in Europe. In addition, existing memorandums of agreement and related annexes between MDA and the services, while they require the completion of business-case analyses for the forward-based radar and THAAD, do not clearly require that these analyses be completed in a timely manner to support a decision on long-term support strategies before introducing capabilities. Similarly, these memorandums of agreement also do not clearly require developing estimates in a timely manner, such as before capabilities are introduced, or updating those estimates to support budget development after long-term support strategies or other key program decisions—such as whether to forward-station certain elements overseas—are made. 
The lack of an estimate and subsequent updates could limit decision makers' ability to identify the resources that will be needed over the long term to support the planned investment in the system's capabilities.

Conclusions

DOD has made a substantial investment in BMD, and its initial deployment of capabilities for EPAA proceeded in line with the President's announced timelines. However, the rapid fielding of EPAA has resulted in challenges that, unless DOD takes action, are likely to continue as DOD implements additional capabilities. Because DOD did not fully identify and plan to resolve implementation issues in its acceptance process to date, U.S. Strategic Command, U.S. European Command, and the services have had to rush to secure and emplace the resources needed to support the capabilities DOD has already deployed. Without identifying the resources, implementing arrangements, infrastructure, and other items that need to be in place before deploying additional EPAA capabilities, DOD may continue to face challenges in operating BMD elements as it moves forward with the future phases of EPAA. In addition, if DOD does not also take action to identify and plan to resolve these types of implementation issues for all current and future BMD capabilities that could support BMD operations in Europe, DOD is likely to experience additional implementation challenges. Similarly, the department's commitment to EPAA implementation has proceeded without a full understanding of the related long-term operating and support costs, thereby lessening assurance of the approach's sustainability through all phases. Although the services and MDA have begun to estimate operating and support costs, there are no firm deadlines for completing and revising estimates as the programs mature and key decisions are made, such as completing business-case analyses to support decisions on long-term support strategies or where the BMD capabilities may be forward-stationed.
Making such decisions and updating the estimates accordingly would enable the services and MDA to more accurately develop budgets for their respective share of the costs. Further, the lack of a comprehensive, joint estimate of operating and support costs for Aegis Ashore can make it difficult for the Navy and MDA to develop budgets to cover these costs. Without completed and updated estimates for the long-term operating and support costs of BMD elements in Europe, the department and congressional decision makers may not be fully aware of the resources that will be needed over time to support DOD's commitment of providing BMD capabilities to Europe.

Recommendations for Executive Action

To improve DOD's ability to identify and resolve implementation issues and to improve budgeting for long-term operating and support costs of BMD elements in Europe, we recommend that the Secretary of Defense take the following four actions. To ensure that BMD capabilities can be used as intended when they are delivered, in coordination with the Chairman of the Joint Chiefs of Staff, direct U.S. Strategic Command to identify and develop a plan to resolve implementation issues prior to deploying and operating future BMD capabilities in Europe. U.S. Strategic Command should work in consultation with U.S. European Command and the services to resolve implementation issues such as infrastructure, policies and procedures to address potential overlapping operational priorities if radars are integrated across geographic combatant commands, host-nation implementing arrangements, and any other key implementation issues.
To identify resources needed to support its plans for providing BMD capabilities in Europe and to support budget development, direct the Under Secretary of Defense for Acquisition, Technology and Logistics to require and set a deadline for the following three actions: completing a business-case analysis for the forward-based radar to support a decision on the long-term support strategy, and updating the joint MDA and Army estimate for long-term operating and support costs after a decision on the support strategy is made; completing a business-case analysis for THAAD to support a decision on the long-term support strategy, and updating the joint MDA and Army long-term operating and support cost estimate after this and other key program decisions, such as where the THAAD batteries are likely to be forward-stationed, are made; and completing a joint MDA and Navy estimate of the long-term operating and support costs for the two Aegis Ashore sites, and updating the estimates after key program decisions are made.

Agency Comments and Our Evaluation

We provided a draft of this report to DOD and the Department of State for review and comment. DOD provided written comments, which are reproduced in appendix IV; the Department of State did not provide written comments on the report. In its comments, DOD partially agreed with one recommendation and agreed with the three other recommendations. Also, DOD completed a security review of this report and determined that its contents were unclassified and contained no sensitive information. DOD and the Department of State provided technical comments, which we incorporated as appropriate. DOD partially agreed with our recommendation that U.S. Strategic Command, in consultation with U.S. European Command and the services, identify and develop a plan to resolve implementation issues prior to deploying and operating future BMD capabilities in Europe. In its comments, DOD stated that U.S.
Strategic Command does not have the authority or mission to resolve implementation issues, but the services and MDA will work to identify and resolve implementation issues for future BMD capabilities in Europe. DOD further stated that U.S. Strategic Command will also work in consultation with U.S. European Command and the services to resolve integrated air and missile defense requirements and warfighter acceptance criteria, validate element performance and system integration, and advise on cross-combatant command capability optimization and sharing as part of its global missile defense role. We understand that U.S. Strategic Command may not have the authority to directly resolve all implementation issues. However, it does have a role in integrating capabilities across combatant commands, as we discuss in this report. In addition, our recommendation does not state that U.S. Strategic Command should resolve all implementation issues prior to deploying capabilities, but rather that it identify and develop a plan to resolve implementation issues prior to deployment and to do so in consultation with U.S. European Command and the services. As we note in the report, the acceptance criteria used to date focus on effectiveness, suitability, and interoperability; however, the manual describing the acceptance process indicates that prerequisites for credibly assessing operational suitability include assessing whether such things as organization, training, or facilities are defined and in place for BMD elements. While it may be appropriate for U.S. European Command and/or the services to take the lead in resolving some implementation issues, such as ensuring proper infrastructure is in place, U.S. Strategic Command, in its advocacy and integration roles, can help in identifying and planning to resolve some issues, such as advising cross-combatant command capability sharing. Further, U.S.
Strategic Command’s warfighter acceptance process is the only existing high-level forum where all key BMD stakeholders come together to assess operational utility of BMD elements. Therefore, we believe that U.S. Strategic Command, in conjunction with U.S. European Command and the services, can use its position as the warfighter advocate to elevate implementation issues, such as cross-combatant command capability sharing and system integration, to ensure that such issues are identified and that a plan to resolve them is developed. DOD agreed with our recommendation to require and set a deadline for completing a business-case analysis for the forward-based radar to support a decision on the long-term support strategy, and updating the joint MDA and Army estimate for long-term operating and support costs after a decision on the support strategy is made. DOD stated that the business-case analysis will be delivered in late fiscal year 2015 and that the joint cost estimate is updated biennially. The department further stated that if the business-case analysis results substantially change the underlying assumptions of the joint cost estimate, an out-of-cycle joint cost estimate would be conducted. Establishing a target date for completing the business-case analysis is a positive first step, and we believe that DOD needs to be vigilant to ensure that the late fiscal year 2015 date is met in order to be fully responsive to the intent of our recommendation. Doing so will enable DOD to update operating and support cost estimates, which, in turn, can improve budget development. DOD agreed with our recommendation to require and set a deadline for completing a business-case analysis for THAAD to support a decision on the long-term support strategy, and update the joint MDA and Army estimate for long-term operating and support costs after this and other key program decisions, such as where the THAAD batteries are likely to be forward-stationed, are made. 
DOD stated that THAAD is a “surge support” asset for EPAA with no specifically assigned area of responsibility, battery quantities, or locations. DOD further stated that MDA and the Army will support the decision to deploy THAAD assets and any related business-case analysis for projected sites. According to an Army official, conducting a business-case analysis to assess a weapon system's lifetime support strategy and making stationing decisions are two separate, independent decisions although both affect operating and support costs. In other words, a business-case analysis can be completed and a support strategy decided upon without a decision on where the weapon system may be located. The purpose of a business-case analysis is to identify the optimum support concept at the lowest life-cycle cost, and DOD had previously planned to complete a business-case analysis for THAAD by late 2011. We recognized in this report that THAAD could deploy to support EPAA as needed and that options are being examined for forward-stationing some THAAD batteries overseas. We also noted that operating and support costs can account for up to 70 percent of a weapon system's lifetime costs and that these costs are generally higher when a system is stationed overseas. Given that decision makers need to understand and therefore adequately budget for THAAD operating and support costs, we believe it is important for DOD to set a deadline for completing the business-case analysis to support a decision on the long-term support strategy and update the joint estimate of lifetime operating and support costs accordingly. DOD should also update the cost estimate after other key decisions are made, such as where THAAD may be located. Completing these actions would meet the intent of our recommendation.
DOD agreed with our recommendation to complete a joint estimate of the long-term operating and support costs for the two Aegis Ashore sites and update the estimates after key program decisions are made. However, DOD did not set a deadline for completing the estimate, such as before introducing these capabilities in Europe—in late fiscal year 2015 and 2018—as we also recommended. We noted in the report that the operating and support costs will likely be significant and that the Navy will be responsible for all Aegis Ashore operating and support costs at the two planned sites beginning in fiscal year 2018. The lack of a joint estimate of the long-term operating and support costs will make it difficult for the Navy to accurately budget for these costs and can limit decision makers’ ability to identify the resources that will be needed over the long term to support DOD’s planned investment in Aegis Ashore. Therefore, we believe that DOD should set a deadline for completing this estimate in order to meet the intent of our recommendation. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Under Secretary of Defense for Acquisition, Technology, and Logistics, the Commanders of the U.S. Strategic Command and U.S. European Command, the Secretaries of the Army and Navy, the Director of the Missile Defense Agency, and the Secretary of State. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (404) 679-1816 or pendletonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. 
Appendix I: Scope and Methodology

During our review of the Department of Defense's (DOD) implementation of the European Phased Adaptive Approach (EPAA), we examined relevant documentation and met with representatives from numerous agencies and offices. To assess the extent to which DOD has identified and planned to resolve implementation issues before deploying ballistic missile defense (BMD) capabilities to Europe, we reviewed the U.S. Strategic Command document titled Ballistic Missile Defense System (BMDS) Warfighter Capability Acceptance. This document describes the goal of the warfighter acceptance process, which is, in essence, to ensure that capabilities can be used as intended when they are delivered, and culminates in formal acceptance of BMD capabilities by U.S. Strategic Command. We also reviewed key documents, such as the Chairman of the Joint Chiefs of Staff Instruction 3295.01, Policy Guidance for Ballistic Missile Defense Operations, and the Joint Staff Publication 3-01, Countering Air and Missile Threats, which describe DOD's BMD guidance and responsibilities of various organizations, and U.S. Strategic Command's June 2013 Instruction 538-03 on Integrated Air and Missile Defense (IAMD) Warfighter Involvement Process (WIP). We also met with officials from the Office of the Secretary of Defense, the Joint Staff, U.S. European Command and its service components, and U.S. Strategic Command to understand how DOD's process was implemented. In addition, we reviewed U.S. European Command planning documents, briefings on EPAA implementation and results of BMD exercises, and minutes from synchronization board meetings to identify implementation issues and assess the extent to which these issues are related to DOD's acceptance process. We also reviewed Navy instructions and documents from the Navy Ballistic Missile Defense Enterprise and U.S.
Naval Forces Europe to understand how the Navy monitors and addresses technical and implementation issues related to Aegis Ashore for EPAA Phases Two and Three. We reviewed 10th Army Air and Missile Defense Command and 32nd Army Air and Missile Defense Command reports and briefings that described implementation challenges experienced during the deployment of BMD elements to Europe and other regions, and provided an assessment of lessons learned for future BMD element deployments. We also reviewed documents and briefings from the U.S. Air Forces Europe 603rd Air Operations Center to understand whether implementation issues—such as U.S.–NATO command and control relationships—are identified and channeled through U.S. European Command as a part of DOD’s capability acceptance process. We spoke to senior-level officials from the Army, Navy, Air Force, U.S. Strategic Command, U.S. European Command, U.S. Army Europe, U.S. Navy Europe, U.S. Air Forces Europe, Joint Staff, the Office of the Secretary of Defense, and the Missile Defense Agency (MDA) about their participation in the acceptance process, including the selection of acceptance criteria to assess EPAA Phase One BMD elements, identification and resolution of implementation issues prior to accepting EPAA BMD elements, and any planned adjustments to the existing process. Finally, we spoke to senior-level State Department officials to understand their role leading up to the deployment of EPAA Phase One capabilities and overall involvement in subsequent EPAA implementation efforts. We also spoke to senior-level NATO officials to get their perspectives on possible implementation issues related to command and control relationships during NATO-led BMD operations and interoperability among U.S., NATO, and member-nation BMD systems. 
To assess the extent to which DOD has estimated the long-term costs to operate and support BMD elements in Europe, we first reviewed agreements and their annexes between MDA and the Army and between MDA and the Navy regarding how these organizations are to work together to manage the BMD elements, including information on how they are to jointly develop cost estimates. We identified and reviewed documents containing best practices for determining high-quality cost estimates from the Office of Management and Budget and the GAO Cost Estimating and Assessment Guide, which indicate that estimating long-term operations and support costs assists in budget development and the allocation of resources. In addition, we reviewed the Army's regulation on Integrated Logistic Support, which includes guidance on business-case analysis and is referenced in the agreement annexes between MDA and the Army, to identify DOD criteria for conducting business-case analyses to assess alternatives for providing long-term support. We then reviewed documentation of estimates developed by MDA and the services for the BMD elements that are part of EPAA or could be deployed to support EPAA, which include Aegis Ashore, forward-based Army Navy/Transportable Radar, Terminal High Altitude Area Defense (THAAD), Command, Control, Battle Management and Communications, Patriot, and Aegis BMD-capable ships. We focused our assessment on the first three elements, because the services and MDA are sharing the operating and support costs for these elements. We assessed the documentation of the Army and MDA December 2012 joint estimate of operating and support costs for THAAD and the January 2012 joint estimate of operating and support costs for the forward-based Army Navy/Transportable Radar. We interviewed Army and MDA officials to understand the key assumptions underpinning each estimate.
Further, we examined the key issues that could affect these estimates, including DOD proposals for locating THAAD units overseas and the lack of business-case analyses for supporting a decision on the long-term support strategy for each element, which are called for by the BMD element agreements between the Army and MDA and by Army guidance referenced in those agreements. For Aegis Ashore, we confirmed with MDA and Navy officials that the two organizations had not yet jointly developed a comprehensive, long-term estimate. We did, however, assess Navy and MDA documentation of some Aegis Ashore costs that each organization expects to fund over the next 5 years. We did not evaluate the quality of the estimates in this review since we reported in 2011 that six of MDA's life-cycle cost estimates did not meet the characteristics of a high-quality cost estimate. Since our objective for the current review was to assess the extent to which DOD had identified the operating and support costs of BMD elements, documenting the existence or absence of estimates was sufficient for our purposes. We conducted this performance audit from December 2012 to April 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: U.S. Ballistic Missile Defense (BMD) Capabilities Available by European Phased Adaptive Approach (EPAA) Phase

Appendix III: Key Department of Defense (DOD) Stakeholders Involved in Planning and Implementing the European Phased Adaptive Approach

Primary Role in European Phased Adaptive Approach (EPAA)

Provides acquisition policy direction, program guidance, and overall management oversight of the Missile Defense Agency.
Chairs the Missile Defense Executive Board, provides program guidance, and makes recommendations to the Deputy Secretary of Defense on missile defense issues.

A senior-level body that reviews DOD's ballistic missile defense efforts and provides the Under Secretary of Defense for Acquisition, Technology and Logistics or Deputy Secretary of Defense, as necessary, with a recommended ballistic missile defense strategic program plan and feasible funding strategy for approval.

The geographic combatant command whose area of responsibility includes all of Europe (including Russia and Turkey), Greenland, Israel, and surrounding waters. It is the primary geographic combatant command involved in planning for and implementing EPAA. It is assisted in this effort by its service components—principally U.S. Naval Forces Europe, U.S. Army Europe, and U.S. Air Forces Europe.

The geographic combatant command whose area of responsibility includes parts of the Middle East. Coordinates with U.S. European Command to defend against ballistic missile threats originating from its area of responsibility.

Functional combatant command with responsibilities to integrate global missions and capabilities that cross the boundaries of the geographic combatant commands, such as synchronizing planning and coordinating operations support for global missile defense, as well as missile defense advocacy for the combatant commands.

Responsible for providing forces and resources to support fielding of the ballistic missile defense elements and assisting in planning for and managing the operations and maintenance and infrastructure needs of ballistic missile defense elements.

Responsible for the research, development, testing, and acquisition of the integrated ballistic missile defense system, comprised of individual ballistic missile defense elements.
In addition, the Missile Defense Agency is responsible for operating and support costs for some ballistic missile defense elements until this responsibility is undertaken by a military service.

Principal staff assistant and advisor to the Secretary of Defense on operational test and evaluation in DOD. Responsibilities include issuing policy and procedures; reviewing and analyzing results of operational test and evaluation conducted for certain acquisition programs; and other related activities. In the context of the ballistic missile defense system, the director is responsible for conducting effective, independent oversight of operational testing and providing timely assessments to support programmatic decisions and reporting requirements.

Plans and directs independent operational tests and evaluations and provides operational assessments of ballistic missile defense system capability to defend the United States, its deployed forces, friends, and allies against ballistic missiles of all ranges and in all phases of flight. The agency includes representation from service and joint operational test entities.

A service component command is a command consisting of the service component commander and all those service forces, such as individuals, units, detachments, organizations, and installations under the command, including the support forces that have been assigned to a combatant command. The three functional combatant commands are U.S. Special Operations Command, U.S. Strategic Command, and U.S. Transportation Command.

Appendix IV: Comments from the Department of Defense

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the individual named above, Patricia W. Lentini, Assistant Director; Marie A. Mak, Assistant Director; Brenda M. Waterfield; Jennifer S. Spence; Laurie Choi; Virginia A. Chanley; Michael Shaughnessy; Erik Wilkins-McKee; and Amie Steele made key contributions to this report.
Related GAO Products

Missile Defense: Opportunity to Refocus on Strengthening Acquisition Management. GAO-13-432. Washington, D.C.: April 26, 2013.
Missile Defense: Opportunity Exists to Strengthen Acquisitions by Reducing Concurrency. GAO-12-486. Washington, D.C.: April 20, 2012.
Ballistic Missile Defense: Actions Needed to Improve Training Integration and Increase Transparency of Training Resources. GAO-11-625. Washington, D.C.: July 18, 2011.
Missile Defense: Actions Needed to Improve Transparency and Accountability. GAO-11-372. Washington, D.C.: March 24, 2011.
Ballistic Missile Defense: DOD Needs to Address Planning and Implementation Challenges for Future Capabilities in Europe. GAO-11-220. Washington, D.C.: January 26, 2011.
Missile Defense: European Phased Adaptive Approach Acquisitions Face Synchronization, Transparency, and Accountability Challenges. GAO-11-179R. Washington, D.C.: December 21, 2010.
Defense Acquisitions: Missile Defense Program Instability Affects Reliability of Earned Value Management Data. GAO-10-676. Washington, D.C.: July 14, 2010.
Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-10-388SP. Washington, D.C.: March 30, 2010.
Defense Acquisitions: Missile Defense Transition Provides Opportunity to Strengthen Acquisition Approach. GAO-10-311. Washington, D.C.: February 25, 2010.
Missile Defense: DOD Needs to More Fully Assess Requirements and Establish Operational Units before Fielding New Capabilities. GAO-09-856. Washington, D.C.: September 16, 2009.
Ballistic Missile Defense: Actions Needed to Improve Planning and Information on Construction and Support Costs for Proposed European Sites. GAO-09-771. Washington, D.C.: August 6, 2009.
Defense Management: Key Challenges Should be Addressed When Considering Changes to Missile Defense Agency's Roles and Missions. GAO-09-466T. Washington, D.C.: March 26, 2009.
Defense Acquisitions: Production and Fielding of Missile Defense Components Continue with Less Testing and Validation Than Planned. GAO-09-338. Washington, D.C.: March 13, 2009.
Missile Defense: Actions Needed to Improve Planning and Cost Estimates for Long-Term Support of Ballistic Missile Defense. GAO-08-1068. Washington, D.C.: September 25, 2008.
Ballistic Missile Defense: Actions Needed to Improve the Process for Identifying and Addressing Combatant Command Priorities. GAO-08-740. Washington, D.C.: July 31, 2008.
Defense Acquisitions: Progress Made in Fielding Missile Defense, but Program Is Short of Meeting Goals. GAO-08-448. Washington, D.C.: March 14, 2008.
Defense Acquisitions: Missile Defense Agency's Flexibility Reduces Transparency of Program Cost. GAO-07-799T. Washington, D.C.: April 30, 2007.
Missile Defense: Actions Needed to Improve Information for Supporting Future Key Decisions for Boost and Ascent Phase Elements. GAO-07-430. Washington, D.C.: April 17, 2007.
Defense Acquisitions: Missile Defense Needs a Better Balance between Flexibility and Accountability. GAO-07-727T. Washington, D.C.: April 11, 2007.
Defense Acquisitions: Missile Defense Acquisition Strategy Generates Results but Delivers Less at a Higher Cost. GAO-07-387. Washington, D.C.: March 15, 2007.
Defense Management: Actions Needed to Improve Operational Planning and Visibility of Costs for Ballistic Missile Defense. GAO-06-473. Washington, D.C.: May 31, 2006.
Defense Acquisitions: Missile Defense Agency Fields Initial Capability but Falls Short of Original Goals. GAO-06-327. Washington, D.C.: March 15, 2006.
Defense Acquisitions: Actions Needed to Ensure Adequate Funding for Operation and Sustainment of the Ballistic Missile Defense System. GAO-05-817. Washington, D.C.: September 6, 2005.
Military Transformation: Actions Needed by DOD to More Clearly Identify New Triad Spending and Develop a Long-term Investment Approach. GAO-05-962R. Washington, D.C.: August 4, 2005.
Military Transformation: Actions Needed by DOD to More Clearly Identify New Triad Spending and Develop a Long-term Investment Approach. GAO-05-540. Washington, D.C.: June 30, 2005.
Defense Acquisitions: Status of Ballistic Missile Defense Program in 2004. GAO-05-243. Washington, D.C.: March 31, 2005.
Future Years Defense Program: Actions Needed to Improve Transparency of DOD's Projected Resource Needs. GAO-04-514. Washington, D.C.: May 7, 2004.
Missile Defense: Actions Are Needed to Enhance Testing and Accountability. GAO-04-409. Washington, D.C.: April 23, 2004.
Missile Defense: Actions Being Taken to Address Testing Recommendations, but Updated Assessment Needed. GAO-04-254. Washington, D.C.: February 26, 2004.
Missile Defense: Additional Knowledge Needed in Developing System for Intercepting Long-Range Missiles. GAO-03-600. Washington, D.C.: August 21, 2003.
Missile Defense: Alternate Approaches to Space Tracking and Surveillance System Need to Be Considered. GAO-03-597. Washington, D.C.: May 23, 2003.
Missile Defense: Knowledge-Based Practices Are Being Adopted, but Risks Remain. GAO-03-441. Washington, D.C.: April 30, 2003.
Missile Defense: Knowledge-Based Decision Making Needed to Reduce Risks in Developing Airborne Laser. GAO-02-631. Washington, D.C.: July 12, 2002.
Missile Defense: Review of Results and Limitations of an Early National Missile Defense Flight Test. GAO-02-124. Washington, D.C.: February 28, 2002.
Missile Defense: Cost Increases Call for Analysis of How Many New Patriot Missiles to Buy. GAO/NSIAD-00-153. Washington, D.C.: June 29, 2000.
Missile Defense: Schedule for Navy Theater Wide Program Should Be Revised to Reduce Risk. GAO/NSIAD-00-121. Washington, D.C.: May 31, 2000.
Since 2002, DOD has spent over $98 billion developing a ballistic missile defense system to protect the United States, U.S. forces, and allies against inbound threat missiles. In December 2011, DOD deployed the initial phase of a revised approach for Europe, with increased capabilities to be deployed in later phases. GAO has reported on potential risks to DOD's implementation caused by the lack of a coordinated management approach and an absence of life-cycle cost estimates. Given DOD's BMD investment and revised approach, GAO was asked to review EPAA's implementation. GAO evaluated the extent to which DOD (1) identified and planned to resolve implementation issues before deploying BMD capabilities to Europe; and (2) estimated the long-term costs to operate and support BMD elements in Europe. GAO reviewed DOD instructions, manuals, and other documents on the acceptance process and the status of operating and support cost estimates that have been developed to-date, and interviewed cognizant officials. The Department of Defense (DOD) met the presidentially announced time frame to deploy initial ballistic missile defense (BMD) capabilities in Europe under the European Phased Adaptive Approach (EPAA) but did not fully identify and plan to resolve implementation issues before deployment. As a result, DOD experienced implementation issues, such as incomplete construction of housing facilities for soldiers arriving at the EPAA radar site in Turkey and incomplete implementing arrangements defining how to operate with allies when certain BMD elements arrived in the host country. U.S. Strategic Command, in coordination with other combatant commands, developed criteria to assess whether a BMD capability is ready for operational use to ensure that BMD capabilities can be used as intended when they are delivered. 
However, the assessment criteria used during this process focused on effectiveness, suitability, and interoperability areas—such as whether BMD elements can work together to track ballistic missile threats—and did not explicitly require DOD to comprehensively identify and plan to resolve implementation issues prior to deploying these capabilities. DOD plans to continue to use its existing process to accept BMD capabilities planned for Europe in the future. Without identifying and planning to resolve implementation issues before deployment, DOD risks continuing to encounter implementation issues after it deploys additional BMD capabilities in Europe, which may lead to significant delays and inefficiencies. DOD has estimated the long-term operating and support cost estimates for some but not all BMD elements in Europe, and existing estimates could change. Specifically, initial estimates indicate these costs could total several billion dollars over the elements' lifetime, but key decisions that have not been made are likely to change these estimates. Also, DOD has not developed a comprehensive estimate for a key element—Aegis Ashore. In prior work developing cost-estimating best practices, GAO concluded that cost estimates can assist decision makers in budget development and are necessary for evaluating resource requirements at key decision points and effectively allocating resources. Office of Management and Budget guidance also emphasizes that agencies should plan for operations and maintenance of capital assets. In 2012, the Army and the Missile Defense Agency (MDA) estimated the lifetime operating and support costs for two BMD elements, a forward-based radar and terminal high-altitude air defense batteries. However, DOD has not completed business-case analyses for them, which would underpin a decision on long-term support strategies, and has not decided where to station the terminal-defense battery. 
In addition, MDA and the Navy have separately begun to identify some costs but have not developed a comprehensive joint estimate of lifetime operating and support costs for the two planned Aegis Ashore sites. Although MDA and the services agreed to jointly develop estimates of lifetime operating and support costs, there is no explicit requirement to complete business-case analyses to support a decision on long-term product support, and jointly develop cost estimates, before deploying BMD elements in Europe. However, without completed business-case analyses and up-to-date operating and support cost estimates, DOD and decision makers are limited in their ability to develop sound budgets and identify the resources needed over the long term to operate and support BMD elements in Europe.
Background This section provides information on OPA requirements, expenditures for oil pollution research conducted by interagency committee member agencies, and certain other organizations that conduct or coordinate research. The Interagency Committee on Oil Pollution Research Through OPA Congress established the interagency committee to coordinate a comprehensive oil pollution research program among federal agencies and in cooperation with industry, universities, research institutions, state governments, and other nations, as appropriate. It also designated member agencies, authorized the President to designate other federal agencies, and directed that a representative of the Coast Guard chair the interagency committee. The chairman’s duties include reporting biennially to Congress on the interagency committee’s member agencies’ activities related to oil pollution research. As also directed by OPA, the interagency committee was to develop a research plan that: identified member agencies’ roles and responsibilities; assessed the current status of knowledge on oil pollution prevention, response and mitigation technologies, and effects of oil pollution on the environment; identified significant oil pollution research gaps, including an assessment of major technological deficiencies in responses to past oil discharges; established research priorities and goals for oil pollution technology development related to prevention, response, mitigation, and environmental effects; estimated the resources needed for federal agencies to conduct the oil pollution research and development program and timetables for completing research tasks; and identified, in consultation with the states, regional oil pollution research needs and priorities for a coordinated, multidisciplinary program of research at the regional level. 
OPA also directed the chair of the interagency committee to contract with the National Academy of Sciences to (1) provide advice and guidance in the preparation and development of the research plan and (2) assess the adequacy of the plan as submitted and submit a report to Congress on the conclusions of that assessment. The interagency committee prepared the original research plan and, in 1992, submitted it to Congress and the National Research Council—created under the auspices of the National Academy of Sciences and through which the academy provides most of its advice—for their review and comment. The second edition of the research plan was submitted to Congress on April 1, 1997. Interagency Committee Member Agencies’ Expenditures for Oil Pollution Research According to agency officials, since fiscal year 2000, member agencies have spent about $163 million on oil pollution research. Of this total, approximately $145 million came from the Oil Spill Liability Trust Fund authorized by OPA. The largest source of revenue for the trust fund has been a tax collected from the oil industry on petroleum produced in or imported into the United States. The tax, which was $0.05 per barrel when OPA was enacted, expired in 1994 but was reinstated in 2005 and increased to $0.08 per barrel in 2008. Member agencies spent an additional $18 million on oil pollution. Table 1 shows the sources of funding for oil pollution research among seven interagency committee member agencies who reported that they conducted oil pollution research: the Bureau of Ocean Energy Management, Regulation and Enforcement (BOEMRE); the Coast Guard; the Environmental Protection Agency (EPA); the National Aeronautics and Space Administration (NASA); the U.S. Navy; the National Oceanic and Atmospheric Administration (NOAA); and the Pipeline and Hazardous Materials Safety Administration (PHMSA). 
Other Organizations that Conduct or Coordinate Oil Pollution Research After the 1989 Exxon Valdez spill in Prince William Sound, Alaska, at least four states created or expanded their own oil pollution research programs and Congress created an oil pollution research institute. Alaska Division of Spill Prevention and Response. This division was established in 1991, although an official from the Alaska Division of Spill Prevention and Response told us that the state has had an oil pollution control program, which included research, since the 1970s. According to the agency’s Web site, Alaska appropriated a total of $2.5 million in the wake of the Exxon Valdez oil spill to enhance the ability of the state and industry to respond to oil spills. The funds were to be used for research programs directed toward the prevention, containment, cleanup, and amelioration of oil spills in Alaska. To date, more than 30 research and development projects have been completed. California Office of Spill Prevention and Response. This office was created in 1990 and has a variety of responsibilities related to spill prevention and response, including oil spill contingency planning. The office’s research program operated from 2004 through 2010 and supported a total of 38 research projects with a budget of $430,000 annually during this 6-year period. Louisiana Applied and Educational Oil Spill Research and Development Program. Louisiana’s program was established after the Exxon Valdez oil spill. The state created the Louisiana Oil Spill Coordinator’s Office, which, with Louisiana State University, formed the Oil Spill Research and Development Program. The program’s mission was to provide the state of Louisiana with tools related to oil spill prevention, detection, response, and cleanup. According to a program official, from 1993 through 2007, the program provided more than $500,000 per year to public colleges and universities to support a range of research. 
Texas General Land Office Oil Spill Prevention and Response Program. According to a state official, the Texas General Land Office’s Oil Spill Prevention and Response Program has spent $1.25 million per year for oil spill research since 1991. Its research is funded by a fee on oil loaded or unloaded in Texas. Oil Spill Recovery Institute (OSRI). OPA established OSRI for research, education, and demonstration projects to respond to and understand the effects of oil spills in the Arctic and sub-Arctic marine environments, amongst other purposes. OSRI is administered through and housed at the Prince William Sound Science Center, a nonprofit research and education organization in Cordova, Alaska. Funding for OSRI comes from interest on $22.5 million in the Oil Spill Liability Trust Fund. OSRI received more than $1 million from the fund in 2009 and $225,000 in 2010 and expects to receive between $560,000 and $1.3 million in 2011, according to an agency official. In addition, the National Response Team (NRT) coordinates some oil pollution research. NRT is an interagency organization responsible for, among other things, coordinating emergency preparedness and response to oil and hazardous substance pollution incidents. EPA and the Coast Guard serve as its Chair and Vice Chair, respectively. One of NRT’s responsibilities is to monitor “response related research and development, testing and evaluation activities of NRT agencies to enhance coordination, avoid duplication of effort and facilitate research in support of response activities.” Every 2 years NRT’s science and technology committee—which includes, among others, BOEMRE, the Coast Guard, EPA, and NOAA— provides the interagency committee with the information for its biennial reports to Congress. The science and technology committee also meets monthly and member agencies coordinate regularly on oil pollution research projects. 
These meetings allow agencies to leverage each other’s resources to achieve mutually beneficial oil pollution research, according to agency officials. Federal Agencies Have Conducted Oil Pollution Research, but with a Limited Coordination Role by the Interagency Committee According to our analysis of interagency committee reports, federal agencies have conducted at least 144 research projects on oil pollution prevention and response since 2003, but the interagency committee had a limited role in facilitating the coordination of agency efforts. The interagency committee established a joint research plan in 1997 that identified oil pollution risks and research priorities, but it has not updated that plan in light of changes in the oil production and transportation sector. The interagency committee also submitted biennial reports to Congress, as directed, but it has not evaluated member agencies’ progress in addressing research gaps identified in the 1997 research plan; until recently, it also had not revisited the plan, as the National Research Council recommended. Furthermore, since completing the 1997 research plan, the interagency committee has taken limited action, until recently, to foster communication and coordinate research among member agencies and to reach out to stakeholders, such as industry and state organizations. Federal Agencies Have Conducted at Least 144 Research Projects on Oil Pollution Prevention and Response since Completion of the Research Plan According to the interagency committee’s biennial reports, since 2003 member agencies have conducted at least 144 research projects related to preventing or responding to oil pollution. These projects have addressed a range of topics, such as responding to an oil spill by burning oil off the water’s surface (in situ burning), detecting oil in icy waters, predicting oil behavior in deepwater blowouts, and using micro-organisms to remove spilled oil in saltwater marshes. 
As table 2 shows, BOEMRE, the Coast Guard, EPA, and NOAA—4 of the 13 member agencies—accounted for all of the projects reported to Congress. Of the remaining nine member agencies, three agencies conducted research, but their research was not reported in the interagency committee’s biennial reports, and six agencies did not conduct any research. Projects conducted by these agencies and included in the interagency committee’s biennial reports addressed a wide range of topics. For example: BOEMRE: research to develop an aerial oil thickness and mapping system. Based on this research, initiated in 2005, BOEMRE developed a portable aerial sensor to detect and accurately map the thickness and distribution of oil slicks in coastal and offshore waters. The aerial thickness mapping system was deployed for the Deepwater Horizon oil spill and flown over the spill, providing maps of oil thickness. The Coast Guard used these maps to guide mechanical response efforts and dispersant operations and to plan in situ burns, according to Coast Guard officials. In addition, NOAA used this information to validate its model predictions for how the oil would behave in water, to document the potential for the oil to arrive on beaches, and to assess oil infiltration to the shoreline and marshes, according to NOAA officials. Coast Guard: recovery of oil on the sea floor. This project, which is ongoing is intended to develop methods to recover oil located on the bottom of the sea, according to Coast Guard officials. Its first objective is to develop a number of potential methods for detecting the oil and then selecting the most cost effective methods for further development. EPA: research into the biodegradability and toxicity of nonpetroleum oils. Through its ongoing research, EPA has found that the degree to which vegetable oils will biodegrade in the environment depends on a number of factors, including the oil’s chemical structure, according to EPA officials. 
Also, EPA found that vegetable oils can readily biodegrade anaerobically—or without oxygen—suggesting that a new treatment technology could be used for cleaning up a vegetable oil spill. This technology involves sinking the oil into the sediment by adding clay so that the oil rapidly biodegrades under anaerobic conditions with little adverse effects on the ecosystem. Currently, the National Contingency Plan provides that sinking agents may not be used as an oil recovery or mitigation measure, but as a result of this research, EPA is considering proposing an exception for treating vegetable oil spills. NOAA: research into monitoring the effectiveness of chemicals used to disperse oil. This research, completed in 2008, compared the behavior of oils with and without dispersants in different types of sediment from U.S. coastal waters, according to the interagency committee’s 2008–2009 biennial report. While these four agencies’ research projects were discussed in the interagency committee’s biennial reports, three other member agencies also conducted research that was not reported, according to our analysis of information that some agencies provided. In speaking with agency officials, however, we could not determine why the following agencies were omitted from the interagency committee’s biennial reports. PHMSA has administered an oil pollution research program since fiscal year 2002, but none of its projects have been included in the biennial reports. For example, PHMSA has an ongoing project to develop a model for commercial companies to predict the rate at which operating pipelines become weakened and suddenly fracture because of stress and corrosion, and in 2009, PHMSA completed a project examining the risk of plastic pipe failures, according to PHMSA documentation. The Navy and NASA have conducted some oil pollution research, but none of their research efforts were included in the biennial reports. 
For example, the Navy has an ongoing, multiphase project to evaluate the efficacy of equipment used to separate oil from wastewater before the wastewater is discharged from Navy ships. The Navy decided to research this issue because the chemical and physical properties of synthetic lubricants, some of which are denser than water, have posed problems for its oil-water separators, which operate based on the differences in specific gravity between oil and water, according to Navy documentation. Similarly, NASA recently provided funding to an oil pollution detection project through its Gulf of Mexico Initiative. The goal of the project, which is being conducted in partnership with the Naval Research Laboratory and NOAA, is to demonstrate practical applications for oil spill detection from observations of two NASA sensors in low-earth orbit. From these observations, NASA officials said that new methods will be developed for NOAA to use to detect oil spills. NASA officials said they selected this project because it would employ an innovative use of remote sensing technology, not because of its focus on detecting oil spills. Without knowing about these projects, Congress may be less informed when making funding decisions about oil pollution research. The Interagency Committee Coordinated Efforts to Develop the 1997 Research Plan, but until 2009, Took Limited Action to Foster Communication and Coordinate Research The interagency committee completed the research plan mandated by OPA to help guide member agencies’ research on oil pollution prevention and response in 1997. However, once the plan was completed, the interagency committee played a limited role in coordinating member agencies’ efforts. 
The Interagency Committee Developed the 1997 Research Plan through Joint Efforts but Has Not Addressed Some National Research Council Recommendations The interagency committee prepared a research plan required by OPA and submitted it for review to the National Research Council and Congress in 1992. The National Research Council provided its review of the first plan in 1994, and the interagency committee submitted the second edition of the plan to Congress on April 1, 1997. According to the interagency committee’s documentation, the committee conducted a 2-year voluntary interagency effort to address the National Research Council’s recommendations. The interagency committee’s 1997 research plan includes (1) an analysis of the oil production and transportation systems and associated oil pollution risks; (2) an identification of 21 research priorities intended to address oil pollution risks, categorized into three priority levels; (3) an identification of research areas of focus for some member agencies; and (4) an identification of some nonfederal stakeholders. While the interagency committee revised its research plan in order to address the National Research Council’s review, the committee did not fully address all of the council’s recommendations. For example, after reviewing the interagency committee’s first draft research plan, the National Research Council noted the interagency committee should, as part of its activities, comprehensively review and evaluate past and present oil pollution research to help guide federal research efforts and avoid duplication. The interagency committee followed this recommendation, in part, by capturing the results of some member agencies’ oil pollution research in its biennial reports to Congress, but it did not assess whether completed research contributed to advancing the 1997 research priorities; rather, the reports provided only summaries of research projects. 
Without such an assessment, Congress may be less able to provide oversight on the contributions of federal research to prevent and respond to oil spills. Furthermore, while some member agencies maintain Web sites that are accessible to the public and that contain data and reports on oil pollution research that has been conducted, the interagency committee has not assembled or published a comprehensive inventory of all research projects conducted by member agencies, which limits the interagency committee’s ability to evaluate past research. The interagency committee has recently taken steps to inventory member agencies’ research. Specifically, according to Coast Guard documents, in September 2010, the interagency committee chair began to inventory research projects and categorize them according to the 1997 plan’s research priorities. The interagency committee chair told us that this inventory is likely to help the interagency committee determine where to focus future research efforts in response to current and emerging risks. In addition, while OPA did not require the interagency committee to revise its research and technology plan, the National Research Council noted in its review that a comprehensive research plan should be continually reassessed. However, the interagency committee has not revised its 1997 research plan. As a result, the plan does not reflect significant changes in the oil production and transportation sectors or assess current and emerging risks or research priorities. Consequently, knowledge gaps in critical research areas may have been overlooked. For example: The 1997 plan contained 21 research priorities, such as oil spill surveillance and environmental restoration methods, and identified knowledge gaps in these areas, but it did not identify deepwater drilling as a specific research priority. 
However, by 2000, deepwater oil production had surpassed shallow water oil production, and within 5 years of the plan’s completion, oil production in deepwater had tripled, according to data from BOEMRE. The plan did not identify oil spills in icy waters as a risk, although oil production and shipping are expected to increase substantially in the Arctic, according to member agency officials. Coast Guard officials said that although the 1997 plan did not focus on oil spills in deepwater or the Arctic, many of the plan’s research priorities are still relevant for guiding current research. However, most officials from the 13 member agencies we spoke with told us that they either did not know that the interagency committee’s 1997 plan existed or did not use it to guide research; rather, each agency determined its own research priorities based on its mission. For example, EPA used a multiyear plan to guide all of its research, including oil pollution, but its plan did not reference the interagency committee’s 1997 research plan. Recognizing the need for a more active approach, the interagency committee chair told us that the committee began to consider updating the 1997 plan in late 2009 and planned to ask member agency officials to draft components of the revised plan during the summer of 2010. However, a number of member agencies were occupied with responding to the Deepwater Horizon incident, according to agency officials, and were thus unable to begin revising the plan. Coast Guard officials expect drafting of a revised research plan to begin during the summer 2011 and stated that it will take approximately 2 years to update the plan because the interagency committee intends to submit the plan to the National Research Council for its review. Coast Guard officials said that this effort to review and revise could take several years, as it did in the 1990s. 
Furthermore, according to Coast Guard officials, they have not yet decided whether the new research plan will include an evaluation of past research or address research priorities outlined in the 1997 plan. Interagency Committee Has Taken Limited Actions to Foster Communication and Coordination among Member Agencies and Nonfederal Stakeholders As directed by OPA, the interagency committee was to coordinate a comprehensive program of oil pollution research among the member agencies, in cooperation and coordination with industry, universities, research institutions, state governments, and other nations, as appropriate. The interagency committee has helped member agencies collaborate on some occasions. For example, according to an agency official who participates in the interagency committee, the committee played a role in facilitating interagency cooperation between BOEMRE and EPA. These agencies jointly conducted research, completed in 2006, in comparing how laboratory tests of the effectiveness of certain chemicals in dispersing oil in sea water compared with certain larger scale tests at a research facility. According to some member agency officials, however, the interagency committee had taken limited action to foster communication among member agencies between 1997 and 2009, when the interagency committee chair proposed updating the 1997 plan. Although the interagency committee’s meetings have occurred once or twice annually for the past 2 years, they occurred irregularly before then, according to some agency officials. Additionally, member agencies were not consistently represented in the interagency committee. Specifically, five agencies did not have a representative designated to the interagency committee until 2010. An official at one of these agencies told us that he was assigned as the representative to the interagency committee only after the agency had received our request to discuss the interagency committee’s work. 
Furthermore, officials at one agency said that they have never heard of the interagency committee and reported that the agency did not have a representative designated to the interagency committee. In October 2010, to better communicate with interagency committee member agencies, among others, the Coast Guard launched the interagency committee’s Web site, which includes transcripts from past public meetings and biennial reports to Congress. In addition, as directed by OPA, the interagency committee was to cooperate and coordinate with industry, universities, research institutions, state governments, and other nations, as appropriate. With specific regard to states, the interagency committee was to consult with them on regional oil pollution research needs and priorities. The National Research Council echoed these requirements in its recommendations, noting that such work was necessary in order to avoid duplication of research efforts and to enhance coordination and cooperation with those entities. In its 1997 research plan, the interagency committee identified the activities of some stakeholders, including the oil pollution research programs of four states and three industry groups, but interested stakeholders have reported limited contact with the interagency committee. For example: Officials from two of the four state oil pollution research programs we spoke with were unaware of the interagency committee’s existence until we contacted them. Officials from the other two state oil pollution research programs reported having past, albeit inconsistent, interaction with the interagency committee. The committee hosted three public meetings in 2010 to solicit input from nonfederal stakeholders on the direction of a new research plan; however, it announced the meetings only 4 weeks in advance, which may have been insufficient time to obtain participation from a range of stakeholders. 
An official we spoke with from a nonprofit oil pollution research organization had never interacted with the interagency committee until two of the conferences in 2010. By not communicating with key nonfederal stakeholders, the interagency committee may have missed opportunities to coordinate research efforts across sectors. For example, a state official we spoke with said that he is concerned that the interagency committee is not doing a sufficient job to minimize the duplication of research efforts across sectors; he noted that some of the federal and state research recently completed or currently underway is similar to federal and state research completed in the 1990s. Several state officials we spoke with also said that the interagency committee has generally not done a sufficient job of disseminating the results of completed federal research to nonfederal stakeholders, which could help nonfederal research organizations in planning their own research efforts. Furthermore, while the interagency committee’s last biennial report listed workshops or conferences interagency members attended, it did not report on any efforts to consult with key nonfederal stakeholders. In December 2010, Coast Guard officials told us that the interagency committee was considering establishing a subcommittee to coordinate with industry on planning and research, but they had not yet firmed up any plans to do so. Conclusions Like the Exxon Valdez spill in 1989, the Deepwater Horizon incident once again highlighted the need for new knowledge about oil spill prevention and response. The interagency committee completed a research plan required by OPA in 1997 to help guide member agencies’ research on oil pollution prevention and response. 
Federal agencies have conducted at least 144 research projects related to this issue, but the interagency committee, established to develop a comprehensive research and development program on oil spill prevention and response, has been incomplete in its accounting for research projects and has done little until recently to coordinate the federal research effort. The chair of the interagency committee has recognized the need for a proactive approach to coordination, and the committee’s recent effort to inventory member agencies’ research projects is a necessary step to understanding past research. However, this effort will be incomplete without an evaluation of whether this research addressed knowledge gaps identified in the 1997 plan. Without such an evaluation, Congress may be unable to provide effective oversight on the progress made in federal efforts to conduct research on oil pollution prevention and response. Furthermore, Coast Guard officials expect the drafting of a revised research plan to begin during summer 2011, but the revision of the plan has already been delayed because of the Deepwater Horizon incident, and the interagency committee could take several years to complete the planned revision, as it did in the 1990s with the 1997 research plan. Moreover, in the past, the interagency committee has not reached out effectively to identify and consult with key nonfederal stakeholders who could provide insight into the research that may need to be conducted, as it was directed to do by OPA. Without such outreach, the committee may be missing opportunities to advance knowledge across sectors and to avoid duplication of research efforts. Recommendations for Executive Action In order to better identify oil pollution risks, determine research priorities, and coordinate research efforts, we recommend that the Commandant of the U.S. 
Coast Guard direct the chair of the interagency committee to take the following three actions, in coordination with member agencies: Evaluate the contributions of past research to current knowledge on oil pollution prevention and response and report the results of these evaluations, including remaining gaps in knowledge, in its biennial reports to Congress. Provide a status update regarding the revision of the research plan, as well as a schedule for completing the revision, in the next biennial report due in 2012, which will cover 2010 and 2011. Establish a more systematic process to identify and consult with key nonfederal stakeholders on oil pollution risks and research needs on an ongoing basis. Agency Comments and Our Evaluation We provided the departments of Commerce, Defense, Energy, Homeland Security, the Interior, and Transportation; EPA; and NASA with a draft of this report for review and comment. In commenting on this report, the departments of the Interior and Transportation, and EPA provided technical comments, which we incorporated as appropriate. In addition, the Department of Homeland Security concurred with our recommendations and provided a formal response, which we reprinted in appendix II. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees; the Secretaries of Commerce, Defense, Energy, Homeland Security, the Interior, and Transportation; the Administrators of EPA and NASA; the Commandant of the U.S. Coast Guard; and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-3841 or ruscof@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Appendix I: Scope and Methodology To review the extent to which the Interagency Coordinating Committee on Oil Pollution Research (interagency committee) has facilitated the coordination of federal agencies’ oil pollution research efforts, we analyzed biennial reports produced by the interagency committee to assess efforts to identify and set priorities for research needs and reviewed our guidance on interagency collaboration. We interviewed cognizant agency officials on the extent of coordination among committee member agencies and, in September 2010, we attended a public meeting of the interagency committee to observe efforts to coordinate oil pollution research. We also interviewed external stakeholders, including officials from California, Louisiana, and Texas, and the Oil Spill Recovery Institute, a nonprofit research organization. We selected these organizations because all were listed in the interagency committee’s research plan as stakeholders. The findings from the officials we interviewed, however, cannot be generalized to other states or organizations. We also reviewed and analyzed interagency committee documentation to assess efforts to evaluate research projects and determine progress made toward completing research goals. We reviewed committee documentation and interviewed cognizant agency officials about any current and emerging oil pollution risks, as well as how they were identified. To determine the number of research projects conducted by member agencies, we reviewed the interagency committee’s biennial reports to Congress. 
While we intended to count the number of projects conducted since completion of the 1997 research plan, we could not count projects for fiscal years (1) 1997 and 1998 because the biennial report that includes those years did not include any research projects initiated after completion of the research plan; (2) 1999 and 2000 because the interagency committee was not required to report on its progress for those two years in accordance with the Federal Reports Elimination and Sunset Act of 1995, and did not do so; and (3) 2000, 2001, and 2002 because the interagency committee’s biennial reports included publications and not projects. Also, we could not confirm whether individual publications corresponded to a single project. Because of concerns about the availability and reliability of data, we were not able to identify all research projects completed during those years; however, we believe we captured the majority of the projects with our methodology because we were able to interview program officials from each member agency that conducted oil pollution research and confirm our approach and our list of projects with them. We conducted this performance audit from June 2010 to March 2011 in accordance with generally accepted government auditing standards. These standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Appendix II: Comments from the Department of Homeland Security Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, Christine Kehr, Assistant Director; David Bennett; Antoinette Capaccio; Nirmal Chaudhary; Scott Doubleday; Cindy Gilbert; Rich Johnson; Michael Kendix; Carol Herrnstadt Shulman; Vasiliki (Kiki) Theodoropoulos; and Jeremy Williams made key contributions to this report.
Congress passed the Oil Pollution Act in 1990 (OPA). Among other things, OPA established the Interagency Coordinating Committee on Oil Pollution Research (interagency committee) to coordinate an oil pollution research program among federal agencies, including developing a plan, having the National Academy of Sciences review that plan, and reporting to Congress on the interagency committee's efforts biennially. The 2010 Deepwater Horizon explosion and fire led to the largest oil spill in U.S. history, raising new concerns about the effects of oil spills. GAO was asked to assess the extent to which the interagency committee has facilitated the coordination of federal agencies' oil pollution research. (The Chairman, Subcommittee on Energy and Environment, House Committee on Science and Technology, now retired; and Representative Woolsey initiated this request.) In part, GAO analyzed committee documents and biennial reports and interviewed agency officials and nonfederal research entities. Federal agencies have conducted at least 144 research projects on oil pollution since 2003, but the interagency committee has played a limited role in coordinating this research, according to GAO's analysis of interagency committee reports and documents. For example, agencies conducted research on identifying the toxicity of nonpetroleum oils and on recovering oil from the sea floor. The interagency committee issued a research plan mandated by OPA in 1997 that set research priorities. This plan, however, did not fully address the recommendations on a draft plan made by the National Research Council, the organization through which the National Academy of Sciences provides most of its advice. For example, the National Research Council noted that the interagency committee should review and evaluate past and present oil pollution research to help guide federal efforts and to avoid duplication. 
The interagency committee has captured some member agencies' oil pollution research in its biennial reports to Congress, but it has not evaluated whether past research has advanced the 1997 research priorities; instead, the reports summarized projects. Without such an assessment, Congress may be less able to oversee the contributions of federal research to preventing and responding to oil spills. In addition, although OPA did not require that the interagency committee revise its 1997 plan, the National Research Council noted the need to continually reassess a comprehensive research plan. However, the interagency committee has not done so; consequently, the plan does not reflect changes in the oil production and transportation sectors since 1997, such as a significant increase in deepwater drilling. In September 2010, the interagency committee chair began to inventory completed research and categorize research projects according to the 1997 plan's research priorities, and the chair told GAO that the interagency committee will begin to update the 1997 plan in 2011. OPA also directed the interagency committee to coordinate a comprehensive program of oil pollution research among the member agencies, in cooperation with external stakeholders, such as industry, research institutions, state governments, and universities. An official from a member agency told GAO that the committee helped foster interagency cooperation between two agencies comparing two types of testing to determine the effectiveness of certain chemicals in dispersing oil in sea water. However, more generally, the interagency committee took limited action to foster communication among member agencies between 1997 and 2009, when the chair proposed updating the 1997 plan, according to some member agency officials. Although the interagency committee's meetings have occurred once or twice annually for the past 2 years, they occurred irregularly before then. 
Additionally, member agencies were not consistently represented in the interagency committee. In October 2010, to better communicate with interagency committee member agencies, among others, the interagency committee launched a Web site, which provides transcripts from its past public meetings and biennial reports to Congress.
Background In 1975, Congress amended the Voting Rights Act and extended its coverage to protect the voting rights of citizens of certain ethnic groups whose language is other than English. The act’s language minority provisions require states and covered jurisdictions—political subdivisions—that meet the act’s coverage criteria to conduct elections in the language of certain “minority language groups” in addition to English. The act defined these language minorities as persons of Spanish heritage, American Indians, Asian Americans, and Alaskan Natives. Where the applicable minority groups have a commonly used written language, the act requires covered jurisdictions to provide written election materials in the languages of the groups. For American Indians and Alaskan Natives whose languages are unwritten, only oral assistance and publicity, e.g., public information spots on the radio, are required. All covered jurisdictions must provide oral assistance when needed in the minority language. Both written and oral assistance must be available throughout the election process from registration to election day activities and are required for all federal, state, and local elections. According to the Civil Rights Division’s Voting Section, the objective of the act’s bilingual assistance provisions, in the Attorney General’s view, is to enable members of applicable language minority groups to participate effectively in the electoral process. Further, according to the Section, jurisdictions should take all reasonable steps to achieve the goal, but they are not required to provide bilingual assistance that would not further that goal. A jurisdiction need not, for example, provide bilingual assistance to all of its eligible voters if it effectively targets its bilingual program to those in actual need of bilingual assistance. The implementation of the act by states and jurisdictions could vary depending on the extent that the states provide assistance. 
For example, where states provide ballot translations for national and state issues and offices, the covered jurisdictions only have to translate the portions of ballot issues and offices that pertain to them. Where states provide no assistance, the responsibility for assistance falls entirely to the jurisdictions. The act, as amended, contains two sections—4(f)(4) and 203(c)—which provide specific criteria for determining which states and jurisdictions are to be covered by the bilingual voting provisions. The act designates the Attorney General or the Director of the Census to make these determinations (see app. III). In total, 422 jurisdictions in 28 states were covered during 1996. These included three states—Alaska (Alaskan Natives), Arizona (Spanish heritage), and Texas (Spanish heritage)—which were covered statewide (i.e., the act’s provisions apply to all political subdivisions within the state). Figure 1 illustrates the number of covered jurisdictions in each state. Some covered jurisdictions have more than one ethnic group for which they are required to provide minority language voting assistance. Figure 2 shows the number of minority language groups by ethnicity within the 422 covered jurisdictions. The Department of Justice’s Civil Rights Division is to oversee the covered states and jurisdictions’ implementation of the act. Where states and jurisdictions fail to comply with the provisions, the Department of Justice may bring civil action to attain compliance with the bilingual language provisions. Assistance Jurisdictions and States Reported Providing in 1996 Most jurisdictions that reported providing bilingual voting assistance in the 1996 general election said that they provided both written and oral assistance. As shown in figure 3, about 73 percent of the 292 jurisdictions responding reported that they provided both written and oral assistance. 
Seven percent reported that they did not provide bilingual voting assistance for the 1996 general election (see page 13). Moreover, five jurisdictions that reported providing assistance also reported providing assistance to other language minority groups that the act did not require them to assist. Of the 14 jurisdictions that reported providing oral assistance only, 12 were required by the act to provide assistance to American Indian groups. In addition, some jurisdictions that reported providing written and oral assistance actually provided assistance to more than one covered ethnic group, and depending on the group assisted, the type of assistance they provided may have varied. For example, Gila, AZ, reported providing written and oral assistance to Hispanics but only oral assistance to Apache Indians whose language is not written to the extent needed for election translation. Twenty-six of the 28 states surveyed responded. Of the responding states, 12 reported providing bilingual voting assistance. In addition, some states had passed their own legislation requiring some form of bilingual voting assistance (see page 15). Arizona, California, Connecticut, Hawaii, Massachusetts, New Mexico, and Texas reported providing both written and oral assistance. Florida, Michigan, New Jersey, and Rhode Island reported providing written assistance only. And Alaska reported providing oral assistance only. Moreover, two states, California and Hawaii, reported providing assistance to groups that the act did not require them to assist. Written Assistance Reported by Jurisdictions and States As shown in figures 4 and 5, bilingual ballots were the most frequent type of written assistance reported by jurisdictions and bilingual voting instructions were the single most frequent written assistance reported by states. Of the 258 jurisdictions that reported providing written assistance, 231 reported providing bilingual ballots. 
Of the 11 states that reported providing written assistance, 7 reported providing bilingual voting instructions. However, among jurisdictions, the types of bilingual voting assistance they reported providing ranged from ballot assistance alone to all voting materials provided to voters. Appendix IV provides examples of translated voting instructions that were provided to some minority language voters and a portion of a bilingual ballot. Oral Assistance Jurisdictions and States Reported Almost all jurisdictions and states that provided minority language oral assistance did so by hiring bilingual poll and office workers or using the assistance of volunteers. Of the 227 jurisdictions that reported providing minority language oral assistance, 187 reported that they had hired bilingual workers and 35 reported that they used the assistance of volunteers. In addition, 13 jurisdictions reported providing minority language tapes describing the ballot and/or voting instructions. Of the eight states providing bilingual oral assistance, four employed bilingual workers and two hired interpreters to provide assistance. Figures 6 and 7 show the types of oral assistance provided by 227 jurisdictions and 8 states, respectively. Twenty-eight of the 227 responding jurisdictions reporting oral assistance provided this assistance to American Indian groups. In addition, the state of Alaska reported providing oral assistance to American Indian groups in six jurisdictions. Of these 34 jurisdictions, only 4 reported providing bilingual written materials as well as oral assistance to the American Indian groups. Some Covered Jurisdictions Reported Providing No Bilingual Assistance Although the jurisdictions that we surveyed were designated to provide bilingual voting assistance, 20 jurisdictions reported that they did not do so for the 1996 general election. 
They reported not providing assistance because they said that they (1) were unable to locate or identify individuals in their areas needing assistance (5 jurisdictions), (2) were not contacted by individuals in need of assistance or did not know of individuals needing assistance (13 jurisdictions), or (3) believed they had been exempted from providing assistance (2 jurisdictions). Of the 20 jurisdictions that reported not providing assistance, 17 were designated to provide it to American Indian groups and 3 were designated to provide it to Spanish heritage groups. Three of the jurisdictions designated to provide assistance to American Indian groups responded that they had contacted tribal officials to identify those in need of assistance but were told that no need existed. Another jurisdiction said that it had conducted a telephone survey of registered voters but was unable to find anyone in need of assistance. Further, 11 of the 20 jurisdictions indicated that should someone seek assistance, they had interpreters who were on call or could otherwise provide assistance. According to the Civil Rights Division’s Voting Section, one should interpret with care a jurisdiction’s response to the survey that it did not provide bilingual voting assistance. Most of the jurisdictions that indicated they had not provided bilingual voting assistance had relatively few members of the applicable language group, and the Attorney General’s minority language guidelines explain that the objective of the bilingual provisions is “to enable members of applicable language minority groups to participate effectively in the electoral process.” Accordingly, the Section said further inquiry would be needed to determine whether such a jurisdiction has violated the bilingual requirements of the act. 
Some Jurisdictions and States Reported Providing Assistance to Groups That Were Not Required to Be Covered Five jurisdictions and two states reported that in addition to providing assistance to minority language groups, as required under the act, they also furnished assistance to other groups. Table 1 identifies the jurisdictions and states that reported providing assistance to other groups and the groups that they assisted. Some States Have Adopted Their Own Bilingual Voting Assistance Requirements Several states have enacted laws requiring some form of minority language voting assistance during the election process. California, for example, requires that minority language sample ballots be posted in polling places in which the Secretary of State determines such assistance is needed. Also, when a need exists, county clerks are required to make reasonable efforts to recruit election officials fluent in minority languages. The state considers assistance to be needed when 3 percent or more of voting age citizens lack sufficient English skills to vote without assistance, or when citizens or organizations provide information supporting a need for assistance. New Jersey requires that bilingual sample ballots be provided for election districts where Spanish is the primary language for 10 percent or more of the registered voters. Also, two additional election district board members who are Hispanic in origin and fluent in Spanish must be appointed in these districts. In Texas, the election code specifies that bilingual election materials be provided in precincts where persons of Spanish origin or descent comprise 5 percent or more of the population of both the precinct and the county in which the precinct is located. In these covered precincts, the following materials must be presented bilingually: instruction cards, ballots, affidavits, other forms that voters are required to sign, and absentee voting materials. 
In addition, the judge presiding over an election in covered precincts must make reasonable efforts to appoint election clerks who are fluent in both English and Spanish. Also some states, such as North Dakota and Colorado, have laws that entitle non-English speaking electors to have assistance, e.g., for preparing ballots or operating voting machines, when they request it. Costs Jurisdictions and States Reported Incurring to Provide Bilingual Voting Assistance in 1996 In response to our survey questions on the cost of providing bilingual voting assistance, 34 jurisdictions said they reported all costs and 30 jurisdictions said they reported partial costs for 1996 elections. Likewise, two states reported all and five states reported partial bilingual voting assistance costs for 1996 elections. For prior year elections, 29 jurisdictions and 6 states reported data for costs they incurred to provide bilingual voting assistance. Generally, jurisdictions and states said they did not keep track of the costs they incurred to provide the minority language portion of their voting assistance. Further, they are not required to identify such costs. Few Jurisdictions and States Said They Identified Costs for Providing Bilingual Voting Assistance Covered jurisdictions and states are not required to maintain data on their costs of providing bilingual voting assistance. However, a small number of jurisdictions and states reported cost information for providing bilingual voting assistance. About 76 percent of the jurisdictions (see fig. 8) and 42 percent of the states that provided bilingual voting assistance were unable to determine the cost of doing so. Some jurisdiction officials said that their jurisdictions have provided bilingual assistance for so many years that it is just a part of their total election process and they did not bother to keep track of the bilingual assistance costs. 
Most of the jurisdictions that were unable to provide cost data cited as causes the lack of specificity in (1) printers’ billing statements for election materials and (2) their accounting systems. In analyzing the jurisdictions’ responses, we noted that 135 of the 272 jurisdictions reported they were unable to provide cost data for providing written assistance but reported using only bilingual workers or volunteer assistants to provide oral assistance. Figure 9 shows the specific reasons 231 jurisdictions reported being unable to provide cost data. In addition, we contacted three printers of election materials and ballots in Texas to determine whether they could provide information on the cost of publishing the minority language portion of the ballot. None of the printers contacted could provide the costs of the minority portion of the ballot. One printer estimated that for 1996, the minority language portion of the ballot comprised about 25 percent of the total cost of ballots. Jurisdictions’ Reported Costs for Elections in 1996 Of the 272 responding jurisdictions that reported providing bilingual voting assistance, 34 jurisdictions reported the total cost of providing such assistance, of which 6 jurisdictions said they provided oral assistance only but at no additional cost. In addition, 30 jurisdictions reported partial cost data. Table 2 shows the total costs jurisdictions reported they incurred to provide bilingual voting assistance under the act. In addition to the above jurisdictions, table 3 provides information on the 30 jurisdictions that were able to provide partial cost information. States’ Reported Costs for Elections in 1996 Of the 12 state respondents that reported providing bilingual voting assistance, Florida and Hawaii reported total bilingual voting assistance costs for the 1996 elections. Arizona, Massachusetts, Michigan, New Mexico, and Rhode Island provided partial cost data. 
Table 4 presents the cost data reported by the seven states for the 1996 elections. States’ and Jurisdictions’ Reported Costs for Prior Elections For prior election years 1992 to 1995, 29 jurisdictions and 6 states provided cost data. However, the cost data provided may not represent all bilingual costs. The bilingual assistance costs jurisdictions reported for prior years’ elections varied widely. For example, Central Falls City, RI, reported costs of $83 for 1992, $164 for 1993, $175 for 1994, but $0 for 1995. Los Angeles County, CA, reported costs of $451,800 for 1993, $764,900 for 1994, and $292,400 for 1995. Table 5 shows the prior year election costs reported by the jurisdictions. Similarly, the costs reported by the six states for prior election years varied. For example, for 1994, Hawaii reported costs of $610, while New Mexico reported costs of more than $70,000. Four of the states reported they did not incur any election costs in odd-numbered years, while other states did not provide cost information for one or more years. Table 6 shows the prior year election costs reported by the states. We are providing copies of this report to the Chairmen of the House and Senate Committees on the Judiciary and their respective Ranking Minority Members. We will also make copies available to others on request. Major contributors to this report are listed in appendix V. If you have any questions about this report, please call me on (202) 512-8777.
Pursuant to a congressional request, GAO reviewed aspects of the implementation of bilingual language provisions of the Voting Rights Act, focusing on: (1) the types of assistance jurisdictions provided for the 1996 general election; and (2) actual cost that covered jurisdictions incurred to provide bilingual voting assistance in 1996 and prior years, if available. GAO noted that: (1) of the 292 jurisdictions that responded to GAO's survey, 272 reported providing bilingual voting assistance for the 1996 general election; (2) of the 292 respondents, 213 said that they provided both written and oral bilingual voting assistance to their minority language voters, 45 said that they provided written assistance only, 14 said that they provided oral assistance only, and 20 said they did not provide any assistance; (3) with respect to the jurisdictions not providing any assistance, 5 said they tried, but were unable to identify individuals needing assistance, 13 said that no one needed assistance or that no one had ever sought assistance, and 2 believed that they had been exempted from providing assistance; (4) in addition, five jurisdictions and two states reported furnishing bilingual voting assistance to groups that the act did not require them to assist; (5) in addition to assistance provided by jurisdictions, states may also provide assistance, such as translation of state election propositions or translated sample ballots; (6) 12 of the 26 states that responded said that they furnished some bilingual voting assistance; (7) the 14 remaining states reported that they provided no bilingual voting assistance; (8) in addition, some states, such as California (CA) and New Jersey, have adopted their own laws requiring bilingual voting assistance; (9) as the act does not require covered jurisdictions and states to maintain data on the costs of providing bilingual voting assistance, information provided by the surveyed jurisdictions and states on their costs was scant; (10) of the 272 
jurisdictions that reported providing assistance in 1996, 208 were unable to provide information on their costs; (11) of the 64 jurisdictions that reported cost information, only 34 provided information on total costs and the remainder provided partial costs; (12) the 34 jurisdictions' reported costs varied greatly; (13) of the 12 states that provided assistance, only Hawaii and Florida reported their total costs for providing bilingual voting assistance in 1996; (14) Arizona, Massachusetts, Michigan, New Mexico, and Rhode Island (RI) reported partial cost data; (15) only 29 jurisdictions and 6 states provided some data on election year costs for 1992 to 1995; (16) moreover, the amounts jurisdictions reported spending on bilingual voting assistance in prior years varied widely; and (17) the amounts states reported also varied by year.
Background The HVBP program affects Medicare payments to approximately 3,000 acute care hospitals for the inpatient services provided to Medicare beneficiaries. Hospitals are included in the HVBP program if they are paid through Medicare’s Inpatient Prospective Payment System. Thus, hospitals not paid through this system, such as critical access hospitals, are not subject to payment adjustments by the HVBP program. By law, the HVBP program is budget neutral, which means that the total amount of payment increases, or bonuses, that it awards to hospitals deemed to provide higher quality of care must equal the total amount of payment reductions, or penalties, applied to hospitals deemed to provide lower quality of care. To accomplish this, CMS calculates each hospital’s payment adjustment percentage by applying a fixed percentage decrease, and then adding back percentage increases based on the hospital’s assessed quality performance in prior years. As specified in PPACA, the initial percentage reduction grew from 1.0 to 1.5 percent from fiscal year 2013 to fiscal year 2015, and will reach a maximum of 2 percent in fiscal year 2017. The percentage increases added back are based on a hospital’s performance on each quality measure included in the HVBP payment formula. For each of these HVBP quality measures, CMS considers both the results of a hospital’s absolute performance and the changes in its performance over time, and then counts the better result toward the hospital’s quality score. The total quality score is derived from a hospital’s performance on all the HVBP quality measures. If a hospital obtains a percentage increase or supplement from its HVBP total quality score that exceeds the initial percentage reduction, it receives a net increase, or bonus, from HVBP for that year. 
If the increase from its total quality score is smaller than the initial reduction, the hospital receives a net decrease, or penalty, in payments compared to what it otherwise would have received without the HVBP program. The HVBP quality measures are distributed across several different performance categories—known as domains—that comprise a set of related quality measures. The number of domains included in the formula has grown from two (clinical process and patient experience measures) in fiscal year 2013 to four (adding patient outcomes and efficiency to the original two). Each domain consists of multiple quality measures, except for efficiency, which consists solely of the Medicare Spending per Beneficiary measure. Across all of the domains, the number of measures included in the HVBP payment formula has grown from 20 in fiscal year 2013 to 26 in fiscal year 2015. Before quality measures can be added to the HVBP formula, they must first have been publicly reported under the IQR program for at least one year. CMS makes adjustments each year—usually providing several years’ notice—to the measures to be included in the HVBP payment formula in future years and to the relative weights applied to the quality domains in calculating each hospital’s total quality score. For example, in fiscal year 2013, 70 percent of the total quality score was based on clinical process measures. In fiscal year 2015, clinical process measures represented 20 percent of the total score. (See appendix II for a list of all the IQR measures included in the HVBP program.) Once CMS calculates a hospital’s performance across all of the domains and subsequently determines its corresponding bonus or penalty, the inpatient Medicare payment for each discharged patient is adjusted up or down throughout the fiscal year based on the size of the hospital’s bonus or penalty. (For two hypothetical examples, see fig. 1.) Only a portion of the total Medicare payment is affected, however. 
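The mechanics described above—a fixed percentage reduction offset by a quality-based add-back, with the quality score computed as a weighted sum across domains—can be sketched in Python. This is an illustration only, not CMS's published methodology: the domain weights (other than the 20 percent clinical process weight noted for fiscal year 2015), the add-back ceiling, and the 0-to-1 scoring scale are all assumptions.

```python
# Illustrative sketch of an HVBP-style payment adjustment.
# All names, weights, and values are hypothetical examples.

INITIAL_REDUCTION = 0.015  # 1.5 percent reduction in fiscal year 2015

# Hypothetical fiscal year 2015 domain weights (clinical process was
# 20 percent of the total score that year; the others are assumed).
DOMAIN_WEIGHTS = {
    "clinical_process": 0.20,
    "patient_experience": 0.30,
    "patient_outcomes": 0.30,
    "efficiency": 0.20,
}

def total_quality_score(domain_scores):
    """Weighted sum of a hospital's domain scores (each on a 0-1 scale)."""
    return sum(DOMAIN_WEIGHTS[d] * s for d, s in domain_scores.items())

def net_adjustment(domain_scores, max_add_back=0.03):
    """Net payment adjustment: the add-back earned by quality minus the
    initial reduction. Positive is a bonus, negative a penalty.
    max_add_back is an assumed ceiling on the earned increase."""
    add_back = total_quality_score(domain_scores) * max_add_back
    return add_back - INITIAL_REDUCTION

# A hypothetical hospital scoring well on most domains:
scores = {"clinical_process": 0.6, "patient_experience": 0.7,
          "patient_outcomes": 0.8, "efficiency": 0.5}
adj = net_adjustment(scores)
print(f"Net adjustment: {adj:+.4%}")

# The dollar impact scales with the hospital's applicable Medicare payments:
applicable_payments = 20_000_000
print(f"Dollar impact: ${adj * applicable_payments:+,.0f}")
```

Under these assumed parameters, a hospital with a weighted quality score of 0.67 earns a 2.01 percent add-back, for a net bonus of 0.51 percent; a hospital scoring zero on every domain would simply lose the full 1.5 percent initial reduction.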
For example, the HVBP bonus or penalty does not alter certain add-on payments, such as those that compensate hospitals for serving a disproportionate share of low- income patients or for providing medical education. As a result, hospitals caring for large proportions of low-income Medicare or Medicaid patients and major teaching hospitals have a lower proportion of their total Medicare payments affected by their HVBP bonus or penalty, compared to other hospitals that do not receive these add-on payments. Most Hospitals Received Bonuses or Penalties of Less than Half of One Percent Each Year, with Generally Similar Results for Small and Safety Net Hospitals Most hospitals received a bonus or penalty from the HVBP program of less than 0.5 percent of applicable Medicare payments in each of the first three years of the program. Small hospitals and hospitals with better financial performance generally had higher payment adjustments, that is, larger bonuses or smaller penalties. Among the subgroups we analyzed, we found that safety net hospitals received lower payment adjustments compared to hospitals overall, but the gap narrowed over time. Small rural and small urban hospitals had similar or better results than hospitals overall. A Large Majority of Hospitals Received Bonuses or Penalties of Less than Half a Percent Each Year In each of the HVBP program’s first three years, a large majority of hospitals—between 74 percent and 93 percent—received a bonus or penalty of less than 0.5 percent. (See fig. 2.) Roughly the same number of hospitals received bonuses and penalties, with more bonuses awarded in fiscal year 2013 and fiscal year 2015, and more penalties awarded in fiscal year 2014. The amount of the annual median bonuses and median penalties increased slightly each year. The median bonus in 2015 was 0.32 percent of applicable Medicare payments and the median penalty was 0.26 percent. (See table 1.) 
In dollar terms, most of these annual bonuses or penalties were less than $50,000. For example, in fiscal year 2015, 52 percent of hospitals received bonuses or penalties that led to payment adjustments of less than $50,000, and 72 percent of hospitals had payment adjustments of less than $100,000. The size of bonuses or penalties, when measured in dollars, is a function of both the percentage bonus or penalty and the total amount of applicable Medicare payments a hospital is owed. In the aggregate, the HVBP program redistributed about $140 million from hospitals that received penalties to hospitals that received bonuses in 2015. Small Hospital Size and Better Financial Performance Were Associated with Higher Payment Adjustments We found that smaller hospitals generally had higher payment adjustments—that is, larger bonuses or smaller penalties—than larger hospitals in the HVBP program's first three years. Specifically, hospitals with 60 beds or fewer had the highest median payment adjustments in fiscal years 2013 and 2015, from among the five different hospital size categories (by number of beds) that we analyzed. In fiscal year 2015, the overall median payment adjustment for hospitals with 60 beds or fewer was a bonus of 0.38 percent. In contrast, hospitals in the categories with the largest number of beds—those encompassing 201 to 350 beds and more than 350 beds—had the lowest median payment adjustments in fiscal year 2015. (Hospitals with more than 350 beds also had the lowest median payment adjustments in fiscal year 2013, but the differences among several of the categories were small.) See appendix III for the results of our analysis of hospital bed size categories. In addition, we found that hospitals with better financial performance, as measured by net income, generally had higher payment adjustments under the HVBP program.
In each of the HVBP program's first three years, hospitals with the highest net income had higher payment adjustments than hospitals with negative net income. Hospitals with net income of more than 5.0 percent received the highest median bonuses from among the seven net income categories that we analyzed. (See appendix IV.) Hospitals with the lowest net income from among the categories we analyzed—net income below -5.0 percent—had among the lowest median payment adjustments in the HVBP program in fiscal years 2013 and 2014. However, the pattern for this group of hospitals with the lowest net income did not continue for fiscal year 2015, as these hospitals had median payment adjustments that were higher than those of hospitals in some other net income categories. Compared to Hospitals Overall, Safety Net Hospitals Received Lower Payment Adjustments and Small Urban Hospitals Received Higher Payment Adjustments Safety net hospitals consistently had lower median payment adjustments—that is, smaller bonuses or larger penalties—than hospitals overall. These adjustments ranged between 0.07 and 0.12 percentage points lower in the program's first three years, with the smallest gap coming in fiscal year 2015. (See table 2.) Safety net hospitals exceeded hospitals overall in scores for efficiency but had lower scores each year for the other three HVBP domains. (See appendix V.) Therefore, one reason why the gap narrowed in fiscal year 2015 was the addition of the efficiency domain to the HVBP formula in that year. In contrast, small urban hospitals had higher median payment adjustments—that is, larger bonuses or smaller penalties—than hospitals overall during the program's first three years. The greatest difference was in fiscal year 2015, when small urban hospitals had a median payment adjustment 0.22 percentage points higher than hospitals overall.
Small urban hospitals had generally higher scores across each of the HVBP program's performance domains compared to hospitals overall in all three years, with the exception of the patient outcomes domain in fiscal year 2014. Compared to safety net and small urban hospitals, small rural hospitals' median payment adjustments more closely mirrored those of hospitals overall. In two of the program's first three years, the median payment adjustment for small rural hospitals was within 0.02 percentage points of the median for all hospitals, before increasing relative to hospitals overall in fiscal year 2015. Small rural hospitals generally had higher median scores on the patient experience and efficiency domains than hospitals overall and had lower median scores on the clinical process and patient outcomes domains. As with hospitals overall, most safety net, small urban, and small rural hospitals received bonuses or penalties of less than 0.5 percent in each of the program's first three years. (See appendix VI.) However, the proportion of these hospitals with bonuses or penalties of less than 0.5 percent was generally lower than for hospitals overall, with the largest differences in fiscal year 2015. For example, 59 percent of small urban hospitals received payment adjustments of less than 0.5 percent in fiscal year 2015—compared to 74 percent for hospitals overall. In the same year, about 36 percent of small urban hospitals received bonuses of 0.5 percent or greater, compared to 18 percent of hospitals overall. Most Quality Trends Have Not Shifted Noticeably Since Implementation of the HVBP Program, Although the Program Continues to Evolve Our analysis found no apparent shift in HVBP quality measure trends during the initial years of the program, but such shifts could emerge over time as the program implements planned changes. The same pattern held for most quality measures not included in the HVBP program.
The exception was readmissions, where the performance of the same group of hospitals showed a clear shift in trend towards improvement during the initial years of the HVBP program. No Shift in Trends Was Apparent for the HVBP’s Quality Measures in the Program’s Initial Years, but Such Shifts Could Emerge Over Time As the Program Implements Planned Changes While the HVBP program aims to provide an incentive to improve hospitals’ quality of care, preliminary analysis of information from 2013 and 2014—the two years of quality measure results after the program’s implementation that were available at the time of our analysis—shows that it did not noticeably alter the existing trends in hospitals’ performance on any of the quality measures used to determine HVBP payment adjustments that we examined. This lack of apparent change applied to all of the clinical process, patient experience, and outcomes measures included in the program’s payment formula that had sufficient available data points for us to assess. In general, trends observed for each measure before the HVBP program took effect in October 2012 remained largely unchanged after the program’s implementation, as shown by changes over time in the median hospital quality score for each measure. On clinical process measures, hospitals showed improvement that began before implementation of the HVBP program. These measures assess the extent to which hospitals correctly follow certain well-accepted processes to treat patients, for example by selecting an appropriate initial antibiotic for a pneumonia patient. The median scores for all of these clinical process measures increased prior to the implementation of the HVBP program. (See fig. 3.) 
As a result, by the start of the HVBP program in October 2012, the median scores for all clinical process measures included in the program were already at or close to 100 percent, indicating that hospitals consistently followed these treatment procedures before the beginning of the HVBP program, and so there was limited opportunity for hospitals to improve on these measures after the program was implemented. As previously noted, CMS officials have adjusted the HVBP formula so that the weight given to clinical process measures has decreased over time, from 70 percent in 2013 to 20 percent in 2015, with an additional decrease to 5 percent by 2017. For patient experience measures—on which, unlike clinical process measures, hospital scores were neither at nor close to 100 percent—hospitals showed steady, incremental improvement on the measures both before and after implementation of the HVBP program. These measures reflect the responses of hospital patients to survey questions about the quality of their hospital experience, such as how well their pain was controlled. For each of the HVBP patient experience measures, the median hospital score trended steadily upward or, in a few cases, remained the same from one reporting period to the next, with no substantial shift that coincided with the start of the HVBP program in October 2012. (See fig. 4.) On the three HVBP patient outcomes measures we analyzed—each of which measures patient mortality that may be related to hospital quality—the overall trends were mixed, but remained largely consistent both before and after implementation of the HVBP program. Hospitals showed steady improvement (i.e., a decrease) in the rate of mortality due to heart attack, both before and after HVBP program implementation. On the other hand, rates of mortality due to heart failure and pneumonia stayed roughly constant over the same time period, increasing slightly prior to the implementation of HVBP and then possibly leveling off. (See fig. 5.)
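The kind of before-and-after trend comparison applied throughout this analysis can be sketched as a simple interrupted time-series check: fit a linear trend to the pre-implementation periods and another to the post-implementation periods, then compare the slopes and the level at the October 2012 break point. The quarterly scores below are hypothetical, not actual IQR results, and the method is a minimal illustration rather than the exact procedure used in this report.

```python
# Illustrative sketch of a before/after trend check. The data points are
# made-up quarterly median scores, not actual IQR results.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Quarter 0 marks the program's start (October 2012).
pre_q  = [-8, -7, -6, -5, -4, -3, -2, -1]
pre_y  = [70.0, 71.1, 72.0, 73.2, 74.0, 75.1, 76.0, 77.2]
post_q = [0, 1, 2, 3, 4, 5, 6, 7]
post_y = [78.0, 79.1, 80.0, 81.2, 82.1, 83.0, 84.1, 85.0]

pre_slope, pre_b = fit_line(pre_q, pre_y)
post_slope, post_b = fit_line(post_q, post_y)

# "No apparent shift" means the post-period slope stays close to the
# pre-period slope, and the level at quarter 0 stays close to what the
# pre-period line predicts (its intercept, since the break is at x = 0).
slope_change = post_slope - pre_slope
level_jump = post_b - pre_b
```

In this hypothetical series both slope_change and level_jump are near zero, the pattern the report describes for the HVBP measures; a clear drop or rise at the break, as with the readmissions measures, would show up as a sizable level_jump or slope_change.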
All three mortality measures—heart attack, heart failure, and pneumonia—use information from Medicare claims data to track patient mortality within 30 days of a hospital admission and risk-adjust the results based on patient characteristics. Small rural, small urban, and safety net hospitals sometimes performed better or worse than hospitals overall on one HVBP quality measure or another across the three domains, but these differences in relative performance did not change noticeably with the implementation of the HVBP program. We found a generally consistent pattern in which, for each of these individual measures, any difference in performance between hospitals in the subgroup and hospitals overall during the period before the program either disappeared by the time the program took effect or remained relatively constant in the following time period. On clinical process measures included in the HVBP program, small rural, small urban, and safety net hospitals generally matched hospitals overall with very high performance before HVBP was implemented. On patient experience measures included in the HVBP program, small rural and small urban hospitals performed slightly better than hospitals overall—both before and after its implementation—while safety net hospitals performed slightly worse. On patient outcomes measures included in the HVBP program, small urban hospitals generally matched the performance of hospitals overall, both before and after its implementation, while safety net hospitals (on the measures for heart attack and pneumonia mortality) and small rural hospitals (on all three mortality measures) performed slightly worse. These trends in the HVBP quality measures reflect the relatively short period of time after the program was implemented in October 2012, which leaves open the possibility that more noticeable changes could emerge over a longer period of time. Such shifts in quality trends may develop slowly for two reasons.
First, hospitals may take time to implement their responses to the program, and these responses, once implemented, may take additional time to achieve results. Second, the HVBP program has evolved substantially over time and will continue to do so, and therefore its effects on quality may change as well. For example, the amount of Medicare payments at risk will increase from 1.0 percent in fiscal year 2013 to 2.0 percent in fiscal year 2017 and after. In addition, new quality measures are being added to the program, and the quality measure domains have increased from two to four, with a fifth—safety—due to be added to the HVBP formula in fiscal year 2017. Moreover, the weights attached to those domains, and therefore the relative effect each domain has on a hospital's total quality score, have also shifted substantially. That is particularly true of the clinical process domain, on which hospitals did not have much room for improvement, as most hospitals already received scores at or close to 100 percent before the HVBP program was implemented. As we previously noted, this domain will drop from 70 percent of the total quality score in fiscal year 2013 to 5 percent in fiscal year 2017. With more quality data collected over a longer period of time following the implementation of the HVBP program, it may be possible to detect more subtle and delayed effects of the program. Quality Measures Not Included in the HVBP Program Also Showed No Apparent Shift in Trends During the Same Initial Years, Except for Readmissions Most of the IQR quality measures we examined that were not included in the HVBP program had trends that were similar to those in the program. Specifically, trends for non-HVBP clinical process measures were very similar to trends for HVBP clinical process measures, in that hospitals had improved on these measures and reached a high level prior to the start of the HVBP program. (See fig. 6.)
In addition, the one IQR patient experience measure not incorporated into the HVBP program, a measure indicating whether patients would recommend the hospital, exhibited a trend very similar to that of the HVBP patient experience measures shown earlier in fig. 4. The other non-HVBP measures that we examined were the 30-day hospital readmissions rates for heart attack, heart failure, and pneumonia; on all three measures, hospitals showed a different pattern—a clear initial shift in trend toward improved quality in the period leading up to the implementation of the HVBP program. These three measures track the percentage of patients with each condition that are readmitted to a hospital within 30 days after being discharged. Such readmissions may be an indication that patients’ recoveries from their initial hospitalizations were incomplete or that patients received inadequate care after their discharges. Readmissions for all three conditions remained largely unchanged from year to year through the end of 2009; afterwards, each declined noticeably around 2010 and continued to decline over the next two years. (See fig. 7.) The three non-HVBP readmission measures are targeted by the separate Hospital Readmissions Reduction program. Some analysts who have reviewed this program noted that this initial shift in trend toward higher quality on these measures took place after the law that established the readmissions reduction program, PPACA, was passed in 2010. They noted that hospitals had an opportunity to implement strategies to reduce their readmissions before the program began to impose its penalties in October 2012. While the Hospital Readmissions Reduction program took effect at the same time as the HVBP program, the difference in the observed trend for the measures targeted by the readmissions program, compared to the HVBP program, may in part reflect differences in the design of the two programs. 
These differences include (1) focusing on just readmission rates (in contrast to a complex mix of process, patient experience, outcome, and efficiency measures for the HVBP program), (2) not assessing hospitals on their levels of improvement, but instead focusing only on their level of readmissions (with adjustments for patient demographics), and (3) providing only penalties, rather than bonuses, with penalties that have generally been larger in magnitude than those provided under the HVBP program. As with the HVBP quality measures, these trends reflect the initial years of the Hospital Readmissions Reduction program, and they could change with time. Moreover, there could be other factors beyond the implementation of this program that influenced the decline in heart attack, heart failure, and pneumonia readmissions over that time period. Nonetheless, the conjunction of the drop in hospital readmission rates and the introduction of a financial incentive program targeting those rates provides some additional indication that financial incentives of the sort broadly offered by programs like the HVBP program and the Hospital Readmissions Reduction program may, under certain circumstances, promote enhanced quality of care. However, a clear understanding of the extent of that impact, and the circumstances under which it may be maximized, will depend on the results of future research. Hospital Officials Reported That the HVBP Program Helped Reinforce Ongoing Quality Improvement Efforts but Did Not Lead to Major Changes Officials from selected hospitals reported that the HVBP program reinforced their ongoing quality improvement programs without leading to major changes. In addition, they cited a variety of factors that affected their capacity to make quality improvements, though they said that these factors were not directly influenced by the HVBP program.
HVBP Program Reinforced Ongoing Quality Improvement Programs at Selected Hospitals Officials from eight selected hospitals we contacted reported that the actions that their hospitals took in response to the HVBP program focused on reinforcing ongoing efforts to improve quality. Prior to the HVBP program, each of these hospitals had established a quality improvement program that sought to improve the hospital’s performance on quality measures targeted by Medicare’s IQR program, as well as, in some cases, additional quality measures specified by private insurers, organizations of peer hospitals, or the hospital itself. Officials from the selected hospitals reported a variety of specific responses to the HVBP program. These responses reflected the hospitals’ differing individual circumstances and generally involved incremental adjustments to existing quality improvement programs, rather than major changes. The hospital officials described two ways in particular that the HVBP program reinforced these existing hospital efforts: (1) elevating the profile of the HVBP quality measures and thereby providing hospitals with a way to focus their quality improvement efforts, and (2) motivating hospital officials to increase the resources directed towards quality improvement. Some officials at the selected hospitals noted that one key effect of the HVBP program was to elevate the profile of those IQR measures included in the HVBP formula. These officials characterized the HVBP measures as a set of “national quality goals” which allowed them to benchmark their own performance against that of other hospitals. Hospital officials pointed in particular to the outcome measures in the HVBP program as influencing efforts to expand their hospitals’ ongoing quality improvement efforts beyond the traditional focus on clinical process measures. 
However, these officials noted that this increased emphasis on outcomes measures was part of a larger transformation occurring throughout the health care system. According to the officials, a range of private sector value-based purchasing and other related initiatives were leading them in the same direction, and therefore it was difficult for hospital officials to differentiate actions taken in response to the HVBP program from responses to these other initiatives. Officials at the selected hospitals also credited the HVBP program with helping to motivate them to increase the resources directed at quality improvement. Several of these hospital officials described how quality improvement was a resource-intensive effort, in which one key resource was skilled staff who could collect, analyze, and act on timely, accurate, and relevant data. Hospital officials reported that they had increased the number of such staff in recent years. Some officials suggested that the linkage of hospital quality to payments, such as through the HVBP program and comparable private sector initiatives, had helped to justify that shift in staff resources. However, according to hospital officials, this increase in staff contributed broadly to each hospital's quality improvement efforts, rather than being limited to the particular HVBP quality measures. Officials at the selected hospitals emphasized that their ability to identify and address quality issues depended on their obtaining data about how their hospital was performing on relevant measures at the current time. Because the quality information provided by CMS to both hospitals and the public reflects patient care provided months or years in the past, these hospital officials found that they needed to generate more timely quality information on their own, either internally or through private vendors.
This information allowed them to assess their current quality problems and also determine if the steps that they took to address problems were working. However, several officials at the selected hospitals noted that their ability to generate more current information was limited to certain types of quality measures, primarily those focused on clinical processes and patient experience. By contrast, many of these hospital officials said that they could not replicate the outcome measures that CMS calculated from Medicare claims—as those measures often reflected what happened to patients after they left the hospital and are therefore based in part on data not readily available to hospitals. These hospital officials reported that improving their performance on patient outcomes was more challenging without accurate and current data. Just as hospitals had quality improvement programs in place prior to the HVBP program, their efforts to improve efficiency were also already growing when the HVBP program took effect. According to some officials at the selected hospitals, the addition of the Medicare Spending per Beneficiary measure to the HVBP program formula, with the introduction of the efficiency domain in fiscal year 2015, did little to affect those efforts. In part that was because, as with the HVBP outcome measures, hospital officials reported that they could not independently calculate their Medicare Spending per Beneficiary scores, nor did they clearly understand what they would need to do to improve these scores. Instead, these hospital officials reported that they have proceeded with a range of more general efforts to improve efficiency by reducing their costs without impairing quality. These include initiatives to lower supply costs by standardizing the selection of medical devices, such as artificial joints, as well as systemic assessments of work processes designed to streamline their delivery of care.
Officials at the selected hospitals reported varying levels of intensity in the pursuit of these efficiency goals, depending on the particular circumstances of their hospital. However, according to these officials, the impetus behind these efficiency efforts came from an increased focus by both public and private payers on controlling the growth of hospital costs. Numerous officials at the selected hospitals stated that their efforts to improve efficiency were aimed at securing the economic survival of their hospital in an increasingly challenging health care marketplace, rather than responding to a specific incentive from the HVBP program. A Variety of Factors Affected Selected Hospitals' Capacity to Make Quality Improvements, Which Were Not Directly Affected by the HVBP Program The issue that officials from most of the selected hospitals we contacted frequently identified as a barrier to quality improvement efforts was the hospital's information technology (IT) system, especially its electronic health record. Some of these officials described how implementing a new IT system slowed down their work as staff grappled with learning the system, how limitations to the system prevented the production of desired performance-related data, and how the IT system diverted significant hospital resources into implementing and maintaining the system—resources that could otherwise have been applied elsewhere, such as to quality improvement efforts. While some hospital officials we spoke to described the difficulties associated with implementing and effectively utilizing their IT systems, some highlighted the benefits of those systems as a tool for enhancing quality. These officials stated that physicians and other staff had come to rely upon their IT systems over time and that these systems helped their clinicians to better manage and coordinate care.
Others said that their IT systems helped them to better manage their quality performance efforts, such as through built-in clinical process reminders in their electronic health record systems or by facilitating the collection of the patient clinical data needed for quality measures. Some other factors that officials at the selected hospitals identified as having a negative effect on their ability to make quality improvements included a lack of financial resources, the absence of timely and easily interpretable quality performance data, and personnel issues. These hospital officials told us that reduced reimbursement rates and the financial demands of a variety of other priorities limited the resources available for desired quality improvement efforts. Some of these officials also discussed challenges associated with interpreting performance data received from CMS, in part due to the delay between when the actions or outcomes measured actually occur and when the resulting scores are reported back to the hospital. Personnel issues—including limited physician engagement or a shortage of staff with needed quality improvement-related skills—were also described by some officials as having a negative effect on quality improvement efforts. Some officials at both small rural and safety net hospitals we contacted cited particular patient population and community factors as barriers to their quality improvement efforts. For example, some safety net hospital officials spoke about difficulties that arise from serving a disproportionate share of patients with characteristics—such as low incomes, mental health issues, language barriers, or little access to transportation—which officials said make it harder to coordinate care and achieve better outcomes. 
In addition, some officials at safety net hospitals stated that a lack of available external resources in their community—such as mental health services, social services, and other health care services external to the hospital—or a lack of coordination between those resources make it harder to coordinate care and achieve better outcomes. Some small rural hospital officials also described similar barriers to improving quality of care, highlighting in particular the limited availability of mental health and social services in their community. Collaboration was a factor that numerous officials at the selected hospitals mentioned as having a beneficial effect on quality improvement efforts, and these officials discussed a range of different forums they had found for collaborative learning. Some cited the usefulness of their area’s Hospital Engagement Network in providing a forum for sharing best practices. Others discussed the benefits of learning from regional or state-based networks that they accessed through their state hospital association or another convening body. Officials from hospitals that are part of a hospital system spoke about collaboration within their system. While officials at the selected hospitals outlined for us the many factors they believed affected their quality improvement efforts, they did not indicate that these factors were specific to the HVBP program. Instead, these hospital officials said they were working to improve quality for a number of reasons, including responding to the HVBP program, and that these factors applied to their ongoing quality improvement efforts as a whole. Consequently, these officials characterized these factors as inhibiting or facilitating each hospital’s quality improvement efforts broadly rather than being factors that specifically affected or were affected by the implementation of the HVBP program. 
Agency Comments We provided a draft of this report to the Department of Health and Human Services for review, which includes CMS. The department provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Health and Human Services, the Administrator of the Centers for Medicare & Medicaid Services, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or at kohnl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. Appendix I: Inpatient Quality Reporting Measures Included in GAO’s Analysis The following table lists the Inpatient Quality Reporting (IQR) program measures included in our analysis of quality trends before and after the introduction of the Hospital Value-based Purchasing (HVBP) program. The table identifies which quality domain each measure belongs to; specifies whether the measure was used to calculate HVBP scores anytime during fiscal years 2013, 2014, or 2015; provides the IQR code and description that designate the measure under the IQR program; and indicates the number of data points available for our analysis, in which we assessed possible shifts in trends from the period before the HVBP program came into effect through the period after its implementation. Most of these measures have data points reported quarterly to the IQR program, with the exception of the patient outcome measures (mortality and readmissions), which are reported annually. 
Appendix II: Quality Measures Included in the Hospital Value-based Purchasing Program, Fiscal Years 2013 through 2017
Clinical Process Measures
Heart attack patients received fibrinolytic agent within 30 minutes of hospital arrival
Heart attack patients received percutaneous coronary intervention within 90 minutes of hospital arrival
Heart failure patients received discharge instructions
Blood culture performed in the emergency department prior to first antibiotic received in hospital for pneumonia patients
Appropriate initial antibiotic selection for community-acquired pneumonia patients
Prophylactic antibiotic received within 1 hour prior to surgical incision
Received prophylactic antibiotic consistent with recommendations for surgical patients
Prophylactic antibiotics discontinued within 24 hours after surgery end time (48 hours for cardiac surgery)
Patient Experience Measures
Efficiency Measure: Medicare Spending per Beneficiary
Appendix III: Median Hospital Value-based Purchasing Payment Adjustments by Bed Size, Fiscal Years 2013 through 2015
Appendix IV: Median Hospital Value-based Purchasing Payment Adjustments by Net Income, Fiscal Years 2013 through 2015
Appendix V: Median Hospital Value-based Purchasing Domain Scores by Hospital Type, Fiscal Years 2013 through 2015
Appendix VI: Bonuses and Penalties under Hospital Value-based Purchasing by Hospital Type, Fiscal Years 2013 through 2015
Appendix VII: GAO Contact and Staff Acknowledgments
GAO Contact: Staff Acknowledgments: In addition to the contact named above, Will Simerl, Assistant Director; Zhi Boon; Krister Friday; Colbie Holderness; Eric Peterson; David Plocher; Vikki Porter; and Steve Robblee made key contributions to this report.
Related GAO Products
Health Care Transparency: Actions Needed to Improve Cost and Quality Information for Consumers. GAO-15-11. Washington, D.C.: October 20, 2014.
Electronic Health Record Programs: Participation Has Increased, but Action Needed to Achieve Goals, Including Improved Quality of Care. GAO-14-207. Washington, D.C.: March 6, 2014. Health Care Quality Measurement: HHS Should Address Contractor Performance and Plan for Needed Measures. GAO-12-136. Washington, D.C.: January 13, 2012. Hospital Quality Data: HHS Should Specify Steps and Time Frame for Using Information Technology to Collect and Submit Data. GAO-07-320. Washington, D.C.: April 25, 2007. Hospital Quality Data: CMS Needs More Rigorous Methods to Ensure Reliability of Publicly Released Data. GAO-06-54. Washington, D.C.: January 31, 2006.
The HVBP program, which the Centers for Medicare & Medicaid Services (CMS) administers, annually evaluates individual hospital performance on a designated set of quality measures related to inpatient hospital services and, based on those results, adjusts Medicare payments to hospitals in the form of bonuses and penalties. The HVBP program was enacted in 2010 as part of the Patient Protection and Affordable Care Act (PPACA). The first HVBP payment adjustments occurred in fiscal year 2013.

PPACA included a provision for GAO to assess the HVBP program's impact on Medicare quality and expenditures, including the HVBP program's effects on small rural, small urban, and safety net hospitals. This report evaluates the initial effects of the HVBP program on: (1) Medicare payments to hospitals, (2) quality of care provided by hospitals, and (3) selected hospitals' quality improvement efforts.

To determine these initial effects of the HVBP program, GAO analyzed CMS data on bonuses and penalties given to hospitals in fiscal years 2013 through 2015 as well as data on hospital quality measures collected by CMS from 2005 through 2014, the most recent year available. GAO also interviewed officials with eight hospitals that participated in the HVBP program. Hospitals were selected to include safety net, small urban, and small rural hospitals, as well as those that were not part of any of these subgroups. The Department of Health and Human Services, which includes CMS, reviewed a draft of this report and provided technical comments, which GAO incorporated as appropriate.

The bonuses and penalties received by most of the approximately 3,000 hospitals eligible for the Hospital Value-based Purchasing (HVBP) program amounted to less than 0.5 percent of applicable Medicare payments each year.
GAO found that safety net hospitals, which provide a significant amount of care to the poor, consistently had lower median payment adjustments—that is, smaller bonuses or larger penalties—than hospitals overall in the program's first three years. However, this gap narrowed over time. In contrast, small urban hospitals had higher median payment adjustments each year than hospitals overall, and small rural hospitals' median payment adjustments were similar to hospitals overall in the first two years and higher in the most recent year. GAO's analysis found no apparent shift in existing trends in hospitals' performance on the quality measures included in the HVBP program during the program's initial years. However, shifts in quality trends could emerge in the future as the HVBP program continues to evolve. For example, new quality measures will be added, and the weight placed on clinical process measures—on which hospitals had little room for improvement—will be substantially reduced. For many quality measures not included in the HVBP program, GAO also found that trends in hospitals' performance remained unchanged in the period GAO reviewed, but there were exceptions in the case of three measures that are part of a separate incentive program targeting hospital readmissions. This program focuses exclusively on readmissions and imposes only penalties. The timing of changes in readmission trends provides some indication that the use of financial incentives in quality improvement programs may, under certain circumstances, promote enhanced quality of care. However, understanding the extent of that impact depends on the results of future research. Officials from selected hospitals GAO interviewed reported that the HVBP program generally reinforced ongoing quality improvement efforts, but did not lead to major changes in focus. In addition, hospital officials cited a variety of factors that affected their capacity to improve quality. 
For example, officials from most hospitals GAO contacted reported challenges related to using information technology (IT) systems—including electronic health records—to make quality improvements. In contrast, other hospital officials said their IT systems aided their quality performance efforts, such as by helping to collect clinical data needed to track progress on quality measures. Hospital officials described such factors as affecting their hospital's quality improvement efforts as a whole, rather than being specifically linked to implementation of the HVBP program.
Background

In February 2011, Boeing won the competition to develop the Air Force’s next generation aerial refueling tanker aircraft, the KC-46. Boeing was awarded a fixed price incentive (firm target) contract for development because KC-46 development is considered to be a relatively low-risk effort to integrate mostly mature military technologies onto an aircraft designed for commercial use. The contract is for the design, manufacture, and delivery of four test aircraft and includes options to manufacture the remaining 175 aircraft. The contract requires Boeing to deliver 18 operational aircraft by August 2017 and specifies that Boeing must correct any required deficiencies and bring development and production aircraft to the final configuration at no additional cost to the government. The contract limits the government’s financial liability and provides the contractor incentives to reduce costs in order to earn more profit. Barring any changes to KC-46 requirements by the Air Force, the contract specifies a target price of $4.4 billion and a ceiling price of $4.9 billion, at which point Boeing must assume responsibility for all additional costs. As of December 2014, Boeing and the program office estimated costs would be over the ceiling price by about $380 million and $1.4 billion, respectively. The program office estimate is higher because it includes additional costs associated with performance as well as cost and schedule risk. In all, the Air Force expects 13 production lots of aircraft to be delivered. The contract includes firm fixed price contract options for the first production lot in 2015 and the second production lot in 2016, and options with not-to-exceed firm fixed prices for production lots 3 through 13. According to program officials, Boeing plans to use a combination of development and production aircraft to meet the contractual requirement to deliver 18 aircraft by August 2017.
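The ceiling-price overruns cited above imply rough estimates at completion; a quick arithmetic sketch using the report's rounded figures (not official program numbers):

```python
# Figures from the report, in billions of dollars (rounded as reported).
ceiling_price = 4.9  # Boeing assumes responsibility for all costs above this

boeing_overrun = 0.38          # Boeing's estimate: ~$380 million over ceiling
program_office_overrun = 1.4   # program office estimate, includes risk

# Implied development estimates at completion (a sketch, not official figures).
print(round(ceiling_price + boeing_overrun, 2))          # ~5.28 billion
print(round(ceiling_price + program_office_overrun, 1))  # ~6.3 billion
```

Because the contract is fixed price incentive with a ceiling, every dollar above $4.9 billion in either estimate is borne by Boeing rather than the government.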
Boeing plans to modify the 767 aircraft in two phases to produce a militarized aerial refueling tanker. In the first phase, Boeing is modifying the 767 by adding a cargo door and an advanced flight deck display borrowed from its new 787, and calling this modified version the 767-2C. The 767-2C will be built on Boeing’s existing production line. In the second phase, the 767-2C will proceed to the finishing center to become a KC-46. It will be militarized by adding aerial refueling equipment, an aerial refueling operator’s station that includes panoramic three-dimensional displays, and threat detection and avoidance systems. The aerial refueling equipment will allow for two types of refueling to be employed in the same mission—a refueling boom that is integrated with a computer assisted control system, and a permanent hose and drogue refueling system. The boom is a rigid, telescoping tube that an operator on the tanker aircraft extends and inserts into a receptacle on the Air Force fixed-wing aircraft being refueled. Air Force helicopters and all Navy and Marine Corps aircraft refuel using the “hose and drogue” system, which involves a long, flexible refueling hose stabilized by a drogue (a small windsock) at the end of the hose. See Figure 1 for a depiction of the conversion of the 767 aircraft into the KC-46 tanker with the boom deployed. The Federal Aviation Administration has previously certified Boeing’s 767 commercial passenger airplane and will certify the design for both the 767-2C and the KC-46. The Air Force is responsible for certifying the KC-46 and its military systems. The Air Force will also verify that the KC-46 systems meet contractual requirements and that the KC-46 and various receiver aircraft are certified for refueling operations.
Program Cost Estimates Have Decreased and the Program Is on Track to Meet Performance Goals

KC-46 total program acquisition cost estimates (development, procurement, and military construction costs) have declined from $51.7 billion to $48.9 billion—$2.8 billion, or about 5.4 percent—since the program started in February 2011. Most of the estimated decline in costs is due to fewer than expected engineering changes, savings from a competitively awarded aircrew training system contract, and changes in military construction assumptions. The program office projects that the KC-46 will meet all key performance goals, including providing fuel to other aircraft, by the end of development.

Cost Estimates Have Declined

The total cost to develop, procure, and field the KC-46 has declined by about $2.8 billion from the February 2011 baseline, a 5.4 percent decrease. The decrease comprises a reduction of approximately $579 million in development funding, $1.2 billion in procurement funding, and $980 million in military construction funding. Average program acquisition unit costs have declined by the same percentage because quantities have remained the same. Table 1 summarizes the initial and current estimated quantities and costs for the KC-46 program. Both the development and procurement cost estimates have declined, in part, because there have not been any major engineering changes since a successful critical design review or significant technical risks that program officials believe need to be addressed moving forward. In addition, the government competitively awarded a contract for an aircrew training system at a lower price than originally projected. The current development cost estimate of approximately $6.6 billion reflects a decrease of about $579 million from the original estimate.
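The component reductions cited above can be checked against the reported total decline; a minimal arithmetic sketch with the report's rounded figures:

```python
# Reported reductions in the KC-46 acquisition cost estimate
# (report figures, in billions of dollars).
development = 0.579            # ~$579 million
procurement = 1.2
military_construction = 0.980  # ~$980 million

baseline, current = 51.7, 48.9

# The component reductions should roughly sum to the total decline.
print(round(development + procurement + military_construction, 1))  # 2.8
print(round(baseline - current, 1))                                 # 2.8
print(round(100 * (baseline - current) / baseline, 1))              # 5.4 percent
```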
The estimate includes $4.9 billion for the aircraft development contract and four test aircraft, $300 million for aircrew and maintenance training systems, and $1.4 billion for other government costs such as program office support, test and evaluation support, and other developmental risks related to the aircraft and training systems. The procurement cost estimate of $39 billion is about $1.2 billion less than the original estimate and will be used to procure 175 production aircraft, initial spares, and other support equipment. The military construction estimate of $3.3 billion includes the projected costs to build or modify aircraft hangars, maintenance and supply shops, and other facilities to house and support the KC-46 fleet. The estimate is $980 million less because of changes in military construction plans. For example, the program will now modify more existing facilities and construct fewer new ones. In addition, the Air Force has decided to pay for some military construction activities out of a central account rather than using program funds.

Program on Track to Meet Key Performance Goals

The program office projects that the KC-46 aircraft will meet all of its key performance goals, including receiving fuel from other tankers; providing fuel to about 36 receiver aircraft, according to an Air Force official’s projection; and having a certain amount of fuel to offload at various distances. According to program officials, the current assessment is based on their engineering expertise and the level of effort necessary to meet the requirements. Ultimately, these performance goals will be validated primarily through ground and flight testing. Table 2 includes a description of the program’s key performance goals. In June 2013, the Air Force Operational Test and Evaluation Center published its own independent evaluation of whether the KC-46 aircraft was on track to meet the key performance goals.
The center identified a few issues, such as drogue hose instability and 3-D display anomalies that could affect Boeing’s ability to meet the tanker aerial refueling capability key performance parameter. In general though, the center agreed that the program was on track to meet all of the goals based on its interviews with subject matter experts and examination of design documents and laboratory test results. Program officials told us that the center’s next assessment is scheduled to be issued in the fall of 2015 and will be based on flight, ground, and laboratory test data. Boeing has also developed a set of technical performance measures to gauge its progress towards meeting contract specifications and three of the key performance goals, including those related to fuel offload, reliability and maintainability, and operational availability. For example, one measure related to the amount of fuel that the aircraft can carry certain distances (fuel offload versus radius) tracks operational empty weight because, in general, every pound of excess weight equates to a corresponding reduction in the amount of fuel the aircraft can carry to accomplish its primary mission. Table 3 describes the seven technical performance measures, depicts the minimum contractual specification or target for each, and identifies the current status as of December 2014. The program office is projecting that it will meet each of the technical performance measures. For example, program officials currently project that the aircraft will meet the weight target of 204,000 pounds. The program office assesses the measures on a monthly basis, relying on data from testing, models and simulations, prior tanker programs, and actual data (such as aircraft weight). 
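The weight-to-fuel tradeoff behind the fuel offload versus radius measure can be illustrated with a toy calculation. The 204,000-pound weight target is the report's figure; the sample weights below are made-up inputs, not KC-46 data:

```python
# Illustrative only: the report states that, in general, every pound of
# excess operational empty weight costs a corresponding pound of fuel the
# aircraft can carry on its primary mission. The target is the report's
# figure; the sample weights are hypothetical.
WEIGHT_TARGET_LBS = 204_000

def fuel_offload_delta_lbs(actual_empty_weight_lbs):
    """Approximate pounds of fuel lost (positive) or gained (negative)
    relative to the operational empty weight target."""
    return actual_empty_weight_lbs - WEIGHT_TARGET_LBS

print(fuel_offload_delta_lbs(205_500))  # 1500: overweight -> ~1,500 lbs less fuel
print(fuel_offload_delta_lbs(203_000))  # -1000: under target -> ~1,000 lbs more fuel
```

This one-for-one rule of thumb is why the program tracks operational empty weight monthly as a proxy for the fuel offload goal.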
Wiring Issues Have Impacted the Program Schedule and Other Challenges Pose Risk to the Flight Test Pace

Due to wiring problems, Boeing has had scheduling delays in delivering the first development aircraft, even though it met all of its program milestones leading up to the critical design review in July 2013. In addition to using up almost all of the 5-month schedule margin, these problems have led Boeing and the program office to delay the first flight of the first development aircraft by almost 7 months to December 2014, and the KC-46 low-rate production decision by 2 months to October 2015. As a result of the wiring problems, Boeing completed only 3.5 hours of flight testing in 2014, compared to the nearly 400 flight test hours it planned to conduct. Figure 2 illustrates the delays to key milestones since the program began in February 2011. With the program office’s approval, Boeing revised its nearly 2,400 development flight test hour plan to account for wiring-related delays and to focus on demonstrating key KC-46 aerial refueling capabilities that are required for the production decision. Under the revised schedule, Boeing will now complete roughly 22 percent of development flight testing prior to the low-rate production decision compared to its original plan of 66 percent, providing DOD with less flight test knowledge at this program milestone. In addition, only 3 test months will be on a KC-46 prior to the decision compared to the original plan of 13 months. Other development challenges, including late delivery of parts, software defects, and assumptions related to flight test cycle times, pose additional risk to the flight test pace needed to demonstrate aerial refueling capabilities. These challenges could result in additional schedule delays.

Program Has Experienced Schedule Delays Due to Wiring Problems

Boeing discovered wire separation issues while it was manufacturing the first development aircraft, which were caused by an inaccurate wiring design.
After discovering this in the spring of 2014, Boeing spent six weeks conducting an audit to identify the scope of the problem and develop potential fixes. Wiring on the first development aircraft was over 90 percent complete when the audit started. Boeing officials told us that the audit found thousands of wire segments that needed to be changed. Boeing officials estimate that these changes impacted about 45 percent of the 1,700 wire bundles on the aircraft. Boeing suspended wiring installation on the remaining three development aircraft for several months while it worked through the wiring issues on the first development aircraft. Boeing officials told us that they have resolved the wire separation issues and have resumed manufacturing the second development aircraft. Due to its fixed price contract, Boeing bore the cost to fix the wiring issues, which Boeing officials estimated at approximately $40 million. As a result of these problems, Boeing did not execute the first year of the flight test program as planned, flying only 3.5 hours in calendar year 2014, about 1 percent of its plan of flying nearly 400 hours. Boeing had planned to use four aircraft for flight test activities in 2014; however, it was only able to complete one flight at the end of December 2014 on one aircraft—a 767-2C. That aircraft is not scheduled to make another flight until April 2015 because Boeing has to perform additional work, such as completing ground testing and installing modified body fuel tanks. Boeing projects that the second development aircraft—a KC-46 tanker—will begin testing in June 2015.

Moving Forward, Pace of Revised Development Flight Test Schedule May Be Too Optimistic Given Remaining Challenges

Boeing revised the development test schedule to acknowledge the wiring-related delays. In doing so, it significantly reduced the amount of testing on each aircraft prior to the October 2015 low-rate production decision compared to what was originally planned.
Figure 3 depicts the decrease in the amount of flight testing prior to the low-rate production decision. Under the baseline schedule, Boeing would have completed 36 months of flight testing across the four development aircraft prior to the low-rate production decision. This would have enabled Boeing to complete about 66 percent of the nearly 2,400 development flight test hours, including aerial refueling demonstration flights that are entrance criteria for that decision, as well as some of the activities necessary to support the start of initial operational test and evaluation. Under the revised test schedule, Boeing plans to complete a little more than 8 months of flight testing prior to the low-rate production decision, or about 22 percent of development flight test hours. According to program officials, the vast majority of the KC-46 testing over the next several months will now be spent on demonstrating aerial refueling capabilities—a key data point necessary to hold the low-rate production decision. They also stated that the second development aircraft—a KC-46 tanker—will be used to support the demonstrations, and any flight test delays of that aircraft will create day-for-day delays in the program. Flight test delays would also increase schedule risk for later milestones, most notably the start of operational testing. The revised plan increases concurrency between development test and production, but not significantly. The intent of developmental testing is to demonstrate the maturity of a design and to discover and fix design and performance problems before a system enters production. Our past work has shown that beginning production before demonstrating that a design is mature and that a system will work as intended increases the risk of discovering deficiencies during production that could require substantial design changes and costly modifications to systems already built.
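The reduction in pre-decision flight testing can be quantified from the percentages above; a rough sketch (the report gives "nearly 2,400" total hours, so these are approximate figures, not exact program numbers):

```python
# Approximate flight test hours implied by the report's percentages.
total_hours = 2400  # "nearly 2,400" development flight test hours

baseline_hours = total_hours * 0.66  # ~66% planned before the production decision
revised_hours = total_hours * 0.22   # ~22% under the revised schedule

print(round(baseline_hours))                  # ~1584 hours originally planned
print(round(revised_hours))                   # ~528 hours now planned
print(round(baseline_hours - revised_hours))  # ~1056 fewer hours of test knowledge
```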
The program is also working to resolve other development challenges that pose additional schedule risk to the flight test pace needed to demonstrate aerial refueling capabilities, such as late delivery of parts, software defects, and assumptions related to flight test cycle times. These challenges could result in additional schedule delays. The following is a summary of these development challenges and any steps Boeing is taking to address them.

Late delivery of parts for aircraft final assembly: Boeing’s suppliers are having difficulties delivering several key aerial refueling parts. For example, the telescope actuator, which extends and retracts the boom, needs to be redesigned in order to work properly. A redesigned telescope actuator is tentatively scheduled to be delivered in April 2015, enabling the boom that will be used to support the July/August 2015 demonstration flights to be delivered two weeks prior to its June 2015 need date. In another example, the supplier of the wing aerial refueling pod and centerline drogue system is experiencing delays in delivering these subsystems due to design and manufacturing issues with a number of parts. To stay within schedule targets, Boeing and the supplier have developed a plan to complete parts qualification testing and safety of flight testing in parallel. Program officials have said that one of the risks of this parallel approach is that discoveries during safety of flight testing could drive design changes that would then require qualification testing to be re-done. Boeing has sent engineers and other staff to help the aerial refueling suppliers overcome these challenges, and held regular management meetings to stay abreast of the latest developments.

Defects in delivered software: Boeing and the program office consider the resolution of software problems as one of the program’s top risks.
According to program documentation, open problem reports may have peaked in December 2014, at roughly 780 priority problem reports. Boeing fixed 170 of these problems over the past few months. As of March 2015, however, a little over 600 problem reports were still not resolved, including several hundred that must be addressed prior to the KC-46 first flight, currently planned for June 2015. Many of these problems are related to the military subsystems and either adversely affect the accomplishment of an essential operational or test capability or increase the project’s technical, cost, or schedule risk—and no workaround solution is known. Additional problems may be identified as Boeing integrates the last two software modules related to aerial refueling. Boeing expects to fully integrate these software modules in April 2015, about 10 months later than originally planned.

Flight test cycle time assumptions: The program may not be able to meet the established timeframes, or cycle times, for flight testing. Both Boeing and the program office regard maintaining the planned flight test rate of 65 hours per month for the 767-2C aircraft and 50 hours per month for the KC-46 aircraft’s military tests as one of the program’s greatest risks. DOD test organizations have shown that the planned military flight test rate is more aggressive than other programs have demonstrated historically. The Director of Operational Test and Evaluation also reported that the test schedule does not include sufficient time to address deficiencies discovered during tests. Despite these concerns, Boeing predicts that it can achieve the flight test rates as it has local maintenance and engineering support and control over the flight test priorities as testing is being conducted at Boeing facilities. Deviations from its proposed flight test cycle times could pose risk to the program’s ability to capture the knowledge necessary to hold the low-rate production decision in October 2015.
Boeing provided an updated schedule to the program office in January 2015 that may address some of the risks we highlighted. As part of the updated test plan, the program office and Boeing also revised their approach to conducting operational test and receiver aircraft certification. The new approach re-phases some receiver aircraft certification and shifts test execution responsibility for 10 receiver aircraft from Boeing to the government. This approach may add risk to the program should the Air Force fail to complete the testing on time. The new schedule and associated contract modifications are expected to be approved by early 2015. Program officials stated that they are reviewing the information to determine whether they need to further adjust milestone dates, including the low-rate production decision and the start of operational test. That analysis has not yet been completed.

Program is Continuing to Gather Manufacturing Knowledge to Support the Production Decision

The program office and Boeing continue to collect most of the necessary manufacturing knowledge to make informed decisions as the program approaches its low-rate production decision in October 2015. However, the program is behind in some of these activities, which will make it difficult for program officials to assess the reliability of the aircraft prior to production, and could also mean less efficient production processes. Last year we began reporting on the program’s efforts to capture manufacturing knowledge that is important to make a well-informed low-rate production decision.
This knowledge is based on practices we identified in previous work that are used by leading commercial companies, including (1) identifying key system characteristics and critical manufacturing processes; (2) establishing a reliability growth plan and goals; (3) conducting failure modes and effects analysis; (4) conducting reliability growth testing; (5) determining whether processes are in control and capable; and (6) testing a production-representative prototype in its intended environment. Table 4 provides a description of these activities and progress the KC-46 program has made for each.

Identify key system characteristics and critical manufacturing processes: Since the 767-2C will be manufactured on Boeing’s existing 767 production line, the program office and Boeing have focused their attention on identifying the key system characteristics and critical manufacturing processes for the military unique subsystems. They have identified these processes and completed two prior rounds of assessments in support of the preliminary and critical design reviews. Currently, they are assessing nine critical manufacturing processes for low-rate production, such as the assembly and installation of aerial refueling components. Six of the assessments have been completed, including the assembly and test of the wing aerial refueling pod and centerline drogue system that had been delayed over four months. As mentioned previously, the supplier has had difficulty delivering these subsystems on time due to design and manufacturing issues with a number of parts. The program did not identify any action items for two of the completed assessments. For the other four, Boeing must undertake several actions, including developing a plan on how it would qualify and deliver key parts on time, such as the telescope actuator and the refueling receptacle panel. The remaining three assessments are expected to be completed in early summer 2015.
The results were not available for us to analyze during this review.

Set reliability growth plan and goals: The program office has established a reliability growth curve and goal. To assess reliability growth, the program is tracking the mean time between unscheduled maintenance events due to equipment failure, which is defined as the total flight hours divided by the total number of incidents requiring unscheduled maintenance. These failures are caused by a manufacturing or design defect and require the use of Air Force resources, such as spare parts or manpower, in order to fix them. The program has set a reliability goal of 2.83 flight hours between unscheduled maintenance events and expects to be at 2 hours at the start of initial operational test and evaluation. Since the first flight dates have slipped on all four development aircraft, the program will likely have less reliability knowledge than planned prior to its low-rate production decision.

Conduct failure modes and effects analysis: Boeing has completed the initial failure modes and effects analysis that covers 41 subsystems and plans to update it as flight testing is conducted. Boeing and the program office rely on this analysis to determine which subsystems on the aircraft are likely to fail, when and why they fail, and whether those subsystems’ failures might threaten the aircraft’s safety.

Conduct reliability growth testing: Boeing has taken steps to improve the aircraft’s reliability, but is behind in its reliability growth efforts because of delays to the start of development flight testing. Boeing is currently testing prototypes of critical parts, uncovering failures, and incorporating design changes or manufacturing process improvements. For example, program officials told us that Boeing has initiated testing in its labs and facilities, and that its subcontractors’ equipment must pass specific test procedures to be considered acceptable.
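The reliability metric the program is tracking is a simple ratio; a minimal sketch of the calculation, where the flight-hour and event counts are made-up illustrative inputs rather than program data:

```python
def mean_time_between_unscheduled_maintenance(flight_hours, maintenance_events):
    """The report's metric: total flight hours divided by the total number
    of incidents requiring unscheduled maintenance due to equipment failure."""
    return flight_hours / maintenance_events

# Illustrative inputs only (not program data): 566 flight hours with 200
# unscheduled maintenance events would exactly meet the 2.83-hour goal.
mtbume = mean_time_between_unscheduled_maintenance(566, 200)
print(round(mtbume, 2))  # 2.83

# The program expects about 2 hours between events at the start of initial
# operational test and evaluation, growing toward the 2.83-hour goal.
print(mtbume >= 2.0)  # True
```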
Boeing conducted the 767-2C first flight at the end of December 2014, which will allow the program to begin tracking the aircraft’s reliability against the growth curve (i.e., in terms of mean time between unscheduled maintenance). However, this was almost seven months later than planned. Detecting reliability problems early lessens the impact on the development and production programs. In addition, if problems are not detected until after an aircraft has been fielded, it could result in a reduction in mission readiness and higher than expected operations and sustainment costs because the aircraft have to be fixed.

Determine processes in control and capable: The program has started activities to determine whether manufacturing processes are in control and capable of producing parts consistently with few defects. The program’s review of critical manufacturing processes for the military unique subsystems involves evaluating several risk areas, including whether processes are in control and capable. Program officials said that they plan on verifying that military subsystem suppliers have procedures in place to collect process control data prior to the low-rate production decision. They told us they do not plan on analyzing that data, but will rely on Boeing for process control management.

Test a production-representative prototype in its intended environment: The program has not begun testing a production-representative KC-46 tanker in its intended environment. As discussed previously, the program planned to have 13 months of flight testing between two fully configured KC-46 tankers by the low-rate production decision. Due to program delays, it is likely to complete only three months of flight testing on one fully configured KC-46 tanker prior to this decision.
While this increases the risk of discovering costly problems late in development, when the more complex software and advanced capabilities are tested, contract provisions specify that Boeing must correct any required deficiencies and bring development and production aircraft to the final configuration at no additional cost to the government.

Conclusions

The next several months are critical to Boeing’s ability to demonstrate that the KC-46 can successfully perform its aerial refueling mission and that the company is ready to start producing the aircraft. Based on our analysis, Boeing is at risk of not meeting the entrance criteria needed to support the projected October 2015 low-rate production decision, and will have less knowledge about the reliability of the aircraft than originally planned. The small schedule margin that was built into the program has eroded, largely due to problems Boeing experienced while wiring the aircraft. Only one flight test has been conducted to date on a 767-2C aircraft. No flight testing has been conducted on a KC-46 tanker. The flight test pace Boeing now hopes to achieve to support the low-rate production decision is in jeopardy because of late deliveries of key aerial refueling parts, a large number of software defects that need to be corrected, and optimistic flight test cycle assumptions. Although Boeing should bear the costs of any design or manufacturing problems that may occur, the department must be careful not to hold the low-rate production decision and award a production contract before it has adequate knowledge that the KC-46 can perform its aerial refueling mission.

Recommendation for Executive Action

Given that the KC-46 program has encountered significant delays to the start of development test and the schedule moving forward remains risky, we recommend that the Secretary of Defense direct the Air Force to ensure that key aerial refueling capabilities are demonstrated prior to the low-rate production decision.
Agency Comments The Air Force provided us with written comments on a draft of this report, which are reprinted in appendix II. The Air Force concurred with our recommendation. The KC-46 program office plans to collect all required data prior to the low-rate production decision, including the demonstration of key aerial refueling capabilities. We also incorporated technical comments from the Air Force as appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretary of the Air Force; and the Director of the Office of Management and Budget. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or sullivanm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Appendix I: Status of Prior GAO Recommendations GAO report Recommendation GAO-12-366 DOD should (1) closely monitor the cost, schedule, and performance outcomes of the KC-46 program to identify positive or negative lessons learned and (2) develop metrics to track achievement of key performance parameters. Actions taken by DOD Program is in the process of compiling a list of lessons learned and has developed metrics to track the achievement of key performance parameters. GAO-13-258 DOD should (1) analyze the root causes for the rapid allocation of management reserves and (2) improve the KC-46 master schedule so that it complies with best practices. Program is monitoring the use of management reserves on a monthly basis and is in the process of updating the master schedule so that it complies with best practices, such as including all government activities.
GAO-14-190 DOD should determine the likelihood and potential effect of delays on total development costs and develop mitigation plans as needed. The program completed its cost and schedule risk assessment in February 2015. Appendix II: Comments from the Department of the Air Force Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Michael J. Sullivan, (202) 512-4841 or sullivanm@gao.gov. Staff Acknowledgments In addition to the contact named above, Cheryl Andrew, Assistant Director; Rodney Bacigalupo; Jeff Hartnett; Katheryn Hubbell; John Krump; Robert Swierczek; and Ozzy Trevino made key contributions to this report.
Aerial refueling—when aircraft refuel while airborne—allows the U.S. military to fly farther, stay airborne longer, and transport more weapons, equipment, and supplies. Yet the mainstay of the U.S. tanker forces—the KC-135 Stratotanker—is over 50 years old. It is increasingly costly to support and its age-related problems could potentially ground the fleet. As a result, the Air Force initiated the $49 billion KC-46 program to replace the aerial refueling fleet. The program plans to produce 18 tankers by 2017 and 179 aircraft in total. The National Defense Authorization Act for Fiscal Year 2012 included provisions for GAO to annually review the KC-46 program through 2017. This report addresses progress made in 2014 towards (1) achieving cost and performance goals, (2) meeting schedule targets, and (3) gathering manufacturing knowledge prior to the low-rate production decision. GAO analyzed key program documents and discussed development and production plans and results with officials from the KC-46 program office, other defense offices, and the prime contractor. The KC-46 acquisition cost estimate has declined by about 5.4 percent from $51.7 billion to $48.9 billion since February 2011 and the program is on track to meet performance goals. Most of the estimated cost decline is due to fewer than expected engineering changes and changes in military construction plans. The Air Force delayed the production decision two months, to October 2015, due to wiring problems that Boeing experienced that delayed aircraft delivery and testing. For example, Boeing completed 3.5 hours of flight testing during a single flight of the 767-2C (a precursor to the KC-46 tanker) in 2014, compared to nearly 400 flight test hours it planned to conduct. With program office approval, Boeing restructured its nearly 2,400 development flight test hour plan to focus on demonstrating key KC-46 aerial refueling capabilities required for the production decision. 
Significantly less testing will now be conducted prior to the decision and only three test months will be on a KC-46, compared to the original plan of 13 months. This testing is intended to demonstrate design maturity and fix design and performance problems before a system enters production. Boeing remains at risk of not being able to demonstrate the aerial refueling capabilities in time to meet the new production decision date due to late parts deliveries, software defects, and flight test cycle assumptions, which could result in additional delays. Program officials are gathering manufacturing knowledge to support a production decision, such as determining if suppliers can produce military subsystems in a production environment. However, the program office will have less knowledge about the reliability and performance of the KC-46 than planned because of reduced testing prior to the decision. While this increases the risk of discovering costly problems late in development, contract provisions specify that Boeing must correct these at no cost to the government.
Background Mail delivery is central to USPS’s mission and role in providing postal services to “bind the nation together through the personal, educational, literary, and business correspondence of the people.” USPS is required by law to provide prompt, reliable, and efficient services, as nearly as practicable, to the entire U.S. population. USPS is also required to maintain an efficient mail collection system. These and related requirements are commonly referred to as the “universal service obligation.” The PRC has reported that delivery frequency is a key element of universal postal service. Further, provisions in annual USPS appropriations since 1984 mandate 6-day-a-week delivery and rural mail delivery at certain levels. The frequency with which customers receive mail service from USPS has evolved over time to account for changes in communication, technology, transportation, and postal finances. As recently as 1950, residential deliveries were made twice a day. It is useful to note that while most current customers receive delivery service 6 days a week, some customers receive 5-day or even 3-day-a-week delivery, including businesses that are not open 6 days a week; resort or seasonal areas not open year-round; and areas not easily accessible, some of which require the use of boats, airplanes, or trucks to deliver the mail. Mail delivery accounted for nearly one-third of USPS’s $76 billion in expenses in fiscal year 2010. That year, USPS recorded compensation and benefit costs of nearly $23 billion and 586 million work hours for its more than 310,000 full- and part-time mail carriers. Delivery is labor intensive and includes carriers manually sorting certain mail into the sequence in which it will be delivered and delivering mail to, and collecting it from, nearly 131 million residential and business addresses. 
Moreover, this delivery network has grown on average by over 1 million addresses each year over the last 5 years, resulting in additional personnel, fuel, and vehicle costs. Declines in volumes, however, challenge USPS’s ability to make corresponding cost reductions because some activities—such as providing 6-day delivery—continue regardless of mail volume. Former Postmaster General John Potter testified in January 2009 that 6-day delivery was becoming unaffordable and requested that Congress not include the restriction in annual appropriations legislation so that USPS could move to a 5-day delivery schedule. USPS had been unable to cover its costs, many of which were difficult to cut, as mail volumes and revenues were declining. USPS developed a cross-functional, internal task force composed of experts across various operational areas to examine a variety of topics, including the economic value, financial and operational impacts, and business risk of eliminating a day of delivery. The task force solicited input from the public, mailing industry, and postal unions and management associations. Based on this work, the task force prepared USPS’s March 2010 5-day delivery plan, which was approved by USPS’s Board of Governors. Table 1 summarizes highlights of this plan. The plan recognizes that certain customers may be negatively affected by the proposed reduction in service; however, USPS believes the reduction is necessary to improve its long-term financial viability. In a March 2, 2011, testimony, the current Postmaster General Patrick Donahoe reiterated USPS’s position on 5-day delivery, stated there is no longer enough mail volume to support a 6-day delivery model, and said that moving to a 5-day schedule would strengthen USPS’s future.
USPS estimated that if 5-day delivery had been in effect in fiscal year 2009, it would have realized $3.1 billion in net savings—including $3.3 billion in gross savings less $0.2 billion in revenue losses due to a slight volume decline. Of these savings, $2.7 billion would have resulted from reducing work performed by city and rural carriers. City carriers employed specifically to handle delivery on the sixth day, as well as other part- and full-time city carriers used to replace regular carriers on the sixth day, would no longer be needed. USPS plans to achieve these work-hour reductions by attrition, involuntary separations, or other strategies to reduce work hours. Aside from these city-carrier savings, other savings would be achieved by eliminating rural-carrier work hours that would no longer be needed on Saturday, as well as other associated vehicle, supervisor, and transportation costs. USPS’s estimate of gross savings was based on “full-up savings,” which assumed that the transition to 5-day delivery, including the associated operational and workforce changes, had been fully implemented. USPS has not reported how many years this transition would take, but has noted that it would include realigning USPS’s workforce through attrition and other changes. Tables 2, 3, and 4 summarize USPS’s gross savings, mail volume, and net revenue estimates, including the supporting methodology and assumptions. These assumptions, methodologies, and conclusions have generated a significant amount of debate since USPS requested an advisory opinion from PRC in March 2010. USPS is required to request a PRC advisory opinion on a service change when USPS determines that there should be a change in the nature of postal services that will generally affect service on a nationwide basis. 
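The savings arithmetic above can be sketched in a few lines of Python (the dollar figures are from USPS's fiscal year 2009 estimate as reported here; the variable names are illustrative, not USPS's):

```python
# USPS's estimated fiscal year 2009 savings under 5-day delivery
# (dollar amounts in billions, as reported; names are illustrative).
gross_savings = 3.3   # eliminated Saturday delivery and support costs
revenue_loss = 0.2    # revenue forgone from a slight volume decline
carrier_share = 2.7   # portion of gross savings from city and rural carriers

net_savings = gross_savings - revenue_loss
print(f"Net savings: ${net_savings:.1f} billion")
print(f"Non-carrier share of gross savings: ${gross_savings - carrier_share:.1f} billion")
```

The non-carrier remainder corresponds to the vehicle, supervisor, and transportation costs described above.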
The resulting PRC public proceeding has provided a detailed record that includes, among other things, USPS’s proposal and its supporting evidence; input provided by interested parties through oral and written testimony and other supporting evidence; and various statements and rebuttals submitted by USPS and interested parties. PRC issued its nonbinding advisory opinion on March 24, 2011; however, since our audit work was completed in early March, this opinion was not included in this report. USPS Should Achieve Significant Cost Savings from 5-Day Delivery, but the Extent of Savings Will Depend on How Well the Change Is Implemented Cost Savings from 5-Day Delivery Are Likely to Be Significant, but Some Savings Will Depend on Increasing City-Carriers’ Productivity USPS is likely to achieve significant cost savings by reducing delivery from 6 days to 5 days because it would eliminate costs associated with carriers traversing their routes on Saturday. During the PRC proceeding, however, stakeholders raised a variety of criticisms about the assumptions, methodology, and conclusions USPS used to determine its cost-savings estimate. Exchanges have occurred between the stakeholders and USPS about these criticisms. Our analysis of these exchanges and our independent analysis identified a key area of concern that may have a considerable impact on USPS’s estimate of net financial savings—how efficiently USPS can absorb the cost of its workload transferred from Saturday to weekdays. USPS assumed that most of the Saturday workload transferred to weekdays would be absorbed through more efficient operations. According to USPS, this assumption was based on operational judgments that excess capacity in city-delivery operations would be used to accommodate this shifted mail volume because, among other things, certain carrier activities (e.g., loading mail into mailboxes) could be performed with greater mail volumes without much increase in work hours.
USPS also assumed that a significant portion of its delivery operations would not be substantially affected because this work must be performed regardless of mail volumes (e.g., driving the route). USPS stated it expects that absorbing shifted mail volumes would raise city-carrier productivity, and in doing so, would reverse the productivity declines experienced from shrinking mail volumes. Specifically, USPS has reported that city-carrier productivity has decreased in recent years as mail volume has declined. USPS stated that in a 5-day environment, with Saturday volume transferred to weekdays, higher carrier productivities would be reached. The National Association of Letter Carriers (NALC), which represents city carriers, expressed concerns that this higher city-carrier productivity may not be realized. Specifically, it stated that recent route adjustment efforts have resulted in routes that have little or no excess capacity to absorb the transferred volumes. If certain Saturday city-carrier delivery work hours were not absorbed under 5-day delivery (when mail is transferred to other days of the week), USPS estimated the cost savings would be reduced by up to $500 million. To the extent that USPS can absorb this Saturday workload, city-carrier productivity and cost savings should increase. We believe that USPS could realize some increase in city carriers’ productivity under 5-day delivery, but the extent to which such an increase would be realized, and Saturday workload absorbed into the weekday schedule, would depend on several factors. These factors would include how effectively USPS implemented its 5-day delivery plans and managed its city carriers and their workload based not only on 5-day delivery, but also on other actions to increase delivery efficiency, such as adjusting delivery routes and deploying flat-mail sorting equipment.
In addition, the Greeting Card Association raised concerns during the PRC proceeding about the compensation rates and attrition levels USPS used in calculating its initial work-hour savings. USPS acknowledges it will not achieve its estimated savings from 5-day delivery within the first year of implementation, in part because some savings are based on realigning USPS’s workforce over time through attrition and other changes. USPS can use workforce strategies to support its workforce transition, such as not filling vacancies before implementing 5-day delivery or offering early-out retirement. USPS also has flexibility to reduce hours for different types of noncareer part-time employees whose wages vary widely, but are generally less than the wage levels USPS assumed in its cost-savings estimates. The timing and amount of savings achieved would be influenced by how these strategies were employed and when eligible employees decided to retire. Five-Day Delivery Will Likely Result in Volume Loss and Concerns Have Been Raised that USPS May Have Understated These Losses USPS and many others agree that 5-day delivery will produce some volume loss as mailers and customers see service reduced with the elimination of Saturday delivery. Concerns have been raised, however, that USPS may have understated the potential volume losses, based on questions about the methodology USPS used to develop its estimate and the impact of other factors that can affect mail volumes. During the PRC proceeding, for example, some stakeholders raised questions about USPS’s methodology for adjusting mailers’ estimates of how 5-day delivery would affect their mail volumes. USPS adjusted these estimates in only one direction, which could understate the potential volume loss in a 5-day environment, as the following examples illustrate. Example using USPS methodology:
If a business mailer estimated (1) a 50,000-piece volume loss and (2) it was 50 percent likely to modify its mail volume under 5-day delivery, USPS adjusted the mailer estimate and incorporated a 25,000-piece volume loss into its analysis. First alternative methodology: for the same mailer estimate, it may have been more appropriate to adjust the estimated loss in both directions, so that the analysis incorporated losses of both 25,000 and 75,000 pieces. Second alternative methodology: make no adjustments and incorporate the mailer estimates at face value. Although USPS indicated that its method of adjusting mailer estimates was a standard industry practice, decision makers may find it useful to have an analysis that reflects adjustments in each direction or one that takes the mailer estimates at face value. In addition, some stakeholders expressed concern about potential bias in USPS’s mailer surveys supporting its estimate of a 0.71 percent volume loss resulting from a shift to 5-day delivery—some said that the questionnaire language elicited overstatements of volume loss, while others said that it elicited understatements. USPS stated that its estimate likely overstated the volume and revenue losses that would result from its 5-day delivery proposal because respondents were asked about service cutbacks that USPS subsequently dropped after conducting the survey. According to NALC, however, aspects of the questionnaire language encouraged respondents to understate the degree to which 5-day delivery would affect their future mail volumes. USPS responded that the questionnaire was designed to address potential biases and was carried out in a manner that avoided them.
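The competing adjustment methodologies described above can be sketched as follows (a minimal illustration; the function names are ours, not USPS's or PRC's):

```python
def usps_adjustment(estimated_loss, probability):
    """USPS's approach: scale the stated loss downward by the mailer's
    self-reported likelihood of modifying its mail volume."""
    return estimated_loss * probability

def symmetric_adjustment(estimated_loss, probability):
    """Alternative raised in the PRC proceeding: adjust in both
    directions, yielding low and high loss scenarios."""
    return (estimated_loss * probability,
            estimated_loss * (1 + probability))

def face_value(estimated_loss, probability):
    """Alternative: make no adjustment at all."""
    return estimated_loss

# A mailer reporting a 50,000-piece loss with a 50 percent likelihood:
print(usps_adjustment(50_000, 0.5))       # 25000.0
print(symmetric_adjustment(50_000, 0.5))  # (25000.0, 75000.0)
print(face_value(50_000, 0.5))            # 50000
```

Because the first approach only ever scales estimates downward, summing its outputs across mailers yields the smallest aggregate loss of the three, which is the basis of the understatement concern.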
In addition to these methodological issues, information collected from our survey of mailer groups in the summer of 2010 suggests that these groups are somewhat uncertain about how 5-day delivery would affect future mail volumes. For example, when asked “do the members of your organization expect USPS’s proposal for 5-day delivery to affect the mail volumes that they or their customers send?” 51 percent of respondents answered “yes,” 29 percent “no,” and 20 percent “not sure” (see app. II for survey results). Five-day service is only one factor among many others, such as postage prices, the availability of electronic alternatives, competition, the cost-effectiveness of mail, quality of service, and ease of use that residential and business customers consider as they plan their current and future use of mail. Changes in any of these key factors may affect future mail volume, particularly given the uncertainty about the effects of 5-day delivery. Addressing Operational Concerns Would Be Key to Successfully Implementing 5-Day Delivery Implementing 5-day delivery would require USPS to realign its operations network to increase efficiency, maintain service, and address many of the operational issues raised by stakeholders. To its credit, USPS has dedicated a substantial amount of staff time and other resources to planning the staff, system, network, and operational realignments needed to transition to a 5-day environment and has developed communication and implementation plans to support this effort. Risks remain, however, that could threaten USPS’s ability to implement these plans and achieve its financial targets. Stakeholders have raised a variety of operational issues about USPS’s proposal, such as the extent to which USPS will be able to maintain high-quality service; align its processing, retail, and delivery network with changes in workload; and communicate changes with stakeholders.
These operational issues would have a direct impact on its ability to capture its estimated cost savings and retain volumes. To address these issues, USPS has developed communication plans for internal and external stakeholders, including employees, mailers, and the public, and implementation plans across functional areas, such as delivery, transportation, mail processing, and post office operations. These plans describe the major tasks, milestones, and time frames needed to transition to 5-day delivery within 6 months of congressional approval. Because USPS field officials will play a major role in making these changes, USPS headquarters officials are working with field-staff representatives to develop implementation guidance. This guidance, which is to be finalized in March 2011, will include checklists of key steps that field managers will have to complete to be adequately prepared for the transition. Five-Day Delivery Would Provide Cost Savings, but Additional Restructuring Actions Are Also Needed Five-Day Delivery Involves Difficult Trade-offs USPS’s 5-day delivery proposal involves both positive and negative effects. Key benefits would include improving USPS’s financial condition by reducing costs; reducing the size of its workforce; and increasing efficiency by better aligning delivery operations with reduced mail volumes. On the minus side, it would reduce service; put mail volumes and revenues at risk; eliminate jobs; decrease USPS advantages over competitors that do not have Saturday delivery; and, by itself, be insufficient to solve USPS’s financial challenges. Table 6 summarizes these and other key trade-offs. Moving to 5-day delivery would not, by itself, resolve USPS’s considerable financial challenges. We have reported that a variety of actions are needed for USPS to adapt more quickly to changes in the public’s use of mail and to achieve financial viability.
USPS’s 5-day proposal should be considered in the context of other restructuring strategies both within and outside the delivery network. In April 2010, we discussed strategies and options across multiple operational and financial areas targeted at reducing operational costs and improving efficiency, including the following: Delivery operations—expand use of more cost-efficient delivery, such as cluster boxes. Retail operations—optimize the retail facility network, move more retail services to private stores and self-service locations, and close unneeded facilities. Mail processing operations—close unneeded facilities or relax delivery standards to facilitate closures or consolidations. We have also reported on other actions that could improve USPS’s overall financial condition, including the following: revising retiree health benefits funding requirements, reducing compensation and benefit costs, and generating revenues through product and pricing flexibility. Mailer, business, and public use of the mail is changing as technology and alternatives evolve. Five-day service is one factor among many others, such as postage prices, the availability of electronic alternatives, competition, the cost-effectiveness of mail, quality of service, and ease of use that residential and business customers consider as they plan their current and future use of the mail. Through our survey of mailer groups and major postal labor unions and management associations, conducted in the summer of 2010, we found that they were divided over the merits of 5-day delivery, with the groups offering diverse perspectives on the potential effects on mail volume and members’ finances. Asked about USPS’s proposal for 5-day delivery, about half of those surveyed did not express an overall view, with some explaining that their members had differing views on the matter.
Of those surveyed, 22 percent favored USPS’s proposal, including some who said it would lower USPS’s costs and thus help keep rates down. Some other proponents qualified their support, such as by stating they would favor 5-day delivery only as part of a package of changes to improve USPS’s financial condition. Twenty-eight percent opposed USPS’s proposal, often stating it would reduce mail volume and harm USPS’s business. Some mailer groups said the change would negatively affect customers and increase their costs, while others said they would prefer other options to improve USPS’s financial condition. Postal labor unions and management associations raised similar objections. One employee organization expressed concern that 5-day delivery would negatively affect vulnerable populations, such as rural residents, the homebound, the elderly, and small businesses that depend on the mail. Another employee organization expressed concerns that 5-day delivery would negatively affect Saturday retail service and efficiency, as well as reduce support for USPS and its monopoly on delivering mail to mailboxes. See appendix II for tabulations of the responses to our survey. A Decision on 5-Day Delivery Should Be Made in Conjunction With Additional Restructuring Actions The status quo is unsustainable as USPS is unable to finance its current operations and service levels. Action by Congress and USPS is urgently needed to comprehensively restructure USPS’s operations, networks, and workforce and to modernize its organization. We recently reported that Congress and USPS urgently need to reach agreement on a package of actions to restore USPS’s financial viability and enable it to begin making necessary changes. We also stated that Congress should consider any and all options available to reduce USPS’s costs, and that one option for Congress is to not include appropriations language requiring 6-day delivery. 
When fully implemented, 5-day delivery would provide USPS with needed cost savings, although the extent of those savings is uncertain. Additional uncertainties remain as factors other than delivery frequency—e.g., price increases—can also affect mail volumes and revenues. USPS’s role in providing universal postal services can affect all American households and businesses, so fundamental changes to universal postal services involve key public policy issues for Congress to decide. Some questions for Congress to consider include the following: What aspects of universal postal service, including frequency of mail delivery, are appropriate in light of fundamental changes in the use of mail? How much postal service does the nation need and what restructuring of USPS’s operations is needed for USPS to become more efficient, cover its costs, keep rates affordable, and meet changing customer needs? What incentives and oversight mechanisms are needed to ensure an appropriate balance between providing USPS with more flexibility and ensuring sufficient transparency, oversight, and accountability? Congressional decision making on USPS’s 5-day delivery proposal will be informed not only by this report but also by PRC’s public proceedings and advisory opinion on the proposal. If Congress decides 5-day delivery is necessary, Congress and USPS could factor the savings from 5-day delivery into deliberations about what package of actions should be taken to restore USPS’s financial viability. Conversely, if Congress maintains the mandate for 6-day delivery, Congress and USPS would need to find other ways for USPS to achieve substantial financial savings. This would entail difficult decisions with implications for USPS’s infrastructure, workforce, and service.
USPS’s financial crisis is nearing a tipping point as USPS anticipates having insufficient cash at the end of fiscal year 2011 to meet all of its obligations, as well as reaching its $15 billion statutory debt limit. Addressing USPS’s financial viability is critical, since USPS plays a vital role in the U.S. economy and provides postal services to all communities. Because GAO previously recommended that Congress consider providing USPS with financial relief, and in doing so, consider all options available to reduce costs, this report contains no new recommendations. Agency Comments and Our Evaluation We provided a draft of this report to USPS for review and comment. USPS provided comments in a letter from the Vice President, Corporate Communications, dated March 18, 2011. These comments are presented in appendix III. USPS also provided technical comments, which we incorporated where appropriate. USPS generally agreed with our findings, but provided additional context for these findings. USPS said that the net cost savings from ending Saturday delivery would be significant given its operations and cost structure. USPS also stated that it agrees with the findings in our report that said there are always risks and uncertainties in modifying information systems. However, USPS noted that most of its information systems that need to be modified to support 5-day delivery have already passed rigorous testing and there is no appreciable risk or uncertainty relating to these systems. Finally, USPS commented that a change to 5-day delivery is needed because there is no longer sufficient volume to sustain 6-day delivery, and that moving to 5-day delivery is one of the fundamental business model changes needed to help it close the gap between revenues and costs. We are sending copies of this report to the appropriate congressional committees, the Postmaster General, the Chairman of the Postal Regulatory Commission, and other interested parties. 
In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-2834 or herrp@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology The objectives of our work were to assess (1) the U.S. Postal Service’s (USPS) cost and volume estimates and the operational impacts associated with its 5-day delivery proposal and (2) the trade-offs and other implications associated with this proposal. To assess these estimates and impacts, we reviewed documents and testimonies from a wide variety of postal stakeholders, including USPS, the Postal Regulatory Commission (PRC), mailers, unions, management associations, economists, and concerned citizens. Much of this information was submitted as part of PRC’s review of USPS’s proposal to move to 5-day delivery. We also reviewed USPS’s action plan entitled Ensuring a Viable Postal Service for America: An Action Plan for the Future; a 5-day delivery plan entitled Delivering the Future: A Balanced Approach, Five-Day Delivery Is Part of the Solution; operational-specific communication and implementation plans; and other financial and operational information contained in USPS’s annual reports, integrated financial plans, and comprehensive statements. We also reviewed reports from PRC, the USPS Office of Inspector General, the Congressional Research Service, the Congressional Budget Office, and the National Commission on Fiscal Responsibility and Reform; relevant congressional hearings and testimonies; laws requiring 6-day delivery; pending postal reform legislation; and our past work. 
We interviewed officials from USPS, including officials on its 5-day delivery task force, postal labor unions, and postal management associations. In conducting our analysis of the estimates and operational impacts of 5-day delivery, we also reviewed the assumptions and methodologies that USPS used as the foundation for its conclusions and estimates for cost reduction and mail-volume impact. We reviewed the multiple exchanges before PRC about issues or criticisms that stakeholders had raised about USPS’s assumptions, methodology, operational issues, and conclusions and USPS’s corresponding responses. In assessing the estimates themselves and these various exchanges, we considered the following criteria for conducting these types of reviews: the magnitude of criticism on the overall estimate; the internal consistency; consistency with social science best practices; the reasoning or support behind the methodology; and the presence of agreement and/or disagreement among stakeholders. After applying these criteria, we further analyzed two issues that may significantly affect USPS’s net financial savings estimate—(1) USPS’s assumption and judgment that most of the Saturday workload transferred to weekdays would be absorbed through more efficient city delivery operations and (2) USPS’s methodology for estimating the effect of its 5-day delivery proposal on mail volume, including reducing mailers’ estimates based on their responses to the question asking about the likelihood that their mail volumes would change if USPS implemented its proposal. We surveyed 65 selected stakeholders in the summer of 2010 about their views on USPS’s proposal for 5-day delivery, including implementation issues, concerns, and questions; their views on whether the implementation of USPS’s 5-day proposal would affect mail volume and mailers’ financial conditions; and their overall position on USPS’s 5-day proposal.
Stakeholders we identified included all seven major postal labor unions and management associations and 58 mailing industry associations that either (1) formally participated in PRC’s proceeding on 5-day delivery or (2) were members of the Mailers Technical Advisory Committee (MTAC). MTAC is a venue for USPS to share technical information with mailers and to receive advice and recommendations from mailers on matters concerning mail-related products and services. MTAC members include mailer associations and other associations and organizations related to the mailing industry. MTAC organizations represent major segments of the mailing industry, and their members generate most of the mail volume. However, MTAC organizations are not statistically representative of all organizations in the mailing industry and, therefore, our survey results cannot be generalized to the industry as a whole. We received responses from five major postal labor unions and management associations and 45 mailing industry associations, for a combined response rate of about 77 percent. Our work to assess the trade-offs and other implications associated with USPS’s 5-day delivery proposal was based on analyzing the evidence collected for this engagement and our past work on restructuring options and strategies for USPS. We conducted this performance audit from May 2010 to March 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
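The combined response rate reported above can be verified with simple arithmetic (an illustrative sketch only; all figures are taken from the text, and the variable names are our own):

```python
# Quick arithmetic check of the reported survey response rate.
# Figures come from the methodology text; this is a sketch, not GAO's computation.
union_responses = 5    # major postal labor unions and management associations responding
mailer_responses = 45  # mailing industry associations responding
surveyed = 65          # stakeholders surveyed in the summer of 2010

total_responses = union_responses + mailer_responses
response_rate_pct = round(100 * total_responses / surveyed)

print(total_responses)     # 50 responses in all
print(response_rate_pct)   # about 77 percent
```

As the sketch shows, 50 responses out of 65 surveyed stakeholders rounds to the "about 77 percent" figure stated in the text.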
Appendix II: Results of GAO’s Survey of Mailer and USPS Employee Groups Tabulations of the 50 responses to our survey from five major postal labor unions and management associations and 45 mailing industry associations follow. Some questions in the survey were asked of all organizations; other questions applied only to mailer groups. Mailer group question: Do the members of your organization expect USPS’s proposal for 5-day delivery to affect the mail volume that they or their customers send? Mailer group question: How would USPS’s proposal for 5-day delivery affect the financial condition (i.e., costs and/or revenues) of your members? Mailer/USPS employee group question: Does your organization favor, oppose, or have no position on USPS’s proposal to reduce delivery to 5 days a week? Appendix III: Comments from the United States Postal Service Appendix IV: GAO Contact and Staff Acknowledgments Staff Acknowledgments In addition to the individual named above, Teresa Anderson, Brittany Alfonso-Guerrero, Joshua Bartzen, Patrick Dudley, Heather Frevert, Brandon Haller, Kenneth John, Hannah Laufe, and Crystal Wesco made key contributions to this report. Related GAO Products High-Risk Series: An Update. GAO-11-278. February 2011. U.S. Postal Service: Foreign Posts’ Strategies Could Inform U.S. Postal Service’s Efforts to Modernize. GAO-11-282. February 16, 2011. U.S. Postal Service: Legislation Needed to Address Key Challenges. GAO-11-244T. December 2, 2010. U.S. Postal Service: Action Needed to Facilitate Financial Viability. GAO-10-601T. April 22, 2010. U.S. Postal Service: Action Needed to Facilitate Financial Viability. GAO-10-624T. April 15, 2010. U.S. Postal Service: Strategies and Options to Facilitate Progress toward Financial Viability. GAO-10-455. April 12, 2010. U.S. Postal Service: Financial Crisis Demands Aggressive Action. GAO-10-538T. March 18, 2010. U.S. Postal Service: Financial Challenges Continue, with Relatively Limited Results from Recent Revenue-Generation Efforts. 
GAO-10-191T. November 5, 2009. U.S. Postal Service: Restructuring Urgently Needed to Achieve Financial Viability. GAO-09-958T. August 6, 2009. U.S. Postal Service: Broad Restructuring Needed to Address Deteriorating Finances. GAO-09-790T. July 30, 2009. High-Risk Series: Restructuring the U.S. Postal Service to Achieve Sustainable Financial Viability. GAO-09-937SP. July 28, 2009.
The U.S. Postal Service's (USPS) financial outlook has deteriorated as customers have shifted to electronic alternatives. Mail volumes have declined over 20 percent since fiscal year 2006 and are expected to continue declining. To help its financial outlook, in March 2010, USPS presented a detailed proposal to the Postal Regulatory Commission (PRC) to move from a 6-day to a 5-day delivery schedule. USPS projected this would save about $3 billion annually and reduce mail volume by less than 1 percent. This proposal factors in widespread changes to USPS's workforce and networks. USPS has also asked Congress to not include language in its annual appropriation requiring 6-day-a-week delivery. As requested, GAO assessed (1) USPS's cost and volume estimates and the operational impacts associated with its 5-day delivery proposal and (2) the trade-offs and other implications associated with this proposal. GAO reviewed USPS's proposal (including its assumptions, methodologies, and conclusions) and other information from the PRC's 5-day delivery proceeding, surveyed postal employee and mailer groups, and interviewed USPS officials and postal employee groups. Because GAO previously recommended that Congress consider providing financial relief to USPS, as well as other cost-saving options, this report contains no new recommendations. USPS generally agreed with GAO's findings and provided additional context on its proposed change to end Saturday delivery. USPS's proposal to move to 5-day delivery by ending Saturday delivery would likely result in substantial savings; however, the extent to which it would achieve these savings depends on how effectively this proposal is implemented. USPS's $3.1 billion net cost-savings estimate is primarily based on eliminating city- and rural-carrier work hours and costs through attrition, involuntary separations, or other strategies. USPS also estimated that 5-day delivery would result in minimal mail volume decline. 
However, stakeholders have raised a variety of concerns about USPS's estimates. First, USPS's cost-savings estimate assumed that most of the Saturday workload transferred to weekdays would be absorbed through more efficient delivery operations. If certain city-carrier workload would not be absorbed, USPS estimated that up to $500 million in annual savings would not be realized. Second, USPS may have understated the size of the potential mail volume loss due to questions about the methodology USPS used to develop its estimates of how 5-day delivery may affect mail volumes. The extent to which USPS can achieve cost savings and mitigate volume and revenue loss depends on how well and how quickly it can realign its operations, workforce, and networks; maintain service quality; and communicate with stakeholders. USPS has spent considerable time and resources developing plans to facilitate this transition. Nevertheless, risks and uncertainties remain, such as how quickly it can realign its workforce through attrition; how effectively it can modify certain finance systems that cannot be changed until congressional approval for 5-day delivery is granted; and how mailers will respond to this change in service. Further, uncertainties remain because factors other than delivery frequency (e.g., price increases) can also affect mail volumes and revenues. USPS's proposal involves several factors that need to be considered. It would improve USPS's financial condition by reducing costs, increasing efficiency, and better aligning its delivery operations with reduced mail volumes. However, it would also reduce service; put mail volumes and revenues at risk; eliminate jobs; and, by itself, be insufficient to solve USPS's financial challenges. USPS's role in providing universal postal services can affect all American households and businesses, so fundamental changes involve key public policy decisions for Congress. 
If Congress decides 5-day delivery is necessary, then Congress and USPS could factor the savings into deliberations about what package of actions should be taken to restore USPS's financial viability. Conversely, if Congress maintains the mandate for 6-day delivery, Congress and USPS would need to find other ways to achieve equivalent financial savings, so that the package is sufficient to restore USPS's financial viability. This would likely entail difficult decisions with broad implications for USPS's infrastructure, workforce, and service. As GAO has reported, a package of actions by Congress and USPS is urgently needed to modernize USPS's operations, networks, and workforce.
Background NLS operates a program that provides free reading materials for residents and citizens of the United States and its territories as well as U.S. citizens residing abroad who are generally unable to read standard print because of a visual or physical disability. Under its authorizing statute, the program may provide reading materials in braille, audio, and other formats, and since the 1940s, may provide devices for reproducing audio recordings. The types of materials that NLS provides include books, magazines, music scores and materials for music instruction. In addition, NLS users have access to over 400 state, national, and international audio and braille newspapers through Newsline, a telephone and internet-based service. The Free Matter for the Blind and Other Physically Handicapped Persons program, administered by USPS, assists NLS in circulating materials to its users. In fiscal year 2016, USPS had a budget of approximately $55.1 million for free delivery of mail in the NLS program and certain other purposes. USPS delivered 43.9 million pieces of mail through the program during fiscal year 2015. NLS’s Structure and Administration NLS is within the LOC’s Office of National and International Outreach, under LOC’s organizational structure effective Oct. 1, 2015 (see fig. 1). NLS receives its own congressional appropriation; however, LOC oversees the budget and activities of NLS and approves its budgeting decisions. For instance, if NLS’s budgetary plan includes investing in a new technology initiative, it must submit a proposal for approval by LOC’s Information Technology Steering Committee. LOC also oversees NLS’s strategic planning process. NLS is currently in the process of developing its first comprehensive strategic plan, which NLS officials stated will be completed in fiscal year 2016. LOC will review this plan to ensure it aligns with LOC’s overall strategic plan. 
In addition, LOC has oversight of NLS through processes such as monitoring, checks of internal control procedures, and performance management. Administration of the NLS program is shared between NLS headquarters and a national network of libraries and outreach centers. Headquarters is located in Washington, D.C., and its staff’s functions and responsibilities include selection and production of reading materials, procurement of playback equipment, establishment of standards and assurance of quality products and services, and development, maintenance, and circulation of the specialized music collection. In addition, NLS relies on a network of 101 regional and sub-regional libraries and outreach centers to implement the program. Most states have one regional library participating in the network that is operated by the state or other entity. Some states also have sub-regional libraries that coordinate with the regional libraries to serve a specific geographical area, and are generally operated by public libraries. NLS network libraries conduct outreach to potential users; screen applicants for eligibility; provide customer service to users such as assistance with selecting an appropriate NLS device and identifying preferred reading materials; store, maintain, and circulate NLS books and machines; and report to NLS on equipment, books, and users. The operating costs for these activities and services are not funded by NLS but rather by state, local, and other sources. Populations Eligible for NLS Under LOC regulations, the following four categories of individuals are eligible to access the NLS program: Blindness This refers to persons whose visual acuity is 20/200 or less in the better eye with correcting glasses or who have a restricted field of vision. Visual Disability This refers to persons whose visual disability with correction prevents the reading of standard printed material. 
Physical Limitations This refers to persons who are unable to read or unable to use standard printed material because of physical limitations. Reading Disability Resulting From Organic Dysfunction This refers to persons who have a reading disability resulting from organic dysfunction that is severe enough to prevent them from reading printed material in a normal manner. An NLS factsheet states such reading disabilities may include dyslexia, attention deficit disorder, autism, or developmental disabilities. Under the LOC regulations, a competent authority is required to determine the eligibility of all potential users. In cases of blindness, or visual or physical disabilities, a variety of professionals are permitted to certify an individual’s eligibility, including doctors of medicine, registered nurses, therapists, social workers, and certain hospital staff, among others. In the absence of any of the competent authorities listed in the regulation, a professional librarian may approve eligibility. By contrast, in the case of those with a reading disability, the competent authority must be a doctor of medicine who may consult with colleagues in associated disciplines. Estimates of the blind and visually disabled population vary widely, and the precise number who may be eligible for NLS is unknown. Estimates for this population are often based on self-reported information and rely on different definitions of blindness and visual disability. For example, according to the National Health Interview Survey (NHIS), in 2012 there were approximately 21 million adults ages 18 and older who reported that they had “trouble seeing.” However, according to the American Community Survey (ACS), in 2013 there were approximately 7 million adults ages 18 and older who reported that they were blind or experienced “serious difficulty seeing.” It is also difficult to estimate the number of people who would potentially qualify for the NLS program based on reading and physical disabilities. 
With regard to reading disabilities, a National Center for Learning Disabilities (NCLD) report estimated that in 2012 there were approximately 2.4 million public school students who qualified for special education programs under the Individuals with Disabilities Education Act (IDEA) based on learning disabilities, and many of these students had reading disabilities specifically. In addition, according to the Survey of Income and Program Participation in 2010, there were at least 3.5 million adults ages 18 and older with learning disabilities, including reading disabilities such as dyslexia. Regarding physical disabilities, NLS officials said that the wide range in the types and severity of potentially qualifying conditions and the lack of centralized data make it difficult to estimate the population of potentially eligible users of the NLS program. NLS Users Are Primarily Older Adults with Visual Disabilities, but NLS Efforts Are Not Ensuring Full Access and Awareness Older Adults with Visual Disabilities Make Up the Majority of Users, though an Eligibility Requirement May Limit Access of Potential Users with Reading Disabilities In fiscal year 2014, about 430,000 individuals used the NLS program, with the majority being older individuals who were blind or had other visual disabilities. The majority of NLS users were aged 60 and over (about 70 percent), with almost 20 percent at least 90 years of age (see fig. 2). In addition, almost 85 percent of NLS users were either blind or had other visual disabilities resulting in their inability to read standard print (see fig. 3). NLS officials told us that the majority of users have age-related vision loss and therefore did not qualify for services until later in life. About 6 percent of NLS users had physical disabilities, which include multiple sclerosis and Parkinson’s disease, according to officials we spoke with from network libraries. Another nearly 6 percent of users had reading disabilities. 
NLS guidance explains, and network library officials corroborated, that users’ reading disabilities generally include dyslexia, autism, and traumatic brain injuries. In part because of their older age, many users have physical dexterity issues which compound their other disabilities, according to NLS officials. Although NLS does not track users’ mobility or dexterity limitations as part of its annual data collection efforts, a survey of users and non-users NLS contracted for in 2013 indicated that almost half of users had limited mobility, and about a third had problems with manual dexterity. NLS’s 2013 survey of users and non-users indicated that NLS users generally have retired from employment or are unemployed, have low or fixed incomes, and are more likely to live alone than non-users. In addition, 13 percent of the user respondents reported having served in the military. The number of NLS users remained stable from fiscal year 2010 through fiscal year 2014, according to NLS data. NLS officials said they estimated about 10 percent turnover in their users each year. While they recruit new users, they said the number of older users who die each year generally results in the number of users staying about the same. Although NLS does not project user estimates for future years, the proportion of the U.S. population age 65 and older is expected to increase from 13 percent in 2010 to more than 20 percent in 2050, which may increase the number of NLS’s older users. While NLS serves individuals with a range of disabilities, an eligibility requirement specific to individuals with reading disabilities may hinder this group of potentially eligible users from accessing services. 
Specifically, the regulatory requirement that only doctors of medicine may certify a reading disability was cited as a barrier to services by staff with whom we spoke at 5 of the 8 network libraries, 2 organizations that provide similar services to NLS, and 2 organizations specializing in learning disabilities. This eligibility requirement, which originated in 1974 and has remained largely unchanged since, creates additional steps and costs for applicants with reading disabilities in comparison to other groups, and may hinder some individuals’ access to services. For example, officials we spoke with from a network library said that many of their potential users have little money and live in rural areas that are far from doctors, which limits their ability to get the necessary certification. Furthermore, a medical diagnosis is not necessary to determine if an individual has a reading disability, according to a number of groups we interviewed and the policies of other organizations that support people with these disabilities. According to staff we spoke with at two organizations specializing in learning disabilities, and 6 of the 8 network libraries, special education teachers and school staff are typically also knowledgeable about reading disabilities. Recognizing this, certification of reading disabilities is conducted by non-medical personnel for other disability services. For example, under IDEA educational services are provided to eligible children with disabilities, including learning disabilities, which may affect reading. However, IDEA does not require a doctor’s certification of eligibility; this determination is instead made by the child’s parents and a special education team generally comprised of the child’s teacher and at least one other person qualified to conduct diagnostic examinations of children, such as a school psychologist or remedial reading teacher. 
In addition, two private organizations that, similar to NLS, provide individuals with alternatives to standard print materials, use LOC regulations as guidance to determine the eligibility of individuals with disabilities except for reading disabilities. For reading disabilities, these organizations instead allow certification by the same individuals they deem competent authorities for the other LOC eligibility categories. Although the eligibility requirement for those with reading disabilities may be inconsistent with other federal policies and with some entities’ current practices, and potentially hinder access to services, NLS does not plan any modifications. Network libraries have formally recommended to NLS that it re-visit the requirement that a doctor certify the eligibility of those with reading disabilities. This is also consistent with our previous recommendations that agencies providing disability benefits and services should ensure they use up-to-date medical criteria, which reflect advances in medicine and technology and include consideration of non-medical evidence. NLS officials said that changing the eligibility requirement for reading disabilities may lead to more users and increased costs. Two other organizations that provide similar services saw an increase in the number of users after they changed their certification requirements so non-medical personnel could certify eligibility. NLS has not estimated the potential demand for its services by those with reading disabilities, and so the actual effect on NLS services from revising the eligibility requirement is unknown. NLS Is Initiating New Outreach Efforts, but Does Not Collect Information Needed to Evaluate Their Effectiveness NLS’s current users likely represent a small percentage of those eligible, but NLS has initiated new efforts to increase awareness and usage of its services. In 2014, NLS developed a plan for improving and expanding its outreach efforts. 
This plan is based, at least in part, on the recommendations from the 2013 survey of NLS users and non-users. The efforts may help address outreach challenges reported to us by staff at the eight network libraries, including limited nationwide awareness, a lack of information in accessible formats, difficulty reaching the wide variety of potentially eligible populations, and a lack of guidance provided to network libraries. NLS’s efforts to improve outreach include: Increasing electronic recruitment methods: NLS has established additional electronic resources, including website announcements and advertisements through social media. For example, NLS developed a Facebook page and is developing a new website. These changes may increase nationwide awareness of services, which staff at 5 of the 8 libraries told us was needed. Producing more information in accessible formats: NLS is developing videos for its website as well as talking guides on how to use its services. These guides are being developed specifically for older individuals to explain processes step by step. Previously, information on services was mostly provided via brochures and posters. Fostering more partnerships: NLS is increasing communications with other organizations that serve its eligible populations. In October 2014, NLS sent e-mails to 300 organizations identified as serving people who may be eligible for its services, with the goal of partnering with these organizations and conducting outreach through them. According to officials, 150 organizations responded to this email and agreed to work with NLS. For example, veterans service organizations agreed to ensure veterans are informed about the program and encouraged to take advantage of its services. 
Providing an outreach toolkit for network libraries: NLS recently released a toolkit providing guidance and materials such as customized posters and a webinar for librarians on how to effectively conduct outreach through partnerships, media, social media, and events. Staff at 6 of the 8 network libraries told us they wanted more guidance and assistance from NLS on outreach efforts such as these. While NLS is making efforts to improve outreach, it has not collected information necessary to evaluate these efforts. NLS’s plan and ongoing efforts to improve outreach address a number of best practices for outreach that we have previously identified, such as researching the target audience, identifying stakeholders, obtaining feedback, and using multiple approaches. However, NLS has not developed a plan for assessing its outreach efforts, also a best practice we previously identified. Generally, NLS officials told us they will judge the success of these new outreach efforts by determining whether there have been increases in the overall number of users, the number of users in particular target categories, and the number of visits to their website. However, these measures will not inform NLS as to which efforts directly resulted in new NLS users, which would help NLS allocate resources to those that are most cost-effective. Although staff we interviewed at 3 of the 8 network libraries said they have tracked how users heard about their services for this purpose, NLS does not obtain such data centrally. NLS Offers Materials in a Range of Formats, but Statutory and Other Limitations Impede Adoption of Potentially Cost-Saving Technologies NLS Provides a Range of Reading Formats, but Most Users Choose to Receive Audio Materials Through the Mail NLS offers its users several options for receiving both audio and braille reading materials, and the vast majority of NLS users choose to receive audio materials, primarily in the form of digital cartridges sent through the mail. 
NLS users may receive audio materials through the mail on digital cartridges or cassettes, download audio files from the Internet, receive hard copy braille documents through the mail, or download braille files from the Internet. According to NLS administrative data, almost 90 percent of NLS users received digital cartridges during fiscal year 2014, with the majority playing these cartridges on specialized audio devices provided by NLS, and a much smaller number using other, commercially available devices. (See fig. 4.) About a third of NLS users continued to receive cassettes through the mail, although this format is being phased out. Downloading from the Internet was less popular than receiving materials through the mail, with only about 10 percent of NLS users downloading audio materials through NLS’s online Braille and Audio Reading Download (BARD) system. BARD enables eligible users to search for and select audio files for immediate download rather than wait to receive materials through the mail. These files may be transferred to a digital cartridge and played on NLS’s specialized device, or downloaded directly to and played on a variety of commercially available devices, such as smartphones. A much smaller proportion of NLS users chose to receive braille materials, whether in hard copy or downloaded from BARD and read on a refreshable braille device that converts an electronic text file into braille characters. Over the last 5 years, the majority of materials circulated to NLS users each year have been either digital cartridges or cassettes, although the number of items downloaded through BARD has been gradually increasing (see fig. 5). The number of digital cartridges has increased substantially since they were introduced in 2009, while the number of cassettes has declined as they are phased out. Meanwhile, the number of audio files downloaded annually from BARD more than doubled between fiscal years 2010 and 2014. 
Among braille materials, there has been a shift away from hard copy to electronic braille. Most users’ preference for receiving materials through the mail and playing them on an NLS-provided specialized audio device appears to be linked to their level of comfort with technology and their access to the Internet, according to interviews and survey data. NLS designed the digital cartridges and players that provide users with audio books and magazines to be easy to see and handle for those with visual and other disabilities. The program’s mainly older users feel comfortable with NLS’s specialized audio player because it is user friendly, according to staff at all 8 network libraries we contacted. For example, librarians in one state said many users like NLS’s player because it is durable and easy to use, and many—especially those who lost their vision later in life—do not feel as comfortable using commercially available audio devices. At the same time, younger NLS users—a minority of the customer base—may prefer to use other devices, such as smartphones, to access NLS audio materials, according to staff we spoke with at 6 of the 8 network libraries. Staff in one library said younger users tend to be more sophisticated in their use of technology, and prefer to use smaller, mainstream devices rather than the NLS player. (See fig. 6 for an image of NLS’s standard and advanced players and the commercial audio device which as of August 2015 had more registered NLS users than any other commercially available device.) Furthermore, some users lack Internet access or do not feel comfortable downloading files from the Internet. According to NLS’s 2013 user survey, about 40 percent of those not using BARD cited lack of Internet access as a reason. Staff at all 8 network libraries told us that many of their NLS users lack access either to the Internet or a computer. 
For example, staff in one library told us many of their NLS users have low incomes, or are older with fixed incomes, and many, especially in more rural areas, lack the high-speed Internet connection needed for BARD. According to NLS’s 2013 user survey, about 50 percent of sampled users who do not use BARD said they lacked the computer skills to do so. Similarly, staff in all 8 network libraries we contacted said the process of downloading files from BARD onto a computer, and then transferring them to a cartridge that can be played on an audio device, is challenging for some users. For example, staff in one library said users have difficulty figuring out which folder to save downloaded files into on their computers. Recognizing this, NLS officials told us they expect in summer 2016 to introduce a new software application known as Media Manager intended to simplify the process of downloading from BARD onto a computer by handling a number of the steps automatically. Meanwhile, the much lower use of braille compared to audio among NLS’s customer base may, in part, reflect the rate of braille use among blind people in the United States overall as well as characteristics of NLS users. The precise number of blind and visually impaired people who use braille in the United States is not known, according to a study on braille by LOC’s Federal Research Division, as well as officials from two national organizations that produce braille materials and an assistive technology company we contacted. However, according to the LOC study, several estimates suggest that the proportion of blind and visually impaired Americans who use braille may be about 10 percent. According to a research and advocacy group for the blind and an organization that produces braille materials, braille use declined after many blind students were moved from specialized schools for the blind, which are more likely to teach braille, into public schools. 
Another factor that has impeded the wider use of braille, according to an organization that provides braille materials and an assistive technology company we contacted, has been the high cost of refreshable braille devices, which sell for $1,000 to $2,000 at a minimum. Beyond reflecting braille use in the wider population, NLS users’ low use of braille may also reflect the specific demographics of the NLS population. Individuals who lose their vision later in life may be less likely to learn braille than those who were blind at an early age, according to staff from one library we contacted and a 2012 NLS report. NLS’s Efforts to Adopt New Technologies Are Hampered by Limitations in Its Statutory Authority and Analyses of Alternative Approaches NLS is considering whether to adopt several new technologies for delivering braille and audio content to its users which have the potential to improve services and reduce costs. However, in one case—providing refreshable braille devices to its users—NLS’s efforts are hampered by limitations in its authorizing statute, among other factors. In two other cases—developing an audio player with Internet connectivity and adding synthetic speech materials to its audio collection—the agency has not taken steps to assess the potential cost savings resulting from alternative approaches. Refreshable Braille Devices Promoting braille is one of the broad goals included in NLS’s draft strategic plan for 2016 to 2020, and the agency believes providing braille electronically will help achieve that goal. According to a 2012 NLS report, braille is the literacy medium for those who are blind and visually impaired, as unlike audio, it is a direct corollary to print and displays features of print, such as capitalization and punctuation. This view is consistent with those of several other organizations we contacted, including a research and advocacy organization serving people who are blind. 
There is also some evidence suggesting that blind people have better employment outcomes if they use braille. NLS officials told us they believe that the ability to loan refreshable braille devices could attract more users to NLS. The agency has cited several advantages of this technology compared to hard copy braille, including that it is less bulky to store and transport and can be delivered more quickly to users. (See fig. 7 for images of a 13-volume hard copy braille book in NLS’s collection and an example of a refreshable braille device.) However, NLS is currently unable to provide refreshable braille devices to its users due to statutory language that limits its use of appropriated funds. Since the 1930s, the statute has authorized NLS to use appropriated funds to provide braille materials to its users. However, the statute does not allow NLS to use such funds to provide users with devices for reading electronic braille files. Although the statute did not originally allow NLS to provide users with any playback equipment, in the 1940s it was amended to allow NLS to provide devices for playing audio materials. In 2015, the LOC submitted a request to the Committee on House Administration and the Senate Committee on Rules and Administration to amend the law to allow it to use appropriated funds to provide playback equipment for formats in addition to audio recordings, including refreshable braille devices. In November 2015, legislation was introduced in the House of Representatives that would amend the law to allow NLS to use appropriated funds to purchase and provide to its users playback equipment for braille materials, among other things.

The current cost of refreshable braille devices makes them cost-prohibitive for NLS; however, emerging technology may soon change that. As previously noted, several sources indicate that the current cost for these devices is about $1,000 to $2,000 at a minimum.
According to one study we reviewed, the current technology used in these devices is effective, but it is also expensive to produce, in part because it relies heavily on manual assembly. However, efforts are underway to develop new refreshable braille technology that could significantly reduce the cost of these devices. For example, a consortium of organizations has supported research on refreshable braille technology and, according to one organization that has been involved in the effort, plans to unveil a prototype device in 2016 that could cost as little as $300.

NLS hired a consultant to examine the potential costs and benefits associated with providing braille through lower-cost refreshable braille devices rather than hard copy. The resulting report, delivered in July 2015, found that the total annual cost of NLS’s current approach—including the costs for NLS to produce hard copy braille documents, for network libraries to store them, and for USPS to deliver them—is about $17 million. It found that if the cost of refreshable braille devices were to come down to about $400, then the total annual cost of an alternate approach in which NLS loans these devices to its users, and hard copy braille is largely replaced by electronic braille, could be about $7 million—a savings of almost $10 million per year compared to the current approach.

According to standards for internal control in the federal government, agencies should identify, analyze, and respond to changes that may create the risk of not successfully fulfilling their missions, including changes in the technological environment. As long as its statute does not allow NLS to use appropriated funds to provide refreshable braille devices, NLS will not be able to take advantage of technological advances that could potentially help it fulfill its mission more cost efficiently.
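The consultant’s bottom line reduces to simple arithmetic; the sketch below restates it using only the dollar figures reported above (the $400 device price is the report’s assumed future cost, not today’s $1,000 to $2,000 market price):

```python
# Restatement of the consultant's July 2015 cost comparison.
# All dollar figures come from the report text; the alternative
# assumes device prices fall to about $400 each.
current_annual_cost = 17_000_000      # produce, store, and mail hard copy braille
alternative_annual_cost = 7_000_000   # loan low-cost refreshable devices instead

annual_savings = current_annual_cost - alternative_annual_cost
print(f"Estimated annual savings: ${annual_savings:,}")
```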
Audio Player with Wireless Connectivity

NLS is in the preliminary stages of developing an audio player with wireless connectivity that could download audio directly from BARD, an approach that it believes would improve services for users and potentially reduce overall costs to the federal government. NLS officials said users would benefit from a device capable of downloading audio materials directly from the Internet because they would receive content faster than receiving digital cartridges through the mail. As noted above, there are obstacles to the wider use of BARD among NLS’s customer base, but an NLS-provided audio player with wireless connectivity could mitigate some of these issues. Specifically, such a device would eliminate the multi-step process now required to download BARD files to a computer and then transfer them onto NLS’s audio player. Staff we spoke with in 5 of the 8 network libraries commented that downloading audio files directly to an NLS player would be simpler than the current process. In addition, NLS is considering how it might address another obstacle—lack of Internet access—by providing not just the audio player but also the required Internet connectivity.

At the time of our review, the goal of providing users with a device capable of connecting directly to the Internet was included in NLS’s draft strategic plan, and NLS officials said they were in the process of hiring a business analyst and project manager to more fully assess the business case for moving forward with this effort. LOC officials told us they expect NLS to submit a proposal for this initiative to LOC’s Information Technology Steering Committee during fiscal year 2016. As it considers moving forward with this effort, NLS is leaning toward designing its own next-generation specialized player, but it has not fully assessed the costs and benefits of designing its own player versus using a commercially available player.
NLS officials said that, in their experience, the existing commercially available players lack the durability needed for NLS’s purposes, may not be suitable for users with physical disabilities, and are expensive. Libraries for those with visual impairments in some other countries, meanwhile, have found that commercially available audio players can meet their users’ needs. For example, the CNIB Library, which provides free reading materials to those with visual and other disabilities in Canada, does not provide its own specialized device to users but instead helps them acquire commercially available devices when they cannot afford to do so. CNIB officials said they chose this approach because it was less expensive than developing their own player, and also commented that it offers users a range of choices to meet their needs. Some libraries for the blind in Europe and Asia also purchase commercially available audio players for library users, according to two assistive technology companies we contacted. NLS officials told us they have not ruled out using a commercially available device as their next generation player, and while they have not yet analyzed this option, they plan to explore it further through requests for information and market research.

We have previously found it is important for agencies to thoroughly analyze alternatives, including their relative costs and benefits, so they consistently and reliably select the project alternatives that best meet mission needs. In a 2007 report, we found that when NLS developed its current digital audio player, it did not sufficiently consider the option of acquiring a commercially available device designed specifically for those who are blind or have physical disabilities, and we recommended that NLS develop and document analyses of alternatives including commercial products. At that time, NLS did not act on our recommendation and take steps to consider commercial products.
We continue to believe that without such an assessment, NLS runs the risk of not choosing the most cost-effective approach for providing its next generation of audio players.

Although NLS has relied exclusively on human narration to provide audio materials, text-to-speech—i.e., synthetic, computer-generated speech—may be acceptable to many NLS users, according to interviews and survey data. According to several organizations we contacted that serve those with visual impairments and two studies we reviewed, the sound quality of text-to-speech has improved over time. For example, one study found that while not quite equivalent to natural human speech, state-of-the-art text-to-speech is becoming more natural-sounding, with appropriate phrasing and pacing. In addition, evidence suggests that many NLS users may be willing to listen to text-to-speech materials. According to NLS’s 2013 user survey, almost 80 percent of sampled NLS users were willing to listen to text-to-speech audio materials. While staff at 4 of 8 network libraries we contacted said NLS users prefer human narration, staff in all 8 libraries said using text-to-speech is a viable option for certain types of NLS reading materials. In Canada, the CNIB uses text-to-speech for the front and back matter of the books it produces, and expects to incorporate more text-to-speech into its collection in the future. The CNIB website also has a link to a nonprofit organization that provides audio books primarily in text-to-speech format to those with visual and other disabilities, helping its users gain access to a collection of over 250,000 audio books. In addition, one assistive technology company told us that libraries for the blind in Europe regularly use text-to-speech for newspapers and magazines, and they often use it initially for best-selling novels so they can provide these quickly to their users.
NLS officials are considering whether to supplement NLS’s audio collection with text-to-speech materials, but they have not assessed the costs and benefits of doing so, nor have they included moving forward with text-to-speech content as an objective in the agency’s draft strategic plan. NLS officials told us they might in the future use text-to-speech for certain types of reading materials for which human narration is less critical, such as reference materials, cookbooks, bibliographies, and endnotes. They said an advantage of text-to-speech materials is that they can be produced more quickly than human-narrated materials: Officials said it takes 3 to 4 months to record a book with human narration. Also, it may be less expensive to produce text-to-speech materials. Officials said it costs, on average, about $3,600 to record a book with a human narrator, and in fiscal year 2014 the agency spent $10.5 million on such recording. In contrast, they said it costs $75 to convert an audio book provided by a commercial publisher to NLS’s format, and they estimated that producing text-to-speech books might cost about the same. However, although NLS officials said they have done some preliminary experimentation to understand the high-level challenges of producing text-to-speech materials, and have hired a contractor to develop software for converting digital text files to text-to-speech files that meet NLS’s specifications, they have not made a decision about whether to move forward with text-to-speech. Furthermore, they have not yet comprehensively assessed the option of incorporating text-to-speech compared to relying solely on human narration, an assessment called for by best practices we previously identified for alternatives analysis. Thus, NLS lacks information about an initiative that has the potential to deliver content more quickly and cost effectively. 
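The per-book figures NLS officials cited suggest a sizable cost gap; a back-of-the-envelope sketch follows (the implied title count is our inference from the reported totals, not an NLS statistic, and assumes the $3,600 average applied uniformly):

```python
# Back-of-the-envelope comparison of per-book audio production costs,
# using the figures NLS officials cited. The implied title count is
# an inference from the reported totals, not an NLS figure.
human_narration_cost_per_book = 3_600      # average cost per human-narrated book
fy2014_narration_spending = 10_500_000     # FY2014 spending on narration
conversion_cost_per_book = 75              # per-book conversion; TTS estimated similar

implied_books = fy2014_narration_spending // human_narration_cost_per_book
tts_cost_at_same_volume = implied_books * conversion_cost_per_book
print(f"Implied titles narrated in FY2014: ~{implied_books:,}")
print(f"Cost for the same volume at $75 per book: ~${tts_cost_at_same_volume:,}")
```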
Conclusions

The NLS program provides accessible reading materials to those who cannot read standard print due to visual, physical, and other disabilities. Eighty-five years after the program was established, NLS is providing an important service to many older and visually disabled adults, but it is also missing opportunities to meet the needs of all groups eligible for services. For example, the regulatory requirement that a medical doctor must certify eligibility for individuals with reading disabilities treats this group differently than other populations and creates an obstacle to receiving services. Likely because this requirement has remained largely unchanged for the past 40 years, it is inconsistent with currently accepted practices. Additionally, while NLS’s new outreach efforts have the potential to enhance awareness of its services among some eligible groups, NLS’s failure to evaluate these efforts means officials are unable to target funds to those efforts determined to be the most cost-effective, or make adjustments to those that are less effective.

Looking ahead, NLS is considering emerging technologies to meet user needs. Yet there are factors both beyond and within NLS’s control that may prevent the adoption of potentially cost-saving alternatives. For example, without a change in federal law, NLS will have to forego the opportunity to provide braille in a more modern and potentially cost-effective manner by distributing refreshable braille devices to its users. Further, in the area of audio materials, NLS lacks the information it needs to make informed choices about whether and how to proceed with adopting certain new technologies. For example, if NLS continues its plan to design a specialized audio player that connects to the Internet, without assessing the alternative of instead providing commercially available devices to its users, the agency may potentially invest in a less cost-effective option.
Similarly, absent a comprehensive comparison of adding text-to-speech materials to its audio collection versus continuing to rely only on human narration, NLS may not make an informed decision about whether to move forward with a technology that has the potential to decrease the time and costs of providing new materials to users.

Matter for Congressional Consideration

To give NLS the opportunity to provide braille in a modernized format and potentially achieve cost savings, Congress should consider amending the law to allow the agency to use federal funds to provide its users playback equipment for electronic braille files (i.e., refreshable braille devices).

Recommendations for Executive Action

1. To ensure that it provides all eligible populations access to its services and that its eligibility requirements are consistent with currently accepted practices, the Library of Congress should re-examine and potentially revise its requirement that medical doctors must certify eligibility for the NLS program for those with a reading disability caused by organic dysfunction.

2. To ensure funds are directed to the most cost-effective outreach efforts, NLS should evaluate the effectiveness of its outreach efforts, including the extent to which different outreach efforts have resulted in new users.

3. To help it determine the most cost-effective approach for its next audio player, NLS should comprehensively assess the alternatives of designing its own specialized audio player versus providing commercially available players to its users.

4. To help it determine whether to supplement its collection of human-narrated audio materials with text-to-speech materials, NLS should thoroughly assess the text-to-speech option versus continuing to provide only human-narrated materials.

Agency Comments and Our Evaluation

We provided a draft of this report to LOC for its review and comment, and also provided relevant excerpts to USPS.
In its written comments, included in our report as appendix I, LOC generally agreed with our recommendations and noted steps it plans to take to address them. For example, LOC agreed to reexamine, and potentially revise, its requirement that only medical doctors may certify NLS eligibility for people with reading disabilities, so as to authorize other qualified persons to make such certifications. NLS has not predicted the increase in its users that may result from such a change, but it is exploring enhancements to its technological infrastructure that would support the increased demand for services that may result. With regard to our recommendation to evaluate its outreach efforts, LOC said it will look into implementing a new process for collecting data from network libraries on how NLS program users were referred to the program, as well as other ways of measuring the efficacy of various outreach approaches. Regarding our recommendations related to exploring new technologies, LOC indicated that NLS will thoroughly study various alternatives as it begins the process of developing the next generation of audio players, including the advantages and disadvantages of designing an NLS-specific player compared to using a commercially available player. LOC also indicated that NLS is exploring the use of text-to-speech technology as a way to expand its offerings, and NLS will introduce this technology through a pilot program and solicit feedback from users and network libraries to assess their acceptance of this approach. LOC and USPS also provided technical comments, which we incorporated as appropriate.

We are sending copies of this report to appropriate congressional committees, the Librarian of Congress, the Director of NLS, and other interested parties. In addition, this report will also be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at 202-512-7215 or bertonid@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix II.

Appendix I: Comments from the Library of Congress

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, individuals who contributed to this report include Rachel Frisk, Assistant Director; Lorin Obler, Analyst-in-Charge; Nora Boretti, Leia Dickerson, Holly Dye, Alexander Galuten, Melissa Jaynes, Tammi Kalugdan, Bob Kenyon, Kaelin Kuhn, Dainia Lawes, Sheila McCoy, Almeta Spencer, and Walter Vance.

Related GAO Products

Library of Congress: Strong Leadership Needed to Address Serious Information Technology Management Weaknesses. GAO-15-315. Washington, D.C.: March 31, 2015.

High Risk Series: An Update. GAO-15-290. Washington, D.C.: Feb. 11, 2015.

DOD and NNSA Project Management: Analysis of Alternatives Could Be Improved by Incorporating Best Practices. GAO-15-37. Washington, D.C.: Dec. 11, 2014.

Social Security Disability: Additional Outreach and Collaboration on Sharing Medical Records Would Improve Wounded Warriors’ Access to Benefits. GAO-09-762. Washington, D.C.: Sept. 16, 2009.

Talking Books for the Blind. GAO-07-871R. Washington, D.C.: June 12, 2007.
NLS, within the Library of Congress (LOC), provides free audio and braille materials for U.S. citizens and residents who cannot read standard print due to visual and other disabilities. In fiscal year 2016, the NLS program received about $50 million in federal funds to provide these materials through a national network of libraries. The House report accompanying the fiscal year 2016 legislative branch appropriations bill included a provision for GAO to review NLS’s users and the technology it employs to meet their needs.

GAO examined (1) the characteristics of NLS users and the steps NLS is taking to ensure eligible individuals’ access and awareness, and (2) how NLS provides materials and the extent to which it is considering emerging trends in technology. GAO reviewed relevant federal laws and regulations, NLS documents, and administrative data; interviewed NLS officials, librarians from 8 of the 101 network libraries selected for geographic diversity and a range in the number of users, and officials from research and advocacy groups and assistive technology companies; and reviewed literature on NLS-eligible populations and trends in assistive technologies.

The National Library Service for the Blind and Physically Handicapped (NLS) is primarily used by older adults with visual disabilities, and NLS has taken some steps to ensure eligible users’ access to and awareness of available services. In fiscal year 2014, about 70 percent of the program’s 430,000 users were age 60 and older and almost 85 percent had visual disabilities, according to the most recent NLS data available at the time of GAO’s review. Federal regulations establish eligibility for NLS services for people with a range of disabilities. However, medical doctors must certify eligibility for people with reading disabilities such as dyslexia, which is not required for those with visual or physical disabilities.
According to officials from network libraries and other stakeholder groups, the requirement for a doctor’s certification is an obstacle to accessing services because of additional steps and costs to the individual. These officials and stakeholders said other professionals, such as special education teachers, are also positioned to certify eligibility for applicants with reading disabilities. GAO has previously noted the importance of disability programs keeping pace with scientific and medical advances. However, the certification requirement has remained largely unchanged for more than 40 years. NLS has taken steps to inform eligible groups about its services, such as partnering with other organizations that serve these groups, developing a new website, and distributing an outreach toolkit to network libraries. However, NLS has no plans to evaluate which outreach efforts have resulted in new users in order to ensure resources are used effectively—a key practice identified previously by GAO.

NLS offers materials to its users in a range of formats, but its efforts to adopt new, potentially cost-saving technologies are hampered by limitations in both its statutory authority and its analyses of alternatives. Users may choose to receive, through the mail, audio materials on digital cartridges or hard copy braille documents. Users may also choose to download audio and braille files from an NLS-supported website. During fiscal year 2014, 86 percent of users chose to receive audio materials on digital cartridges, according to NLS data. NLS officials said they would like to provide users with devices for reading electronic braille files, an approach that is faster and less bulky than hard copy braille documents and that, per the agency’s July 2015 analysis, could become more cost-effective with technological advances.
However, federal statute does not authorize NLS to use program funds to acquire and provide braille devices as it does for audio devices, which prevents the agency from taking advantage of technology that has the potential to reduce costs. NLS is also examining new technologies for audio materials but has not fully assessed available alternatives. For example, NLS is considering supplementing its collection of human-narrated audio materials with text-to-speech (i.e., synthetic speech) materials, which some evidence suggests could be produced more quickly and at a lower cost. However, NLS has not comprehensively compared the text-to-speech option to its current approach in order to make a decision on whether to move forward, as called for by GAO best practices for alternatives analysis. Without this analysis, NLS may miss an opportunity to meet its users' needs more efficiently and cost effectively.
Background

IRS founded PRP in 1976 to provide an independent means of helping taxpayers solve problems that they encountered in dealing with IRS. Initially, PRP units were established in IRS district offices. In 1979, IRS expanded PRP to its service centers and created the position of Taxpayer Ombudsman to head PRP. The Ombudsman was appointed by and reported to the Commissioner of Internal Revenue. Congress has since renamed the Ombudsman the “National Taxpayer Advocate” and shifted appointment authority to the Secretary of the Treasury. However, the Advocate continues to report to the Commissioner. To promote the independence of the Advocate, Congress required that the individual not be an IRS employee for 2 years preceding his or her appointment and that the individual not accept a position elsewhere in IRS for 5 years following his or her tenure as the Advocate. The current Advocate was appointed in August 1998, in accordance with these provisions. Additionally, to enhance independence, the Advocate is required to submit annual reports directly to Congress on the objectives and activities of the Advocate’s Office. These reports are to be developed by the Advocate’s Office and are to be submitted directly to Congress without any prior review or comment from the Commissioner of Internal Revenue, the Secretary of the Treasury, or any other officer or employee of the Department of the Treasury.

The Advocate manages an organization (i.e., the Advocate’s Office) that has advocates in each of IRS’ 4 regions, 33 districts, 30 former districts, and 10 service centers and in the Executive Office for Service Center Operations (EOSCO) in Cincinnati, OH, and the Office of the Assistant Commissioner (International) in Washington, D.C.
The regional and EOSCO advocates are responsible for providing program oversight and support to advocates in the district, former district, and service center offices (hereafter referred to as “local advocates”), who manage PRP operations at the local level. This program oversight and support is to include reviewing PRP casework, ensuring the training of PRP staff and staff in the Advocate’s Office, dealing with sensitive individual cases, pursuing advocacy initiatives, and handling potential hardship cases. Formerly, the regional and local advocates were selected by and reported to the director of the regional office, district office, or service center where they worked. However, the IRS Restructuring and Reform Act of 1998 changed that relationship. Now, regional advocates are to be selected by and report to the National Taxpayer Advocate, and local advocates are to be selected by and report to regional advocates. At the time we completed our review in May 1999, most PRP casework was being done by district office and service center staff, called caseworkers, who were employees of IRS’ operating functions, such as Customer Service, Collection, and Examination. In addition, each function doing PRP work had a coordinator to ensure that all PRP cases within the function were assigned to caseworkers and placed under PRP control and to provide coordination between the function and the local advocates. PRP caseworkers and coordinators were funded by the operating functions, not the Advocate’s Office. Also, each district office and service center had its own PRP structure, which reflected differences in office size and composition and in the way PRP caseworkers were managed. In the past, PRP caseworkers were generally assigned to functional units and reported to functional managers. In recent years, some local offices centralized their PRP casework in PRP units staffed by functional employees who reported directly to the local advocate. 
Objectives, Scope, and Methodology

Our objectives were to (1) identify challenges the Advocate faces in managing program resources, (2) identify potential effects of workload fluctuations on program operations, (3) determine what information was available for advocacy efforts, and (4) assess the adequacy of performance measures IRS used to gauge program effectiveness. For all of our objectives, we interviewed Advocate Office officials at IRS’ National Office, all 4 IRS regional offices, and EOSCO, and we interviewed local advocates, PRP coordinators, and caseworkers in 17 of IRS’ 73 local offices—including 9 district offices, 4 former district offices, and 4 service centers. For the first objective, we also surveyed Advocate Office and PRP staff, including PRP caseworkers, at IRS locations where Advocate’s Office and PRP work was being done. Additionally, for the third and fourth objectives, we reviewed program documents on the Advocate’s Office and PRP, including guidance on advocacy efforts, and program management information, including goals and measures for the program. For a more detailed account of our scope and methodology, including IRS offices visited and limitations of our surveys, see appendix I. We did our work from December 1997 to May 1999 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Commissioner of Internal Revenue. His written comments are discussed near the end of this report and are reprinted in appendix VII.

Resource Management Issues Present Challenges for the Advocate

We identified several resource management issues within the Advocate’s Office and PRP that could affect how efficiently and effectively taxpayers are helped by PRP.
Specifically, (1) the Advocate’s Office did not know how many staff were working in PRP or the costs associated with that staffing; (2) the Advocate’s Office had to rely on resources provided by IRS’ operating functions, such as Customer Service, Collections, and Examination; (3) some Advocate Office and PRP staff reported that they lacked training that Advocate Office officials considered necessary; and (4) the Advocate’s Office faced obstacles, such as limited advancement opportunities, in acquiring and keeping qualified staff. Addressing such issues presented a particular challenge to the Advocate’s Office because it has not had direct control over most PRP resources. IRS has begun to address some of the resource management issues we identified. Other changes to the Advocate’s Office and PRP are being considered in conjunction with a major effort that IRS has begun to substantively revise its organizational structure. However, it is too soon to tell how these changes will affect program operations.

The Advocate’s Office Lacked Information on Functional PRP Staffing

The Advocate’s Office did not know the total number, time, and thus, cost of staff devoted to PRP because IRS did not have a standard system to track functional staff doing PRP work. The absence of this basic staffing information yields an incomplete picture of program operations, places limitations on decisionmaking, and hinders the identification of matters requiring management attention. For example, without complete staffing information, IRS does not know the total cost of the program; and it cannot project the potential cost, for planning purposes, of any prospective changes or enhancements to the program. IRS’ operating functions (e.g., Customer Service, Collection, and Examination), which provided most of the staff working in PRP, had systems to track the amount of time employees devoted to PRP, but each function tracked time spent differently.
The procedures varied, from having employees charge all time spent to resolve a case to PRP, to having them charge only a minimal amount of time to PRP. Because of the situation just described, IRS was unable to tell us how many functional staff were working on PRP activities and how much time those staff were devoting to PRP work. To get information on the number of functional staff, with the help of the Advocate’s Office, we sent out surveys that solicited staffing information from all locations where Advocate Office and PRP work was being done. Summaries of the responses to our surveys are presented in appendixes II and III. Our surveys showed that on June 1, 1998, there were 508 staff working in the Advocate’s Office and another 1,532 functional employees doing PRP casework. Selected survey results for those 1,532 PRP caseworkers are presented in appendix IV. As shown in that appendix, the 1,532 PRP caseworkers included 726 district office employees and 806 service center employees.

Reliance on Functional Resources Contributed to Staffing Issues

The Advocate’s Office had to rely on other management officials within IRS to provide most of the resources—including staff, space, and equipment—needed to do PRP casework. Because there was no direct funding for PRP, however, functional managers had to carve out resources for PRP from their operating budgets. This arrangement required district and service center directors to shift employees and other resources into or out of PRP as workload demands changed. When functional needs conflicted with PRP needs, there were no assurances that PRP needs would be met. This arrangement also meant that functional managers, not local advocates, determined which employees would do PRP casework—leaving the local advocates with little, if any, control over the quality of the caseworkers.
Local advocates told us that good communication and working relationships with other managers within IRS were imperative to receive the support needed to meet PRP goals. Another result of the relationship just described is that local advocates were not responsible for preparing official performance evaluations for most PRP caseworkers. In that regard, our surveys showed that about 80 percent of the PRP caseworkers reported to and were evaluated by functional management. Such a situation could have affected the ability of caseworkers to resolve taxpayers’ problems impartially, because those problems could involve disputes between the taxpayer and the function responsible for evaluating the caseworker. Also, this situation could have led to a perception that PRP was not an independent program. Independence—actual and apparent—is important because, among other things, it helps promote taxpayer confidence in PRP. Because the Advocate’s Office did not have direct control over functional employees, PRP caseworkers could sometimes be pulled from PRP duties to do other work. For example, IRS officials said that PRP caseworkers were often required to help the customer service function answer taxpayer telephone calls during the filing season. While caseworkers were helping customer service, they were still responsible for their PRP work. Since PRP did not have a direct budget, IRS officials said that it was easy for PRP “to fall through the cracks” in terms of getting other resources, such as equipment and space, for PRP work. When PRP caseworkers told us of their equipment needs, they generally mentioned computers for word processing, ready access to Integrated Data Retrieval System (IDRS) terminals, telephone lines and voice mail, fax machines, and basic office supplies. Given the nature of PRP work, caseworker access to IDRS and the availability of such things as word processing equipment, voice mail, and telephone lines would seem essential.
For example, we observed that caseworkers often prepared handwritten correspondence to taxpayers that was not as professional looking as correspondence prepared on word-processing equipment. Some Advocate Office and PRP Staff Had Not Received Training Considered Necessary According to our survey, some Advocate Office staff and PRP caseworkers had not received training that Advocate Office officials considered necessary. Our surveys of IRS staff who were doing Advocate Office work as of June 1, 1998, indicated the following: Only 35 percent of the staff in the Advocate’s Office had attended an Advocate’s Office training course for their current position. These positions included advocate, analyst, and PRP specialist. Fewer than half of the caseworkers had attended a PRP caseworker training course. Almost 78 percent of the caseworkers had received PRP quality standards training. This training is designed to ensure that caseworkers are aware of the standards for acceptable PRP casework. Although our survey indicated that caseworkers often had not completed a formal PRP caseworker training course, the survey also indicated that about 85 percent of the caseworkers had received on-the-job training. While on-the-job training can be an effective means of teaching, it opens up the possibility of inconsistencies in the way the program operates. Advocate Office officials said that, in addition to being trained in PRP matters, caseworkers should continue to receive training in functional matters. Functional training, such as training in tax law changes, is important because resolving taxpayer problems requires that caseworkers understand the tax laws affecting a particular case. Our surveys indicated, however, that about half of the PRP caseworkers had not received such functional training. Both Advocate Office and PRP staff who we talked to said that there was no established training schedule, so they did not always know what training was being offered.
When they did hear about training, it was sometimes too late to sign up. Some staff said that it had been several years since they had received any formal PRP training. Both Advocate Office staff and PRP caseworkers told us that, in addition to the basic training needed to do PRP work, they would like training in other areas, especially tax law changes. Also, many caseworkers wanted cross-functional training so that they could work cases across functional lines. IRS officials said that cross-functional training would broaden caseworker skills and might lead to faster and more accurate service to taxpayers. Additionally, they said that by broadening caseworkers’ skills and thus expanding the types of cases that they could work, cross-functional training could help the Advocate’s Office manage workload fluctuations. Obstacles Existed to Acquiring and Keeping Qualified Staff Obstacles existed that could adversely affect the ability of the Advocate’s Office to acquire and keep qualified staff. Those obstacles included (1) the absence of standard position descriptions for PRP caseworkers that could be used to help ensure that qualified staff were assigned to PRP work and (2) limited opportunities for advancement within the Advocate’s Office and PRP. Obstacle to Acquiring Qualified Staff There were no standard position descriptions for PRP caseworkers. Instead, PRP caseworkers worked under the position descriptions for employees in their functional organizations. This situation permitted functional managers to fill PRP caseworker positions through a noncompetitive process. IRS officials said that management usually asked for volunteers to work in PRP; if no volunteers came forward, management usually assigned staff to PRP, based on reverse seniority (i.e., staff with the least seniority would be assigned to PRP if not enough volunteers were forthcoming). Without competition, less qualified staff could be assigned to PRP.
IRS officials said that if there were standard PRP caseworker position descriptions, staff from the operating functions would have to meet a set of qualifications and then compete for caseworker positions. As in any organization, when staff compete for a position, management is afforded an opportunity to select from among the best-qualified staff for the duties prescribed for that position. Obstacle to Keeping Staff IRS officials said that the grade structure and size of the Advocate’s Office and PRP limited opportunities for staff to advance within these organizations. There were gaps in the Advocate’s Office and PRP grade structure. These gaps meant that, at some point, staff who wanted to advance their careers would have to leave the Advocate’s Office or PRP to get a promotion elsewhere in IRS—generally in an operating function. In addition, the small size of the Advocate’s Office and PRP (in terms of number of positions) meant that there were fewer promotions available than in the larger operating functions. Advocate Office and PRP staff who we talked to had mixed views on whether they would have promotion opportunities within the functions. Because the Advocate’s Office was “off line” from the operating functions, many of the Advocate Office staff said that they would not have the background necessary to compete for a promotion in an operating function. Instead, they said that the only way for them to leave the Advocate’s Office would be through a lateral transfer. Many caseworkers told us, however, that the knowledge and skills they acquired in PRP could potentially enhance their opportunities in their functional organizations. IRS Has Begun to Address the Resource Management Issues Facing the Advocate’s Office IRS has taken some actions and has others planned that are related to the resource management issues previously discussed; however, it is too soon to tell if these actions will fully address these issues.
In commenting on a draft of this report, IRS said that, beginning in October 1999, it will have the ability to track Advocate Office resources accurately. Resource management changes include realigning the PRP staffing so that all caseworkers report to local advocates, not functional management. This should give the Advocate’s Office more control over PRP resources. In October 1998, IRS announced that those caseworkers who were already reporting to local advocates—about 20 percent—would officially be part of the Advocate’s Office. In addition, at the time we completed our audit work, IRS was developing an implementation plan to have the remaining 80 percent of the caseworker positions assigned to local advocate offices during fiscal year 1999. To complement the shift to direct reporting of caseworkers and further strengthen the independence of the Advocate’s Office, IRS established for fiscal year 1999 a separate, centralized financial structure for managing all Advocate Office resources. This structure covers the resources allocated to the Advocate’s Office and includes PRP caseworkers who have been transferred from the functions to the Advocate’s Office. Having a separate, centralized structure gives the Advocate’s Office control over its resources and should prevent Advocate Office funds from being redirected to other IRS programs. As of May 1999, IRS was also developing and updating the training for Advocate Office staff and PRP caseworkers. The training is to reflect the new operating structure and procedures for the Advocate’s Office as part of the agencywide redesign effort. Training needs for all Advocate Office and PRP staff are to be identified by the end of fiscal year 1999.
Other actions could make it easier for the Advocate’s Office and PRP to acquire and keep qualified staff. In that regard, IRS has developed position descriptions for all staff working in the Advocate’s Office and PRP. The Advocate said that all positions within the Advocate’s Office and PRP will be filled competitively using the new position descriptions. This includes having all existing Advocate Office and PRP staff reapply for their current positions. IRS has reevaluated the PRP caseworker duties and, in many cases, the new caseworker positions are higher-graded than the current caseworker positions. According to the Advocate, the competition for the positions should help ensure that the best staff are selected for the Advocate’s Office and PRP. Additionally, in an effort to attract and keep qualified staff, the IRS Restructuring and Reform Act of 1998 required that the Advocate develop a career path for local advocates who choose to make a career in the Advocate’s Office. In response to this requirement, the Advocate, at the time we completed our audit work, had a plan that would not only provide a career path within the Advocate’s Office and PRP, but would also enable Advocate Office and PRP staff to compete for jobs in the operating functions. Handling Workload Fluctuations Poses a Challenge as the Advocate’s Office Restructures According to local advocates, dealing with workload fluctuations—especially increased workloads—poses a challenge for them as Advocate’s Office and PRP operations are restructured. (See app. V for factors that have increased and could increase PRP’s workload.) IRS uses “cases closed” as its indicator of PRP workload. As figure 1 shows, the number of PRP cases closed in fiscal years 1993 through 1998 varied from year to year. Historically, because PRP was staffed by the functions, the Advocate’s Office relied on the functions to provide additional staff to cover workload increases.
However, as discussed previously, this reporting structure could have led to the perception that PRP lacked independence. In an attempt to alleviate this possible perception, and as part of the restructuring effort, all caseworkers are to be placed in the Advocate’s Office. Workload Increases Could Affect PRP Services Workload increases may make it necessary for the Advocate’s Office to decide which cases to address with PRP resources. PRP was designed to help taxpayers who were unable to get their problems resolved elsewhere in IRS. However, the Advocate told us that he was committed to helping any taxpayer who contacts the office. We understand why the Advocate’s Office might not want to turn away any taxpayers seeking help. However, if the Advocate’s Office accepts cases that could be handled elsewhere in IRS, PRP could be overburdened, potentially reducing its ability to help taxpayers who have nowhere else to go to resolve their problems. As one local advocate said, “if everything is a priority, then nothing is a priority.” Overburdening PRP could also result in less staff time available for the Advocate’s Office to devote to advocacy. We discuss that issue in more detail in the next section. In its plan to redesign the Advocate’s Office, IRS has acknowledged the need to handle workload fluctuations. In the event of workload increases, the Advocate’s Office needs to be able to decrease the number of cases entering the program, increase the number of staff working on cases, or do some combination of both. The Advocate said that for workload increases, IRS plans to detail additional staff from the functions, as necessary.
In his written comments on a draft of this report, the Commissioner of Internal Revenue stated that IRS had recently modified the PRP criteria to create a “taxpayer focused balance between cases handled by the Taxpayer Advocate and other functions.” We did not have time to evaluate the potential impact that the modified criteria might have on PRP workload. (See app. V for a list of the PRP criteria.) Better Information Could Help Ensure the Best Use of Resources for Advocacy Efforts Through advocacy, the Advocate’s Office is to identify the underlying causes of recurring taxpayer problems and recommend changes in the tax law or in IRS’ systems or procedures. Advocacy is key to the success of the Advocate’s Office because the improvements it generates could reduce the number of taxpayers who ultimately require help from PRP. However, the advocates we talked with indicated that the demands on Advocate Office staff and PRP caseworkers to resolve individual taxpayers’ problems left little time to spend on advocacy. In addition, because of limitations in the kind of information available to and compiled by the Advocate’s Office, there was little assurance that the time being spent on advocacy was being used most effectively. As of May 1999, the Advocate was considering various ideas for improving the advocacy process. Time Spent Working on Individual Taxpayer Problems Has Limited the Time Available for Advocacy As discussed more fully below, the advocacy process involves all levels of the Advocate’s Office, from the Advocate to local caseworkers. Nevertheless, most of the advocates we talked with said that the need to work on individual taxpayer problems limited the amount of time that advocate staff and PRP caseworkers could spend on advocacy. 
In that regard, our surveys indicated that, as of June 1, 1998, advocates and their staffs were spending about 60 percent of their time on problems of individual taxpayers and about 10 percent of their time on advocacy. PRP caseworkers were spending almost all of their PRP time on problems of individual taxpayers. The Advocacy Process The advocacy process within the Advocate’s Office involves all levels of the organization, from the Advocate to local caseworkers. The Advocate’s Office is responsible for (1) assisting, supporting, and guiding advocacy efforts at all levels; (2) identifying issues with nationwide implications and assigning responsibility for conducting research on these issues to regional advocates; and (3) compiling information on the status of ongoing and completed advocacy projects. Advocacy projects are intended to develop recommendations for improving IRS processes and procedures that can be forwarded to the function responsible for the processes or procedures. Advocate staff are to monitor the implementation of the recommendation and, in instances in which no action is taken by the function, the Advocate can compel the function to implement the recommendation by issuing a Taxpayer Advocate Directive. The Advocate was delegated this authority in March 1998, and on December 7, 1998, he issued the first directive requiring that IRS operations abate penalties on some “innocent spouse” cases. As of April 1999, there had been no other directives issued. Each region and EOSCO has an advocacy council comprised of Advocate Office staff and functional executives and staff. These councils are responsible for promoting advocacy and serve as clearinghouses for advocacy efforts by evaluating the merits of recommendations proposed at the regional and local office levels and by assigning projects to local advocates for further research.
In evaluating the merits of local and regional advocacy recommendations, the councils are to either (1) endorse a recommendation and forward it to the National Office, (2) decide that a recommendation has merit but that more work needs to be done to support it, or (3) decide that the recommendation does not have merit. The councils are also responsible for providing guidance to local offices on advocacy projects and for ensuring that local offices receive feedback on advocacy projects. Local advocates receive ideas for advocacy efforts from a variety of sources, such as PRP’s case inventory system, PRP caseworkers, functional staff, and tax practitioners. In some cases, immediate action can be taken by local managers to improve local procedures and prevent local administrative problems. In other cases, the idea may be studied at the local office and recommendations for improvement can be forwarded to the responsible advocacy council for agencywide consideration. Advocacy recommendations also come from the Equity Task Force, which was chartered to make recommendations to the Advocate. The task force is comprised of a cross-section of IRS functional executives, functional staff, and Advocate Office staff. Recommendations from the task force are designed to further the interests of fairness in tax administration. Inadequate Information Available to Ensure the Most Effective Use of Time Spent on Advocacy We understand the need for the Advocate’s Office to give priority to individual taxpayer problems (i.e., casework) over advocacy when there is not enough time to do both. If the PRP workload were to increase, it could become even more difficult for the Advocate’s Office to find time to spend on advocacy. The Advocate’s Office must, therefore, make the best possible use of the time available for advocacy. 
However, at the time of our review, the Advocate’s Office did not have the kind of information needed to (1) make sound decisions on which projects to undertake and (2) protect against wasteful duplication of effort. Information on Which Projects to Work Was Limited The Advocate’s Office did not systematically gather the information needed to identify and prioritize advocacy projects. For the most part, advocacy projects were identified by analyzing the codes used to categorize taxpayer problems for the Advocate’s case inventory system. However, IRS officials said that analyzing these codes was not the best means of identifying advocacy projects because the codes do not provide enough information on the nature of the problems. For example, one code indicates that the problem involved a lost or stolen refund. However, there is no way to tell from the code why the case ended up in PRP. There are normal procedures for taxpayers and IRS to follow in getting a lost or stolen refund replaced. The fact that such a case ended up in PRP does not indicate whether there was some procedural failure that resulted in IRS’ inability to produce a replacement refund for the taxpayer. This level of detail may be available on individual cases—in the case history section—however, there is no way to search the Advocate’s case inventory system for this information. As a result, the inventory system can only describe the frequency of taxpayer problems; it cannot describe why the problem ended up in PRP. In the absence of such information, the Advocate’s Office does not know which advocacy efforts have the greatest potential to resolve recurring taxpayer problems. Comprehensive Information on Advocacy Projects Not Available We found that the Advocate’s Office did not have a comprehensive source of information on all proposed, ongoing, and completed advocacy projects. Additionally, we found that field staff did not always have access to information on advocacy projects.
Because advocacy projects can be started at any level in the Advocate’s Office, it is important that everyone have access to comprehensive information on advocacy projects. Among other things, such information should help enhance coordination and prevent unnecessary duplication of effort. Information on advocacy efforts is available from (1) the Advocacy Project Tracking System, (2) an inventory of legislative recommendations, and (3) the Commissioner’s Tracking System. None of these three sources provides a comprehensive listing of advocacy projects. The Advocacy Project Tracking System is a document maintained by the Advocate’s Office that includes completed and ongoing advocacy projects from IRS’ four regional offices and EOSCO. The document is presented as a matrix with information on each project, such as the project title, a short description of the project, the project contact points, and the status of the project’s recommendations. The matrix was last updated as of September 30, 1998, and there were 39 projects listed. The matrix does not, however, contain any information on what projects were ongoing or planned at the local offices. The inventory of legislative recommendations aimed at changing the current tax law is a document that presents potential legislative changes recommended by each region and EOSCO. The list is also presented in matrix format with information on each recommendation, such as whether there is an advocacy project related to the issue, what the proposed action is, and the status of the recommendation. The list is tracked by fiscal year, and those recommendations that the Advocate endorsed are to be included in the Advocate’s Report to Congress. As of April 1999, there were seven legislative recommendations on the list. This listing, however, is limited to proposed changes to legislation and does not contain any information on ongoing, proposed, or completed advocacy projects.
The Commissioner’s Tracking System is a document that is to contain information on all advocacy memorandums that were sent to functional management. Advocacy memorandums are to be sent from the Advocate to a National Office function when the function resists or does not respond to a recommendation that the Advocate feels will alleviate harm to taxpayers or decrease taxpayer burden. Advocacy memorandums are to be sent only after other inquiries or attempts to resolve the issue have proved unsuccessful. The function is asked to respond in writing to the advocacy memorandum. If the functional area does not provide a satisfactory reason for not implementing a recommendation, the Advocate can compel the function to implement the recommendation by issuing a Taxpayer Advocate Directive, as explained previously. Information on the Commissioner’s Tracking System and information from the directives are to be included in the Advocate’s Annual Report to Congress. As of April 1999, the Advocate had written 12 advocacy memorandums. This document only contains those advocacy recommendations that were not willingly implemented by the functions and is not a complete source of advocacy projects. The lack of a comprehensive source of information on all advocacy projects increases the risk of unnecessary duplication of effort. In that regard, staff at several locations said that regional and local offices are often studying similar problems. For example, an IRS regional official said that two of IRS’ regions were studying projects on several similar issues, including (1) the application of taxpayer overpayment credits and (2) waiving the 10 percent additional tax penalty on withdrawals from Individual Retirement Accounts in hardship cases. The official also said that there was even less awareness of ongoing efforts at the district office and service center levels than at the regional level described above. 
A more comprehensive source of information on advocacy projects might also help provide staff with better feedback on project results. Although each IRS region and EOSCO has an advocacy council that is responsible for providing feedback to district and service center staffs, the local advocates and council members with whom we talked said that there was no formal mechanism for providing feedback. In particular, council members said that staff who had worked on projects were not receiving status reports or even an acknowledgment that recommendations from their projects were being considered for possible implementation, even though the councils were responsible for providing this information. In one district, for example, the local advocate said that he had forwarded the same advocacy recommendations to the regional council over the course of several years, but had not received feedback concerning what actions, if any, were taken on these recommendations. Proposed Changes for Strengthening Advocacy Efforts Are Under Consideration In October 1998, the Advocate established a team to develop recommendations to strengthen advocacy within IRS. The team’s goal was to develop and recommend a national advocacy plan, and the team made several recommendations to the Advocate for initiating and coordinating advocacy projects. For example, in its February 1999 report to the Advocate, the team recommended that a national advocacy database be developed to provide a single source of information on ongoing advocacy projects. The national database would contain information on all advocacy efforts, including those at the national, regional, and local levels. In addition, the database would be accessible to all local advocates. As part of the effort to redesign the Advocate’s Office, the Advocate said that he plans to adopt the recommendations from the team. 
Also, the redesign plan calls for establishing separate casework and advocacy units, each with its own dedicated staff, thus reducing the possibility that advocacy will suffer in times of high PRP caseloads. The Advocate’s Office Lacked Adequate Measures of Effectiveness IRS lacked adequate measures of the effectiveness of the Advocate’s Office and PRP. The set of measures used by the Advocate’s Office during our review (1) provided descriptive information about program activities, such as the average amount of time it takes to close a PRP case, rather than information needed to assess effectiveness; (2) did not provide complete data; or (3) were not based on consistent data collection. IRS has efforts under way to improve program measures. Those efforts, at least as they relate to the Advocate’s Office, may be hampered by existing information systems. While it is necessary for an organization to measure program activities, such as average case processing time, the more important and more difficult task is to develop measures of effectiveness that focus on the impact of an agency’s programs on its customers. Measures of effectiveness are important because they provide data to improve program performance, increase accountability, and support decisionmaking. We found that the Advocate’s Office had measures to gauge certain aspects of PRP’s performance, but that these measures could not fully assess PRP’s effectiveness. For example, the Advocate’s Office did not have a method for measuring customer satisfaction, and there was no mechanism for determining the effectiveness of advocacy efforts—both of these measures would help the Advocate understand how effective the program is in helping taxpayers. 
Existing PRP Measures Are Inadequate During our review, the Advocate’s Office was using the following four measures to gauge PRP’s performance: (1) average processing time to close PRP cases, (2) currency of PRP case inventory, (3) quality of casework, and (4) case identification rate. Although those measures provided some useful information, they did not provide all of the information needed to assess PRP’s effectiveness. (See app. VI for more information on these measures.) The first two measures, average processing time and currency of case inventory, provide descriptive information about program activities. Although these measures are useful for some program management decisions, such as the number of staff needed at a specific office, they do not provide information on PRP’s effectiveness. The third measure is designed to determine the quality of PRP casework. Although this measure provides some data on program effectiveness, it provides no information on customer satisfaction. In commenting on a draft of this report, IRS said that it is developing ways to measure PRP customer satisfaction and that it plans to test and refine the measure beginning in October 1999. By helping taxpayers resolve problems that were not resolved elsewhere in IRS, the Advocate’s Office plays a pivotal role in delivering customer service to the taxpayers. Customer satisfaction data from taxpayers who contacted PRP could provide the Advocate’s Office with information on how taxpayers feel about the service they received and whether taxpayers consider their problems solved. Without this information, the Advocate’s Office is not in the best position to improve program operations to better satisfy taxpayer needs. The fourth measure, PRP case identification, attempts to determine if service center employees are properly identifying potential PRP cases from incoming correspondence. 
This measure is an important tool to help the Advocate’s Office know whether PRP actually serves those taxpayers who qualify for help from the program. However, the measure provides an incomplete picture because it is designed for use only at service centers. There is no similar measure to determine how well district offices and toll-free telephone call sites are identifying and referring potential PRP cases. Also, a recent review of the PRP case identification measure by IRS’ Office of Internal Audit disclosed, among other things, that inconsistent data collection could affect the integrity and reliability of the measure’s results. For example, Internal Audit found that the test was not always performed the required number of times per month at each service center and that the mail sample was not based on the service center’s incoming mail population. Additionally, when Internal Audit performed a parallel PRP case identification sample—in accordance with national standards—its rates for some service centers were significantly lower—in one case, over 55 percentage points lower—than rates reported by the service center. In addition to the shortcomings of these four measures, IRS lacked a method for determining the effectiveness of its advocacy efforts. Advocacy is a major responsibility of the Advocate’s Office and is aimed at ultimately reducing the number of taxpayers needing help from PRP. Without information on the effectiveness of these efforts, the Advocate’s Office does not know, for example, which efforts provide the greatest benefit to taxpayers. Current Information Systems May Limit IRS’ Ability to Develop Needed Measures The Advocate is working to improve Advocate Office and PRP measures of effectiveness. In January 1999, as part of its efforts to redesign the Advocate’s Office, IRS established a task force to determine what measures are needed to assess program effectiveness.
Specifically, the task force is to research, identify, and develop corporate-level measures for Advocate Office program results, customer satisfaction, and employee satisfaction. Its ability to develop needed measures of effectiveness, however, may be hampered by existing information systems.

The Taxpayer Advocate Management Information System comprises the Problem Resolution Office Management Information System (PROMIS), the Customer Feedback System, and the PRP Case Identification and Tracking System. These systems do not provide the Advocate’s Office with the data it needs to assess the effectiveness of PRP operations.

PROMIS is a computerized inventory control and report system that includes information from individual PRP cases, such as the taxpayer’s name, address, and Social Security number. PROMIS generates reports on cumulative descriptive program data, such as the number of cases worked in PRP and how quickly cases are closed. Although the system also captures data on the types of problems taxpayers are experiencing—such as the “lost or stolen refund” example mentioned earlier—there is no mechanism to search for other data that might help advocacy efforts by pointing to agencywide weaknesses. For instance, there is no way to determine whether cases were caused by problems with a particular IRS system because the cases are coded only by type of tax problem. In a case history section, PROMIS captures information on the nature of the taxpayer’s problem and what actions were taken to help the taxpayer. However, there is no mechanism to search multiple cases for trend data from the history section.

The Customer Feedback System is designed to capture taxpayers’ compliments and complaints about IRS employees. The system depends on taxpayers taking the initiative to voluntarily comment about the treatment they received from an IRS employee and on IRS managers completing the customer feedback form.
The voluntary nature of the system means that the data are not statistically representative of program participation, and fluctuations in the data cannot be attributed to changes in program operations. In addition, comments captured in the system could relate to any IRS function, not just PRP, and are therefore of limited use in assessing PRP’s effectiveness.

The PRP Case Identification and Tracking System is used to capture information on the PRP case identification measure discussed earlier. As such, it has the same limitations as that measure—it has information only on cases coming into IRS through correspondence at the service centers. Because this system contains no information on cases coming in through district offices and call sites, it provides incomplete data on whether taxpayers who qualify for PRP assistance are being properly identified and referred to PRP.

In addition to the problems with the Taxpayer Advocate Management Information System, we mentioned earlier that the Advocate’s Office lacked a system to track resources dedicated to the program. Because PRP was implemented through IRS’ functions, the Advocate’s Office had no system to track the resources devoted to PRP. Without this basic program information, the Advocate’s Office had no means to determine what it invested in the program.

Conclusions

The Advocate’s Office can provide a valuable service by helping (1) taxpayers who have been unable to resolve their problems elsewhere in IRS and (2) taxpayers who are suffering significant hardships. We have identified challenges, obstacles, and deficiencies in Advocate Office and PRP operations that could affect how efficiently and effectively services are provided to taxpayers. The Advocate’s Office is in the midst of identifying and implementing changes designed to improve its operations.
Many of the changes, such as restructuring Advocate Office operations and creating career paths for local advocates, are due to requirements of the IRS Restructuring and Reform Act of 1998; other changes, such as developing position descriptions for PRP caseworkers, are the result of Advocate Office initiatives. However, it is too soon to tell how effective these changes will be in addressing the challenges cited in this report.

Two areas in which changes are being considered are advocacy and performance measures. Changes in both areas require the development of better information systems than are currently available. For example, without a system or systems that provide (1) information needed to identify and prioritize advocacy projects and (2) comprehensive information on all proposed, ongoing, and completed advocacy projects, IRS has no assurance that the Advocate’s Office is using the resources available for advocacy most effectively. Similarly, without a system or systems that provide better data than are now available in the Taxpayer Advocate Management Information System, IRS’ ability to develop appropriate measures of PRP effectiveness may be hampered.

Recommendations to the Commissioner of Internal Revenue

To better ensure that the Advocate’s Office effectively uses the resources available for advocacy, and thus enhances its ability to prevent the recurrence of taxpayer problems and ultimately reduce the number of taxpayers who need help from PRP, we recommend that the Commissioner of Internal Revenue direct appropriate officials to define the requirements for, and develop, a system that captures the information needed to identify and prioritize potential advocacy projects and to provide feedback to staff on ongoing and completed projects.
To better manage PRP resources and improve operations, we recommend that the Commissioner of Internal Revenue direct appropriate officials to design management information systems that can support outcome-oriented performance measures.

Agency Comments and Our Evaluation

The Commissioner of Internal Revenue commented on a draft of this report by letter dated July 7, 1999, in which he generally agreed with our findings and concurred with our recommendations. (See app. VII for a copy of the letter.) We modified the report to ensure technical correctness and to include updated information where appropriate.

We are sending copies of this report to Representative Charles B. Rangel, Ranking Minority Member, House Committee on Ways and Means; Representative Amo Houghton, Chairman, and Representative William J. Coyne, Ranking Minority Member, Subcommittee on Oversight, House Committee on Ways and Means; and Senator William V. Roth, Jr., Chairman, and Senator Daniel P. Moynihan, Ranking Minority Member, Senate Committee on Finance. We are also sending copies to The Honorable Lawrence H. Summers, Secretary of the Treasury; The Honorable Charles O. Rossotti, Commissioner of Internal Revenue; The Honorable Jacob J. Lew, Director, Office of Management and Budget; and other interested parties. Copies of this report will be made available to others upon request. If you have any questions regarding this letter, please contact me or David Attianese at (202) 512-9110. Key contributors to this assignment were Kelsey Bright, Isidro Gomez, and Susan Malone.

Detailed Scope and Methodology

Offices Visited

We interviewed agency officials at the Internal Revenue Service’s (IRS) National Office, all 4 IRS regional offices, the Executive Office for Service Center Operations (EOSCO), and 17 local offices, including 9 district offices, 4 former district offices, and 4 service centers (see table I.1 for a list of the regional and local offices we visited).
Regional offices: Midstates Region (Dallas, TX); Northeast Region (New York, NY); Southeast Region (Atlanta, GA); Western Region (San Francisco, CA).

District offices: Delaware-Maryland (Baltimore, MD); Central California (San Jose, CA); Kentucky-Tennessee (Nashville, TN); Manhattan (New York, NY); Northern California (Oakland, CA); North Texas (Dallas, TX); Pacific Northwest (Seattle, WA); Pennsylvania (Philadelphia, PA); Virginia-West Virginia (Richmond, VA).

Former district offices: Augusta, ME (part of the New England District); Pittsburgh, PA (part of the Pennsylvania District); Portland, OR (part of the Pacific Northwest District); Sacramento, CA (part of the Northern California District).

We selected the 17 local offices on the basis of suggestions from Advocate Office staff; our stratification of offices to obtain a variety by size, type of work, and organization; and geographic convenience. At the National Office, we interviewed the National Taxpayer Advocate, his predecessor, and members of his staff. We interviewed the EOSCO advocate, and, at the regional and local offices, we interviewed the advocates and their staffs. Additionally, at the local offices, we interviewed Problem Resolution Program (PRP) coordinators and PRP caseworkers. We also attended two regional advocacy council meetings and discussed PRP operations with council members.

Program Information Reviewed

We reviewed documents, including sections of the Internal Revenue Manual pertaining to the Advocate’s Office and PRP; IRS Internal Audit reports on the Advocate’s Office and PRP; and National Office, regional, district, and service center program documents. We reviewed program data on advocacy efforts at IRS. These data included Internal Revenue Manual instructions on advocacy and databases on ongoing and completed advocacy projects. We interviewed Advocate Office staff responsible for advocacy, as well as local advocates and their staffs, to determine how advocacy projects are identified and implemented.
We also attended two Regional Advocacy Council meetings to increase our awareness of program operations. We did not assess IRS’ effectiveness in implementing proposed advocacy projects because IRS Internal Audit had an ongoing assignment with that specific objective.

We reviewed program management information for the Advocate’s Office and PRP. This information included program goals and measures and the systems by which these data are captured. Data for the Advocate’s Office and PRP are captured on the Taxpayer Advocate Management Information System, which includes three separate systems—the Problem Resolution Office Management Information System, the PRP Case Identification and Tracking System, and the Customer Feedback System.

Staffing Surveys

Through our surveys and with the help of the Advocate’s Office, we obtained staffing information from all IRS locations where Advocate Office and PRP work was being done. We attempted to get information for all IRS staff doing Advocate Office or PRP work, including PRP caseworkers, as of June 1, 1998. (See apps. II and III for summaries of the responses to our surveys.) When necessary, we verified responses to the staffing surveys by telephone. However, we did not verify that we received responses for all staff doing Advocate Office or PRP work. The results of our surveys were limited to a specific time (June 1, 1998), and responses may vary based on how staff interpreted our questions.

Summary of Responses to the Taxpayer Advocate Staffing Survey

This appendix contains a summary of responses to the survey we sent to the National Taxpayer Advocate, the four regional advocates, the advocates at EOSCO and the Office of the Assistant Commissioner (International), and the 43 local advocates. In that survey, we asked for information on each person on their staffs as of June 1, 1998, including the advocate or associate advocate, where appropriate.
We received responses for 508 staff, including staff detailed to advocate offices by the operating functions.

Survey Questions and Responses

1. Series? (Provide the position series number.) [Response distribution table omitted.] "Series" means a number identifying a recognized occupation in the federal service that includes all jobs at the various skill levels in a particular kind of work.

2. Grade? (Provide the person’s current grade level.) [Response distribution table omitted.] "Grade" means the level of classification an employee has under a position classification system (i.e., referring to the duties, tasks, and functions he or she performs).

3. Permanent or detailed? (Indicate whether the person is assigned to the Taxpayer Advocate’s Office on a permanent or detailed basis.) Response categories were Permanent, Detailed, Other, and No response. "Other" includes interim and temporary assignments.

4. Full-time or part-time? (Indicate whether the person is a full-time or part-time IRS employee.)

5. Percent of time spent on Advocate Office work? (Estimate the percentage of time the person does work related to the Advocate’s Office, including Problem Solving Day cases. Response should be 100 percent, unless the person also does work for another office or function.) Responses were grouped as 1-24, 25-49, 50-74, 75-99, and 100 percent. "Advocate Office work" includes work related to PRP.

6. Years at IRS? (Provide the number of years the person has worked at IRS. Do not include other government experience.)

7. Years in PRP? (Provide the number of years the person has done PRP work.)

8. Prior IRS function(s)? (Check all that apply.) (Indicate the IRS operating function(s) where the person worked before coming to the Taxpayer Advocate’s Office.)

9. How acquired by the Advocate’s Office? (Check one.)
(Indicate how the person obtained a position in the Advocate’s Office.) Response categories were Competed, Volunteered, Assigned, Detailed, Other, and No response. "Other" includes methods not in the categories above, such as reassignments resulting from IRS’ reorganization and hardship transfers.

10. Grade when entered PRP? (Provide the person’s grade level when he or she entered PRP.)

11. Report to? (Provide the title and home function of the individual to whom the person reports.) Response categories were IRS management, Advocate Office management, Functional management, and No response. "IRS management" means the head of an IRS office or his or her designee.

12. Evaluated by? (Provide the title and home function of the individual who evaluates the person’s performance.) Response categories were IRS management, Advocate Office management, Functional management, and No response. "IRS management" means the head of an IRS office or his or her designee.

13. [Question text and response table omitted.] One response category covered cases sent to IRS by the Senate Finance Committee. "Other" includes types of work not in the categories above, such as program management and program support.

14. Training completed? (Check all that apply.) (Indicate the type of training that the person has completed for his or her current position.) [Response distribution table omitted.] "PRP course for current position" includes training for the positions of advocate, analyst, and PRP specialist. Other categories covered standards for doing PRP casework; annual training to update Advocate Office and PRP staff on current issues and new laws affecting PRP; and "Intelligent Query," a software package for generating specialized reports using the PROMIS database.

Summary of Responses to the Functional PRP Staffing Survey

This appendix contains a summary of responses to the survey we sent to the 43 local advocates and the advocate at the Office of the Assistant Commissioner (International).
In that survey, we asked for information on each district office and service center employee assigned to functional PRP work as of June 1, 1998. We received responses for 2,215 staff—1,018 district office staff and 1,197 service center staff.

Survey Questions and Responses

1. PRP role(s): (Check all that apply.) (Indicate each person’s role(s) in PRP.) [Response distribution table omitted.] "Other" includes PRP roles not in the categories above, such as clerical support.

2. Series? (Provide the position series number.) [Tables showing the number and percentage of district office and service center staff by position series omitted.] "Series" means a number identifying a recognized occupation in the federal service that includes all jobs at the various skill levels in a particular kind of work. "Occupation" means an occupational series listed in the Handbook of Occupational Groups and Families developed by the U.S. Office of Personnel Management to aid federal agencies in classifying positions under the Classification Act of 1949 and P.L. 92-392.

3. Grade? (Provide the person’s current grade level.) [Response distribution table omitted.] "Grade" means the level of classification an employee has under a position classification system (i.e., referring to the duties, tasks, and functions he or she performs).

4. Permanent or detailed? (Indicate whether the person is assigned to PRP work on a permanent or detailed basis.) Permanent, 93.6 percent; Detailed, 4.8 percent; Other, 0.2 percent; No response, 1.4 percent. "Other" includes temporary and seasonal assignments.

5. Full-time or part-time? (Indicate whether the person is a full-time or part-time IRS employee.)

6.
Percent of time on PRP work? (Estimate the percentage of time the person does work related to PRP, including Problem Solving Day cases. Response should be 100 percent, unless the person also works outside of PRP.)

7. Years at IRS? (Provide the number of years the person has worked at IRS. Do not include other government experience.)

8. Years in PRP? (Provide the number of years the person has done PRP work.)

9. Position funded by? (Check one.) (Indicate the function that funds the person’s position.) [Response distribution table omitted.] "Other" includes IRS functions not in the categories above. It also includes functions found only in service centers, such as Accounting, Taxpayer Relations, Adjustments, and Returns Processing.

10. How acquired by PRP? (Check one.) (Indicate how the person obtained a functional position in PRP.) Response categories were Competed, Volunteered, Assigned, Detailed, Other, and No response. "Other" includes methods not in the categories above, such as redeployment and assigned "as needed."

11. Grade when entered PRP? (Provide the person’s grade level when he or she entered PRP.) Percent of staff by grade: GS-3, 0.4; GS-4, 5.3; GS-5, 3.1; GS-6, 13.1; GS-7, 49.5; GS-8, 10.9; GS-9, 5.3; GS-10, 3.9; GS-11, 0.4; GS-12, 1.5; GS-13, 0.3; GS-14, 0.0; GS-15, 0.0; No response, 6.3; Total, 100.0. "Grade" means the level of classification an employee has under a position classification system (i.e., referring to the duties, tasks, and functions he or she performs).

12. Report to? (Provide the title and home function of the individual to whom the person reports.) [Response distribution table omitted.] "Functional management" means the head of an IRS division, function, or functional unit, or his or her designee.

13. Evaluated by?
(Provide the title and home function of the individual who evaluates the person’s performance.) [Response distribution table omitted.] "Advocate Office management" means the head of an Advocate office or his or her designee. "Functional management" means the head of an IRS division, function, or functional unit, or his or her designee.

14. [Question text and response table omitted.] Response categories included requests for relief from hardship and cases sent to IRS by the Senate Finance Committee. "Other" includes types of work not in the categories above, such as functional work not related to PRP activities.

15. Training completed? (Check all that apply.) (Indicate the type of training that the person has completed for his or her current position.) [Response distribution table omitted.] "PRP course for current position" includes training for the positions of PRP manager, coordinator, and caseworker. Other categories covered standards for doing PRP casework; annual training to update PRP staff on current issues and new laws affecting PRP; and "Intelligent Query," a software package for generating specialized reports using the PROMIS database.

Selected Survey Results for PRP Caseworkers

Following are five tables with selected results from our survey of district office and service center functional employees assigned to PRP work as of June 1, 1998. The results in this appendix are for the 726 district office and 806 service center staff who were identified as PRP caseworkers in the responses to our functional PRP staffing survey.

[Tables IV.1 and IV.2 are not reproduced here.] "Advocate Office management" means the head of an Advocate office or his or her designee. "Functional management" means the head of an IRS division, function, or functional unit, or his or her designee.

Table IV.3: Evaluation of Functional PRP Caseworkers [Values omitted.] "Advocate Office management" means the head of an Advocate office or his or her designee. "Functional management" means the head of an IRS division, function, or functional unit, or his or her designee.
Table IV.4: Training Completed by Functional PRP Caseworkers [Values omitted.] Training categories included standards for doing PRP casework; annual training to update Advocate Office and PRP staff on current issues and new laws affecting PRP; and "Intelligent Query," a software package used to generate specialized reports from the PROMIS database.

Table IV.5: Average Percentage of Time Spent by Functional PRP Caseworkers on Specific Types of Work [Values omitted.] Categories included requests for relief from hardship and cases sent to IRS by the Senate Finance Committee. "Other" includes types of work not in the categories above, such as functional work not related to PRP activities.

Factors That Have Increased and Could Increase PRP Workload

Factors that have increased and could increase PRP workload include PRP criteria that can be and have been broadly interpreted to include any situation; IRS initiatives, such as Problem Solving Days, Citizen Advocacy Panels, and the introduction of a PRP toll-free telephone number; and a legislative requirement designed to increase public awareness of advocate operations.

Broad Interpretation of PRP Criteria

PRP cases can be generated when a taxpayer contacts a local advocate with a problem or when a front-line IRS employee, such as a customer service representative, revenue officer, or revenue agent, determines that a situation should be referred to the Advocate’s Office. The Internal Revenue Manual contains the following criteria for determining whether a situation qualifies as a PRP case: any contact on the same issue at least 30 days after an initial inquiry; any contact that indicates the taxpayer has not received a response from IRS by the date promised; and any contact that indicates regular methods have failed to resolve the taxpayer’s problem, or that it is in the best interest of the taxpayer or IRS that the case be worked in PRP.
Officials in the Advocate’s Office said that the way PRP criteria are interpreted had increased PRP’s workload because the portion of the third criterion that reads "in the best interest of the taxpayer or IRS that the case be worked in PRP" can be interpreted so that any case qualifies as a PRP case. In that regard, the National Taxpayer Advocate said that he was committed to work any case for which a taxpayer was seeking help from PRP.

In commenting on our draft report, the Commissioner of Internal Revenue stated that the PRP criteria had recently been modified. IRS added the following four criteria: the taxpayer is suffering or is about to suffer a significant hardship; the taxpayer is facing an immediate threat of adverse action; the taxpayer will incur significant costs if relief is not granted (including fees for professional representation); and the taxpayer will suffer irreparable injury or long-term adverse impact if relief is not granted. Additionally, IRS deleted the portion of the third criterion that read, "in the best interest of the taxpayer or IRS that the case be worked in PRP." Because we received the information on the modified criteria as part of the agency comment letter, we did not have time to evaluate the potential impact that the modified criteria might have on the PRP workload.

IRS Initiatives Place Demands on Program Resources

IRS officials said that part of the PRP workload increase can be attributed to an IRS initiative known as Problem Solving Days, which is the responsibility of the National Taxpayer Advocate. Continuation of that initiative and the recent start of two other IRS initiatives—Citizen Advocacy Panels and publication of a unique toll-free telephone number for taxpayers to call the Advocate’s Office—could place increasing demands on PRP resources.

Problem Solving Days

In November 1997, IRS began holding a series of monthly Problem Solving Days in each of its 33 districts.
The purpose of these days is to give taxpayers with unresolved tax problems the opportunity to meet face to face with IRS staff in an effort to resolve those problems. These days have been advertised both locally and nationally through newspaper articles, television and radio interviews with IRS officials, and public service announcements. From November 1997 to November 1998, over 36,000 PRP cases were closed as a result of IRS’ Problem Solving Days. According to local advocates, Problem Solving Days have not only provided taxpayers with in-person service but also allowed IRS staff to meet with taxpayers and help solve problems. However, the local advocates also said that the work involved in planning and executing these days, and the subsequent increase in casework, was taking its toll on Advocate and PRP staff.

Citizen Advocacy Panels

In addition to their other duties, some local advocates are responsible for implementing Citizen Advocacy Panels within their districts. Collectively, the panels are designed to serve as advisory bodies to the Secretary of the Treasury and the Commissioner of Internal Revenue to improve IRS service and responsiveness. The panels are chartered to (1) provide citizen input into improving IRS customer service by identifying problems and making recommendations for improving IRS systems and procedures, (2) identify and elevate problems to appropriate IRS officials and monitor progress to effect change, and (3) refer taxpayers to the appropriate IRS office for assistance in resolving their tax problems. Membership on the panels is to include the local advocate and 8 to 15 citizens from the district. The South Florida District held the first public meeting of a Citizen Advocacy Panel in November 1998. Three more districts—Brooklyn, Midwest, and Pacific Northwest—have established panels and plan to hold public meetings during fiscal year 1999. Initially, IRS had planned to establish panels in each of its 33 districts.
However, IRS is reevaluating this need in light of the agency’s planned reorganization. According to local advocates, the Citizen Advocacy Panels represent a significant time commitment for them. Not only are the local advocates members of the panels, but they are also responsible for the administrative duties associated with the panels, such as securing space and equipment for meetings. In addition to the time commitments for the local advocates, taxpayers may be referred to PRP for further assistance, which could increase PRP’s workload.

PRP Toll-free Telephone Number

Local advocates said that the introduction of a toll-free telephone number for taxpayers to call the Advocate’s Office could increase PRP workloads. The number was operational as of November 1, 1998, and has been advertised in IRS publications, such as the tax year 1998 Form 1040 tax package. IRS has used customer service staff as PRP telephone assistors to answer the calls, and the assistors have been equipped with computer systems that allow them to help some taxpayers immediately. There were 241,228 calls placed on this line between November 1, 1998, and April 17, 1999; according to the Advocate, 85 percent of these calls were for non-Advocate Office matters. The Advocate said that the procedure for the PRP toll-free assistors is to help any caller if the assistor has the time and ability; if the assistor cannot help, the assistor is to transfer the caller to IRS’ general assistance phone lines. There is no way of determining whether taxpayers who were referred to local advocate offices through the toll-free line would have contacted IRS anyway or whether it was the availability of the new toll-free line that prompted them to contact PRP. Therefore, the actual increase in PRP cases, if any, cannot be accurately determined. However, local advocates were concerned that the toll-free line would dramatically increase PRP’s future caseload.
They were also concerned that this toll-free line would be inundated with calls from taxpayers needing general assistance—calls that would be better handled by another toll-free line that IRS has available for that purpose.

Legislative Requirement May Increase Demands on PRP

A legislative requirement designed to increase public awareness of advocate operations may increase demands on PRP. The IRS Restructuring and Reform Act of 1998 required IRS to include the address and telephone number of local advocates on statutory notices of deficiency sent to taxpayers. IRS began sending taxpayers the revised notice in August 1998. The notices state that taxpayers can contact their local advocate with their tax problem for "proper and prompt handling" if the problem is not resolved through normal IRS channels. IRS officials said that about 1 million statutory notices are sent out each year, and some portion of those taxpayers will probably contact the advocates, causing a corresponding increase in workloads. According to a local advocate, some taxpayers may have a legitimate reason to contact their local advocates. For example, a taxpayer may have repeatedly tried without success to rectify the problem addressed in the notice. Other taxpayers may contact their local advocates simply because the telephone number is made available.

PRP Performance Measures and Information Systems

Performance Measures

At the time of our review, the Advocate’s Office used four measures to gauge PRP’s performance. They were the (1) average processing time to close PRP cases, (2) currency of PRP case inventory, (3) quality of casework, and (4) case identification rate. Table VI.1 shows actual performance results for the four measures for fiscal years 1996 through 1998.
Table VI.1: Performance Results for the Office of the Taxpayer Advocate (Fiscal years 1996-1998)

The four performance measures were average processing time (in days); currency of case inventory (in days); quality of casework (percentage of standards met, 72.6); and PRP case identification (percentage of PRP-eligible cases that were identified as PRP cases in IRS service centers, 86.4). [Year-by-year values are otherwise omitted.] Data for the case identification measure were not collected until fiscal year 1998.

The first indicator, average processing time, represents the average number of days it took to close a PRP case. The measure does not include cases that were opened and closed on the same day because these cases are not included in PRP’s inventory control system. The measure also does not include cases in which the Advocate’s Office made a determination of hardship, which represented about 10 percent of the total PRP cases closed during fiscal year 1998. Hardship cases are not included in this measure because IRS requires that these cases be closed in 2 days; including them might misrepresent the actual average closure times for PRP cases.

The second indicator, currency of case inventory, is designed to measure the average number of days that cases have been in the open PRP inventory.

The third measure, expressed as a percentage, is designed to determine the quality of PRP casework. This measure is to be based on a statistically valid sample of PRP cases and provides the National Taxpayer Advocate with data on the timeliness and technical accuracy of PRP casework. Each month, sampled cases are to be sent to two locations—one for district office cases and one for service center cases—for review. Reviewers at these locations are to check the cases against a list of 13 quality standards, broken into 3 categories—timeliness, communication, and accuracy. Each of the 13 standards is worth a certain number of points, totaling 100.
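As an illustration, the point-based scoring scheme described above can be sketched as follows. The individual standard names and point values in this sketch are hypothetical, since the report states only that there are 13 standards in 3 categories totaling 100 points; the scoring arithmetic is what the sketch demonstrates.

```python
# Illustrative sketch of the PRP quality-of-casework measure: 13 standards in
# 3 categories (timeliness, communication, accuracy) with point values
# totaling 100. A case earns the points for each standard it meets; the
# program-level result is the average over the monthly sample of cases.
# Standard names and point values below are hypothetical.

STANDARDS = {
    # name: (category, points)
    "taxpayer_contacted_by_promised_date": ("timeliness", 10),
    "interim_actions_timely": ("timeliness", 10),
    "case_closed_timely": ("timeliness", 10),
    "correspondence_clear": ("communication", 10),
    "taxpayer_kept_informed": ("communication", 10),
    "closing_letter_sent": ("communication", 5),
    "problem_completely_resolved": ("accuracy", 15),
    "account_adjustments_correct": ("accuracy", 10),
    "correct_issue_identified": ("accuracy", 10),
    "history_documented": ("accuracy", 5),
    "research_adequate": ("accuracy", 2),
    "referrals_appropriate": ("accuracy", 2),
    "case_coded_correctly": ("accuracy", 1),
}
assert len(STANDARDS) == 13
assert sum(points for _, points in STANDARDS.values()) == 100

def case_score(standards_met):
    """Points earned by one sampled case (0-100)."""
    return sum(points for name, (_, points) in STANDARDS.items()
               if name in standards_met)

def program_score(sampled_cases):
    """Average score across the monthly sample of reviewed cases."""
    return sum(case_score(met) for met in sampled_cases) / len(sampled_cases)

# Example: two reviewed cases, one meeting every standard and one missing
# the three timeliness standards (worth 30 points in this sketch).
perfect = set(STANDARDS)
late = perfect - {"taxpayer_contacted_by_promised_date",
                  "interim_actions_timely", "case_closed_timely"}
print(program_score([perfect, late]))  # 85.0
```

In this reading, the reported quality figure (e.g., 72.6 percent of standards met) corresponds to the sample-wide average of per-case point totals.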
Cases are to be reviewed to determine, among other things, whether the caseworker contacted the taxpayer by a promised date, whether copies of any correspondence with the taxpayer appeared to communicate issues clearly, and whether the taxpayer's problem appeared to be completely resolved. The caseworkers and local advocate staff we talked with said that the quality measure was helpful because the elements that are reviewed provide a checklist for working PRP cases. According to staff, this helps ensure that most cases are worked in a similar manner in accordance with standard elements. The fourth measure, PRP case identification, is used only at the service centers and attempts to determine whether service center employees are properly identifying potential PRP cases from incoming correspondence. Service center employees responsible for sorting the mail are also responsible for identifying potential PRP cases. The measure is to be based on a sample of mail coming into each service center. Analysts at the service centers are to review each sampled piece of incoming mail, identify potential PRP cases, and return the mail to the workflow. After the mail has been sorted and sent to the various service center units for handling, the analysts are to check to see what percentage of the sampled mail was correctly identified for PRP. Information Systems The Taxpayer Advocate Management Information System comprises the Problem Resolution Office Management Information System (PROMIS), the Customer Feedback System, and the PRP Case Identification and Tracking System. PROMIS is a computerized inventory control and report system for PRP cases. Background information on PRP cases, such as the taxpayer's name, address, tax identification number, and a code to identify the taxpayer's problem, is captured on the system.
Additionally, in a case history section, the system captures a detailed description of the taxpayer's problem, along with what actions were taken, and when, to help the taxpayer. This system produces standard reports for the Advocate's Office and can be queried to produce other, more specific reports, including counts on any of its data fields, such as the number of cases opened or closed during a certain time period or at a certain location. The case history section cannot be queried as to what specific problems faced the taxpayers. All PRP cases, except those opened and closed on the same day, are to be entered into PROMIS. The Customer Feedback System was required by the second Taxpayer Bill of Rights and is designed to capture taxpayers' compliments and complaints about IRS employees and what actions, if any, were taken regarding the cases. If a taxpayer calls or writes IRS concerning the behavior of an employee, IRS managers are responsible for recording information on a customer feedback form. Additionally, forms are to be filled out if a manager receives a complaint about employee behavior from another IRS employee. After the forms are filled out, the information is compiled and reports can be generated. Reports include identifying what characteristics are more frequently described in customer complaints—such as an IRS employee using discourteous, unprofessional language. The system also collects data on what, if any, disciplinary actions—such as counseling or suspension—were taken against IRS employees. The PRP Case Identification and Tracking System is the system used to capture information on the PRP case identification measure. Comments From the Internal Revenue Service
Pursuant to a congressional request, GAO reviewed the operations of the Internal Revenue Service's (IRS) Office of the National Taxpayer Advocate and the Problem Resolution Program (PRP) that it administers, focusing on: (1) challenges the Taxpayer Advocate faces in managing program resources; (2) the potential effects of workload fluctuations on program operations; (3) information available to help the Advocate determine the causes of taxpayer problems and prevent their recurrence; and (4) the adequacy of performance measures the IRS uses to gauge program effectiveness. GAO noted that: (1) there are various management and operational challenges facing IRS, the Advocate's Office, and PRP; (2) how these challenges are addressed could affect how efficiently and effectively taxpayers are helped by PRP; (3) the Advocate's Office faced resource management challenges because it lacked direct control over most PRP resources; (4) planned changes to the Advocate's Office and PRP, such as placing PRP resources under the control of the Advocate's Office, could mitigate some of these issues; (5) however, it is too early to evaluate the impact of these changes; (6) IRS faces challenges as it restructures the Advocate's Office to better handle variations in PRP's workload; (7) according to Advocate Office officials, in the past, because PRP operations depended on IRS functional units for resources, any fluctuations in PRP's workload were handled by adjusting the number of functional staff assigned to work PRP cases; (8) however, the Advocate's Office is moving away from a structure that relies on other IRS units for staffing, which may make it more difficult for the Advocate's Office to handle workload fluctuations, especially workload increases; (9) the Advocate told GAO that he was committed to helping any taxpayer who contacts the office; (10) while it is understandable why the Advocate's Office might not want to turn away anyone seeking help, accepting cases that could be 
handled elsewhere in IRS could overburden PRP; (11) the demands on the Advocate's Office to resolve individual taxpayer problems have left little time for staff to spend identifying the causes of recurring taxpayer problems and recommending solutions; (12) also, limitations in the kind of information available provided little assurance that the time being spent was being used most effectively; (13) these efforts on recurring problems, called advocacy, are key to the success of the Advocate's Office because the improvements they generate can reduce the number of taxpayers who ultimately require help from PRP; (14) IRS lacked adequate measures of the effectiveness of the Advocate's Office and PRP; (15) measures of effectiveness should cover the full range of Advocate Office operations so they can be used to improve program performance, increase accountability, and support decisionmaking; and (16) the set of measures used by the Advocate's Office during GAO's review focused on descriptive program events instead of program outcomes, did not provide complete data, or were not based on consistent data collection.
Background Although apprenticeship programs in the United States are largely private systems paid for by program sponsors, the National Apprenticeship Act of 1937 authorizes and directs the Secretary of Labor to formulate and promote labor standards that safeguard the welfare of apprentices. The responsibility for formulating and promoting these standards resides with OATELS. OATELS had a staff of about 176 full-time equivalents and an annual appropriation of about $21 million in 2004, and because of budgetary constraints, OATELS officials do not expect resources to increase. At the national level, OATELS can register and deregister apprenticeship programs (i.e., give or take away federal recognition), issue nationally recognized, portable certificates to individuals who have completed registered programs, plan appropriate outreach activities targeted to attract women and minorities, and promote new apprenticeship programs to meet workforce needs. In addition to this national role, OATELS directly oversees individual apprenticeship programs in 23 states. In these states, the director for the state's apprenticeship system and other program staff are federal employees who monitor individual apprenticeship programs for quality and their provision of equal opportunity. Labor can give authority to states to oversee their own apprenticeship programs if the state meets certain requirements. Labor has given this authority to 27 states, the District of Columbia, and three territories. In these states, which we refer to as council-monitored, the federal government is not responsible for monitoring individual apprenticeship programs; the state is, through its state apprenticeship council. OATELS does, however, conduct two types of reviews to determine how well the state fulfills its responsibilities.
Quality reviews determine, in part, conformance with prescribed federal requirements concerning state apprenticeship laws, state council composition, and program registration, cancellation, and deregistration provisions. Equal Employment Opportunity (EEO) reviews assess the conformity of state EEO plans, affirmative action activities, record-keeping procedures, and other activities with federal EEO regulations. In addition to these reviews, OATELS may also provide state agencies with federal staff to assist in day-to-day operations. The number and type of construction apprenticeship programs are distributed differently across federally- and council-monitored states. Council-monitored states not only have more programs, but these programs are more likely to be jointly sponsored by employers and unions than sponsored by employers alone. On average, a construction apprenticeship program in federally-monitored states trains about 17 apprentices and in council-monitored states trains about 20. Beyond this average, it is important to note that there can be great variation among programs, with some having over 400 participants and others only 1 or 2. Figure 1 identifies states where programs are federally- and council-monitored. Both the federal and council-monitored states collect data on the individual programs they oversee. Labor maintains a large database called the Registered Apprenticeship Information System (RAIS) and collects information about individual programs, apprentices, and sponsors for apprenticeships in the 23 states where it has direct oversight and in 8 council-monitored states that have chosen to report into this system. The other council-monitored states, 20 in total, maintain their own data and collect various pieces of information on their apprenticeship systems. Labor does collect aggregate data on apprentices and programs from these states.
In all states, individuals can enter the construction trades without completing formal apprenticeship programs, but many construction workers, particularly those working in highly skilled occupations that require extensive training, such as the electrical, carpentry, and plumbing trades, receive their training through registered apprenticeship programs. To complete their programs, apprentices must meet requirements for on-the-job training and classroom instruction that meet the minimum standards for the trade as recognized by Labor or the state apprenticeship council. Programs in some trades, for example, commercial electricity, may take 5 years to complete, but programs to train laborers may take only a year. Beginning apprentices' wages generally start at about 40 percent of the wage of someone certified in a particular trade and rise to about 90 percent of that wage near completion. Apprentices' contracts with their program sponsors specify a schedule of wage increases. Labor's Monitoring of Registered Apprenticeship Programs Is Limited Although OATELS is responsible for overseeing thousands of apprenticeship programs in the states where it has direct oversight, it reviews few of these programs each year. Also, while its apprenticeship database collects much information about individual participants and programs, Labor has not used these data to systematically generate program performance indicators such as completion rates. As a result, it lacks information that would allow it to identify poorly performing programs and adjust its oversight accordingly. Furthermore, despite many technical upgrades, Labor's database has not provided information that meets the needs of federal apprenticeship directors or the needs of other stakeholders. Few Federal Staff Are Engaged in Monitoring the Programs That Labor Directly Oversees OATELS has reviewed very few of the apprenticeship programs in the states where it has direct oversight.
Federal apprenticeship directors in these states reported they conducted 379 quality reviews in 2004, covering only about 4 percent of the programs under their watch. These reviews are done to determine, for example, whether sponsors have provided related instruction and on-the-job training hours in accordance with the standards for the program and whether wages reflected actual time in the program. The number of reviews conducted varied across states. On average, 22 quality reviews per state were conducted, but one director reported conducting as many as 67 reviews while another reported conducting none at all. In addition, programs in council-monitored states were almost twice as likely as programs in federally-monitored states to have been reviewed within 3 years. (See fig. 2.) Several federal officials said that over the past several years they had placed primary emphasis on registering new programs and recruiting more apprentices, particularly in nontraditional areas such as childcare and health. In addition, they told us it was not possible to do more reviews in part because of limited staff. Besides having fewer reviews, federally-monitored states had fewer staff dedicated to monitoring activities than council-monitored states. In 2004, each staff person in a federally-monitored state was responsible, on average, for about 2,000 apprentices, according to federal program directors; to put this in context, caseloads of monitors in federally-monitored states were almost twice as large as those in council-monitored states. In federally-monitored states, on average there were about 2.5 staff to monitor programs, less than one-third the average in council-monitored states. Labor's practice of assigning federal staff to monitor programs in 18 of the council-monitored states rather than to programs in federally-monitored states compounded differences in staff resources.
Directors in council-monitored states reported that at least two federal employees, on average, monitored programs in their jurisdiction. As important as the number of staff is how they spent their time. According to federal program directors, about half of the staff in federally-monitored states spent 40 percent or more of their time in the field performing monitoring and oversight and providing related technical assistance, whereas half of the staff in council-monitored states spent about 70 percent or more of their time in the field. While Labor Collects Much Information about Apprenticeship Programs, It Does Not Systematically Use Data to Focus Its Oversight Although Labor collects information to compute completion rates and track participants who do not complete programs in the time expected, it does not use these data to focus its oversight efforts on programs with poor performance. During a site visit in a federally-monitored state, a monitor showed us how she computed cancellation rates by hand for apprentices in programs that she felt were not doing an adequate job of training apprentices, to see if her hypotheses were correct. In the absence of performance information, directors and staff in federally-monitored states reported that a variety of factors dictated which programs to review. These included size, newness, location, date of the last review, and the sponsor's cooperativeness, as well as the location of staff resources. In addition to not using program data to target reviews, Labor has not collected and consistently entered into its database information about why apprentices cancel out of programs, although its database was designed to include such information and having it could help target reviews. Officials told us that voluntary cancellations or transfers to another program were at times associated with program quality, while other, nonvoluntary reasons, such as illness or military service, were not.
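The kind of data-driven targeting described above, which the monitor performed by hand, could be automated from apprentice records of the sort RAIS was designed to hold. A minimal sketch, assuming hypothetical records and reason codes (the actual RAIS codes differ):

```python
# Sketch of using program data to target quality reviews: rank programs by
# the share of apprentices who cancelled for reasons plausibly tied to
# program quality. Program IDs and reason codes here are hypothetical.
from collections import defaultdict

QUALITY_RELATED = {"voluntary_quit", "transfer_to_other_program"}
NOT_QUALITY_RELATED = {"illness", "military_service", "completed"}

def cancellation_rates(records):
    """records: (program_id, outcome) pairs -> quality-related cancellation rate per program."""
    total = defaultdict(int)
    flagged = defaultdict(int)
    for program_id, outcome in records:
        total[program_id] += 1
        if outcome in QUALITY_RELATED:
            flagged[program_id] += 1
    return {p: flagged[p] / total[p] for p in total}

def review_priority(records, top=3):
    """Programs ordered for review, highest quality-related cancellation rate first."""
    rates = cancellation_rates(records)
    return sorted(rates, key=rates.get, reverse=True)[:top]

records = [
    ("electrical-01", "completed"), ("electrical-01", "voluntary_quit"),
    ("roofing-07", "voluntary_quit"), ("roofing-07", "transfer_to_other_program"),
    ("carpentry-03", "completed"), ("carpentry-03", "illness"),
]
print(review_priority(records))  # roofing-07 ranks first
```

A computation like this is only as good as the reason codes behind it, which is why the optional, duplicative entries discussed next undermine targeting.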
Currently, recording the reason for an apprentice's cancellation in the database is an optional field. We found that no reason was recorded for 60 percent of the cancellations and that the entries for the remaining 40 percent did not effectively capture the reasons for leaving. Of the 18 reasons entered, the most common were "Unknown," "Voluntarily Quit," "Unsatisfactory Performance," "Discharged/Released," and "Cancelled with the Occupation," some of which did not provide useful information to target reviews. Also, other entries were close duplicates of one another, such as "left for related employment" and "left for other employment." Labor also treats as optional the data entries for its equal employment opportunity reviews, including the date of the last review, compliance status, the status of corrective actions, and other information that would improve the efficiency of managing reviews. As a result, such data were available for only about 5 percent of programs in Labor's database in fiscal year 2004. Without this information, it is more difficult to determine when programs had their last EEO review and to readily identify programs with known problems. Labor's Database Does Not Meet the Needs of Apprenticeship Directors and Other Stakeholders Despite many technical upgrades, Labor's database has not provided information that meets the needs of its federal directors or the needs of other stakeholders. While acknowledging that Labor's database has been updated and improved, 22 of the 23 directors of apprenticeship programs and their monitoring staff expressed dissatisfaction with the present system.
One complained of "daily" changes to the data fields without being informed of "why or when they will change." Expressing the desire to select and sort data on any field and generate unique reports in context with all available data, another concluded, "In short, we need a lot of flexibility with database reports that we don't have at this time." Many federal apprenticeship directors made recommendations for improving the database. In general, what state directors wanted most was a system that was stable, user friendly, and able to produce customized reports to better oversee the apprenticeship programs in their states. The list below shows the potential improvements endorsed by more than half of the state apprenticeship directors:

Increase the timeliness of notifications to state and regional offices for changes to RAIS, e.g., provide for more frequent communication (22 of 23 surveyed states).
Simplify instructions and procedures for producing reports (18 of 23 surveyed states).
Allow production of customized state and regional reports by type of industry (18 of 23 surveyed states).
Allow production of customized state and regional reports by sponsor type (17 of 23 surveyed states) and occupational type (17 of 23 surveyed states).
Improve the frequency of RAIS training (17 of 23 surveyed states).
Improve the quality of RAIS training (16 of 23 surveyed states).
Simplify instructions and procedures for inputting and updating data (16 of 23 surveyed states).
Increase available coding options to explain why apprentices leave the program (14 of 23 surveyed states).
Allow production of customized state and regional reports by sex of apprentice and race of apprentice (14 of 23 surveyed states).

OATELS has recently purchased software that enables users to extract data from Labor's databases in order to produce customized reports.
Although the software was purchased originally for the Secretary of Labor's use, Labor Information Technology and OATELS officials said they foresaw its utility for many programs and therefore decided to purchase licenses for apprenticeship field staff. However, OATELS has not necessarily taken steps to ensure field staff will be able to make optimal use of the software. About half the directors in federally-monitored states did not know the software was available or what it was. Although the software was demonstrated at a directors' meeting in 2004, several could not recall the demonstration and others were not in attendance. Moreover, two of the directors lacked basic hardware, such as the high-speed cable needed to support the software. In fact, one director told us he was working from his home because his office did not have such basics as a cable hook-up for his computer. Even if such obstacles are surmounted, the new system may not meet the staff's data needs. Two directors who were already attempting to use the software reported to us that it did not allow them to select information using the factors that would be most useful to them, such as state-level data on apprenticeship programs. In addition, Labor could or would not supply us with formal documentation describing its plans to implement the software or its vision of how the software would be used by its staff. Labor also reported that, because of budget constraints and the ease of use of the new software, it had no plans to provide training. Without such plans, Labor's commitment to the full implementation and future financing of the program is questionable. Labor Has Reviewed Council-Monitored States Infrequently, Provided Little Feedback, and Not Collected Data That Would Allow for a National Picture of Apprenticeships Labor has infrequently reviewed states to which it has delegated oversight responsibility.
This includes both quality reviews and EEO reviews to assure that these states are in compliance with federal rules for overseeing apprenticeship programs and are adhering to equal employment opportunity requirements. Moreover, states that were reviewed in recent years reported that the reviews had little utility for helping them manage their programs, in part because of the limited feedback they received. In terms of providing information to Congress and others, Labor does not collect from these states information that is readily available on apprenticeships by occupation or industry, even for occupations where shortages of skilled workers are anticipated. Labor Has Reviewed Council-Monitored States Infrequently in Recent Years Agency records indicate that Labor conducted only three quality and EEO reviews of council-monitored states in calendar years 2002 and 2003 and none in 2004, but has scheduled seven for 2005. State apprenticeship directors confirmed that reviews are infrequent. Twelve of the 27 directors in council-monitored states reported that OATELS had conducted reviews of their programs less frequently than once every 3 years, and several responded that reviews had not taken place in the last 9 to 12 years. An additional five directors reported their states had never been reviewed or that they were unaware if such reviews had taken place. The remaining 10 reported reviews took place in their states at least once every 3 years. (See fig. 3.) While neither statute nor regulation specifies the frequency with which OATELS should conduct such reviews, the reviews constitute an important mechanism for ensuring that state laws conform to the requirements necessary for Labor's recognition of a state's registered apprenticeship program.
Officials in Most Council-Monitored States Reported Reviews Were Not Very Useful, in Part Because of Limited Feedback State directors reported that the quality reviews and the EEO reviews had limited utility for helping them manage their programs. For example, only about half of them reported that the quality reviews were at least moderately useful for helping them determine their compliance with federal regulations. (See fig. 4.) Results were similar for the EEO reviews. (See fig. 5.) For example, slightly less than half of state directors reported that EEO reviews were at least moderately useful in helping them determine their compliance with federal EEO regulations. Some directors said reviews would be more useful if they focused on reviewing program-related activities in the state. Eight of the directors suggested that Labor focus more on state and local conditions and the performance of apprenticeship programs instead of focusing only on whether council-monitored states comply with federal standards. For example, one director reported the feedback he received on EEO activities was unrelated to the racial composition of the state. Also, some suggested reviews could provide opportunities for federal officials to provide assistance and share knowledge about strategies that other states have found useful. While directors had a number of ideas for improving the usefulness of quality and EEO reviews, many noted that Labor provided limited or no feedback as part of the review process. For example, one said his state agency received a brief letter from Labor stating only that the state was in compliance with federal regulations. Two others said their agencies received no documentation that a review had in fact been conducted, even though in one of these cases the state had requested the review findings.
Officials in one state said feedback from their last review was positive and indicated no problems, but a few years later, OATELS took steps to have their state apprenticeship council derecognized with no prior notice or subsequent review. Labor Has Not Collected Data That Would Allow for a National Picture of Apprenticeships Labor collects aggregate counts of apprentices for most council-monitored states and has not developed strategies to collect more detailed information that would allow for a description of apprenticeships at the national level, even for occupations where shortages of skilled workers are anticipated. Of the 28 council-monitored states, 20 have their own data systems and do not report data to Labor's apprenticeship database. These 20 states represent about 68 percent of the nation's apprentices. Labor and council-monitored states have differing opinions about why there are separate data systems. Labor officials told us that, as they were developing their database, they conducted outreach to council-monitored states. Officials from these states say otherwise. They also said that participating in Labor's database would be an onerous process or that Labor's system did not meet their state's information needs and that, therefore, they had invested the time and money to develop their own systems. Because many of these systems are not compatible with Labor's, the agency collects only total counts of apprentices and programs from these 20 states, which it uses for its official reports. While incompatible data systems may suggest that it would be difficult or costly to obtain more than aggregate counts, in collecting data for this report, we found that many of the council-monitored states—including 10 with large numbers of apprentices—were both willing and able to provide us data on apprentices by industry and by occupation as well as information on completion rates, completion times, and some wage data for occupations that we had specified.
In fact, one state reported that it had designed its apprenticeship database to collect all information required by Labor's database and had offered to report these data to Labor electronically—but Labor had not taken steps to accept this offer. Nevertheless, as one director pointed out, having a unified data picture is central to OATELS' oversight as well as its promotional activities, and, as many agree, such a system would promote the health of the registered apprenticeship system. Construction Apprenticeship Completion Rates and Wages Vary by Program Sponsor Construction apprentices in programs sponsored jointly by employers and unions (joint programs) generally completed at a higher rate and in greater numbers than those enrolled in programs sponsored by employers alone (non-joint programs). More important, despite growth in construction program enrollment, completion rates for both types of programs have declined over time, from 59 percent for apprentices enrolling in 1994 to 37 percent for apprentices enrolling in 1998. It is difficult to know what factors underlie this trend because, as noted earlier, Labor does not systematically record information about why apprentices leave programs. Apprentices who completed programs within 6 years tended to finish earlier than expected. In addition, wages for joint apprentices were generally higher at the start and upon completion of their programs. Data received from 10 council-monitored states that do not report to Labor's database generally mirrored these findings. Nearly Half of Apprentices in Joint Programs Completed Their Apprenticeships Compared with about a Third in Non-joint Programs Completion rates were generally higher for apprentices in joint programs than for those in non-joint programs.
Of the apprentices who entered programs between 1994 and 1998, about 47 percent of those in joint programs and 30 percent of those in non-joint programs completed their apprenticeships by 2004. For five consecutive classes (1994-1998) of apprentices in Labor's database, completion rates calculated after 6 years were higher for joint programs, as shown in figure 6. The data we received from 10 additional states that do not report into Labor's database showed similar trends, with joint apprentices having higher completion rates. For the complete data that we received from these 10 states, see appendix II. For the programs in Labor's database, this higher completion rate for joint apprenticeship programs held for all but 1 of the 15 largest individual trades, which collectively account for 93 percent of active apprentices in construction. (See fig. 7.) It should be noted that among the trades themselves, there were substantial variations in completion rates, often due to the nature of the work environment and other constraints, according to federal and state officials. For example, roofing programs, which have low completion rates, face unpredictable weather and seasonal work flows. Officials said that joint programs have higher completion rates because they are more established and better funded. For some joint programs, these additional resources stem in part from union members paying a small portion of their paychecks into a general training fund that is used to help defray some of the training costs for apprentices. In addition, officials suggested that, because unions tend to have a network of affiliates spread across an area, they are more likely to find work for participating apprentices in other areas when work is slow in a particular area. Local union chapters often have portability agreements with one another, which help to facilitate such transfers. Officials also said these programs provide mentoring and other social supports.
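The 6-year cohort completion rates used above, that is, the share of a given enrollment year's class that completed within 6 years, can be sketched as follows. The apprentice records are hypothetical, and the 6-year window is taken from the report's method of calculating rates for the 1994-1998 classes.

```python
# Sketch of a cohort completion-rate calculation: the share of apprentices
# enrolling in a given year who completed within a 6-year window.
# Records below are hypothetical.
from datetime import date

WINDOW_YEARS = 6

def cohort_completion_rate(apprentices, enroll_year):
    """apprentices: dicts with 'enrolled' (date) and 'completed' (date or None)."""
    cohort = [a for a in apprentices if a["enrolled"].year == enroll_year]
    done = [
        a for a in cohort
        if a["completed"] is not None
        and a["completed"] <= date(a["enrolled"].year + WINDOW_YEARS,
                                   a["enrolled"].month, a["enrolled"].day)
    ]
    return len(done) / len(cohort) if cohort else 0.0

apprentices = [
    {"enrolled": date(1998, 1, 15), "completed": date(2001, 6, 1)},  # completed early
    {"enrolled": date(1998, 3, 1),  "completed": None},              # cancelled
    {"enrolled": date(1998, 7, 10), "completed": date(2005, 1, 1)},  # past the window
    {"enrolled": date(1994, 2, 1),  "completed": date(1998, 5, 1)},  # 1994 cohort
]
print(cohort_completion_rate(apprentices, 1998))  # 1 of 3 completed in the window
```

Computed per program sponsor type (joint versus non-joint), the same calculation yields the comparisons shown in figures 6 and 7.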
While Enrollments Increased, Completion Rates Declined in General for the Period Examined Enrollments in construction apprenticeship programs more than doubled from 1994 to 1998, increasing from 20,670 construction apprentices to 47,487. (See fig. 8.) Meanwhile, completion rates declined from 59 percent for the class of 1994 to 37 percent for the class of 1998. This decline held for both joint and non-joint programs. (See fig. 9.) Completion rates for joint apprentices dropped from nearly 63 percent to 42 percent, and rates for non-joint apprentices dropped from 46 percent to 26 percent. This trend was consistent across different occupations as well, with most experiencing declines. Because Labor does not systematically record the explanations that apprentices offer for canceling out of programs, it is difficult to determine what may lie behind this downward trend. Labor suggested that some apprentices may choose to acquire just enough training to make them marketable in the construction industry in lieu of completing a program and achieving journey status. While we cannot confirm this hypothesis, we did find that apprentices who cancelled did so after receiving over a year of training: joint apprentices cancelled after 92 weeks on average and non-joint apprentices after 85 weeks on average. Other reasons offered included a decline in work ethic, the emphasis placed by high schools on preparing students for college and the corresponding underemphasis on preparation for the trades, and a lack of work in the construction industry. We cannot verify the extent to which unemployment played a role in influencing outcomes, but, according to the Bureau of Labor Statistics, the unemployment rate for construction increased overall from 6.2 percent to 8.4 percent between 2000 and 2004, despite predictions of future worker shortages in construction.
Apprentices in Both Joint and Non-joint Construction Programs Tended to Complete Their Programs Early Those apprentices who completed construction programs within 6 years tended to finish earlier than they were expected to, with apprentices in non-joint programs finishing a bit sooner than their joint counterparts. On average, joint apprentices completed their programs 12 weeks early and non-joint apprentices completed 35 weeks early. This trend was similar across the largest trades in terms of enrollment as shown in table 1 below. This may be due to the willingness of program sponsors to grant apprentices credit for previous work or classroom experience that was directly related to their apprenticeship requirements. Starting Wages and Wages upon Completion in Joint Construction Programs Were Higher on Average than Those for Apprentices in Non-joint Construction Programs Apprentices in joint construction programs were paid higher wages at the start of their apprenticeships and were scheduled to receive higher wages upon completion of their programs. In 2004, the first year in which Labor collected information on starting wages, apprentices in joint programs earned $12.28 per hour while non-joint apprentices earned $9.90 at the start of their apprenticeships. These differences in wages were more pronounced at the journey level, that is, upon completion, with apprentices in joint programs scheduled to earn journey-level wages of $24.19 as compared with $17.85 for those in non-joint programs. As shown in figure 10, joint apprentices generally earned higher wages across the 15 trades with the largest numbers of construction apprentices. There were three trades—carpenter, structural steel worker, and cement mason—for which starting wages were higher for non-joint apprentices. For journey-level wages there was only one trade for which wages were higher for non-joint apprentices—that of millwright. 
Officials we spoke with commonly attributed this difference in wages to the collective bargaining process associated with joint programs. Data from the 10 additional states (outside Labor’s database) whose data we examined showed a similar pattern, with joint apprentices earning higher wages. (See app. II.) Conclusions As a small program with finite resources tasked with an important mission, Labor’s Apprenticeship Office must leverage the tools at its disposal to carry out its oversight, all the more so during a period of tight budgets. Labor’s responsibility for assuring that registered apprenticeship programs meet appropriate standards is no small charge, given the thousands of programs in operation today. For the programs it directly monitors, Labor has not made optimal use of the information it collects to target resources. This failure limits the agency’s ability to direct its oversight activities to areas of significant need, particularly the construction trades, where completion rates are declining. Underscoring this point is the fact that apprenticeship directors in federally-monitored states cannot get easy access to the data in the form of customized reports. Regardless of the differences in outcomes between joint and non-joint programs, without better use of its data Labor is not in a position to assess programs on their individual merits. Given the relatively limited number of staff available for field visits, by not using the program data it has, Labor misses opportunities to deploy its staff more efficiently. With regard to states with council-monitored apprenticeship programs, Labor’s oversight practices do not necessarily ensure that those states’ activities comply with federal standards for oversight, because the Apprenticeship Office has only sporadically assessed their operations.
Moreover, to the extent that the federal office does not provide useful feedback to the states when it does conduct reviews, states may lose opportunities to improve programs under their jurisdiction. Finally, because Labor does not seek much information beyond aggregate numbers from a majority of council-monitored states, policymakers lose an opportunity to gain perspective and insight for aligning workforce training with national needs, specifically for key occupations within construction that are likely to be faced with shortages of skilled workers in the near future. Recommendations We recommend that the Secretary of Labor take steps to (1) better utilize information in Labor’s database, such as indicators of program performance, for management oversight, particularly for apprenticeship programs in occupations with expected future labor shortages; (2) develop a cost-effective strategy for collecting data from council-monitored states; (3) conduct Labor’s reviews of apprenticeship activities in states that regulate their own programs on a regular basis to ensure that state activities are in accord with Labor’s requirements for recognition of apprenticeship programs; and (4) offer substantive feedback to states from its reviews. Agency Comments We provided a draft of this report to the Department of Labor for review and comment. Labor provided written comments on the draft report that are reproduced in appendix V. Labor concurred with our recommendations and has already taken steps to obtain data on apprenticeships from some council-monitored states and to regularly review activities in these states. Further, Labor stated it plans to use the data to better target the performance of the apprenticeship programs that OATELS directly registers and oversees, and to provide improved feedback to states that register and oversee their own apprenticeship programs. 
Unless you publicly announce its contents earlier, we plan no further distribution of this report until 14 days after the date of this letter. At that time, we will send copies of this report to the Secretary of Labor and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at 512-7215 or nilsens@gao.gov if you or your staff have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. Appendix I: Scope and Methodology Our objectives were to determine (1) the extent to which the U.S. Department of Labor monitors the operations and outcomes of registered apprenticeship programs in the states where it has direct oversight, (2) its oversight activities for council-monitored states, and (3) outcomes for construction apprentices in programs sponsored jointly by employers and unions in relation to those sponsored by employers alone. To carry out these objectives, we surveyed OATELS officials in charge of apprenticeship programs in 23 federally-monitored states and state apprenticeship directors in 28 states, including the District of Columbia, where state apprenticeship councils oversee programs. We used two surveys, one for federally-monitored states and one for council-monitored states, to obtain national information on OATELS’ monitoring and oversight activities. We focused only on apprentices in the civilian sector of the economy and did not include military or prison-based programs. We asked questions designed to determine the amount of resources devoted to oversight, the frequency of oversight activities, and the outcomes from these activities.
The surveys were conducted using self-administered electronic questionnaires posted on the World Wide Web. We pretested our surveys with a total of five federally-monitored and council-monitored state officials to determine if the surveys were understandable and if the information was feasible to collect. We then refined the questionnaires as appropriate. We sent e-mail notifications to all federally-monitored and council-monitored state officials on January 5, 2005. We then sent each potential respondent a unique password and username by e-mail on January 13, 2005, to ensure that only members of the target population could participate in the appropriate survey. To encourage respondents to complete the surveys, we sent e-mail messages to prompt each nonrespondent approximately 1½ weeks after the initial e-mail message and a final e-mail reminder on February 7, 2005. We also called nonrespondents to encourage them to complete the survey. We closed the surveys on March 18, 2005. We received responses from all 23 federally-monitored state officials and 27 of 28 council-monitored state officials, including the District of Columbia. (See table 2.) Copies of the surveys are provided in appendices III and IV. To examine the outcomes for apprentices in the construction industry, we analyzed data from Labor’s RAIS database. In calculating completion rates, we constructed five cohorts based on the fiscal year in which apprentices enrolled in their programs: fiscal years 1994, 1995, 1996, 1997, and 1998. We then considered the status of each cohort 6 years after enrollment to determine whether apprentices had completed, cancelled, or remained in training. Our analysis of wage data focused on data collected in fiscal year 2004, the first full year in which Labor collected such information. We assessed the reliability of the RAIS database by reviewing relevant information on the database, interviewing relevant OATELS officials, and conducting our own testing of the database.
This testing included examining the completeness of the data, performing data reliability checks, and assessing the internal controls over the data. Based on this information and our analysis, we determined that these data were sufficiently reliable for the purposes of our report. Because Labor’s RAIS database does not contain data from all states, we supplemented these data with data from 10 council-monitored states that do not report to this database. We selected these states based on the number of apprentices they had and whether their data were in an electronic format that would facilitate extracting and sending the data to us. We submitted a data request to these states asking for selected information on enrollment, completion, and wages for the 10 largest apprenticeship occupations and received data from all of them. We determined that these data were reliable for our purposes. We did not combine these data with those from RAIS; we used them as a means of comparison. To learn more about the oversight of apprenticeship programs and their outcomes, we conducted site visits to four states: New York, California, Texas, and Washington. These states represented both federally-monitored and council-monitored states and had large numbers (from a high of about 52,000 to a low of 6,500) of construction apprentices. On these site visits, we interviewed relevant federal and state officials along with joint and non-joint program sponsors. We also toured facilities in two states where certain apprentices are trained. Throughout the engagement we interviewed relevant Labor officials and experts who have researched apprenticeship programs and reviewed relevant past reports and evaluations of these programs. We conducted our review from August 2004 through July 2005 in accordance with generally accepted government auditing standards.
Appendix II: Completion Rates, Time Taken to Complete, and Wages for Construction Apprentices in Council-Monitored States

California reported no structural steel worker non-joint programs. New York reported no completers for pipe fitter, structural steel worker, painter, and operating engineer non-joint programs. Oregon reported that no non-joint apprenticeship programs are registered in the state.

Appendix III: Responses to Survey of Directors of Apprenticeships in Federally-Monitored States

Q4. During FFY 2004, how many full-time equivalency (FTE) apprenticeship training representative, field, and other nonadministrative staff were employed by the state to monitor and oversee apprenticeship programs in your state?
Q6. In your opinion, would the following updates or modifications improve the Registered Apprenticeship Information System’s (RAIS) usefulness to your state?
Q8. Did your state use WIA Governor’s 15% State Set-Aside funds to support new and/or established apprenticeship programs in FFY 2004?
Q9. Were WIA State Set-Aside funds used to support new and/or established apprenticeship programs in your state in FFY 2004 to do any of the following?
Q11. For which of the following reasons did your state not use WIA Set-Aside Funds to support apprenticeship programs in FFY 2004?
Q13. Were WIA funding sources other than State Set-Aside Funds used in your state to support new and/or established apprenticeship programs in FFY 2004?
Q14. Other than State Set-Aside Funds, which of the following WIA funding sources were used to support new and/or established apprenticeship programs in FFY 2004?
Q16. Did your state establish linkages between the WIA state unit and the state apprenticeship unit in FFY 2004 for any of the following purposes?
Q19. How often does your unit conduct formalized Quality Reviews of individual apprenticeship programs that address on-the-job training, related instruction, and/or program operations in your state?
Q21. Approximately how many Quality Reviews did your unit conduct in FFY 2004? (Click in the box and then enter up to a 4-digit whole number only.)
Q26. How often does your unit conduct formalized Equal Employment Opportunity (EEO) Reviews of individual apprenticeship programs?
Q28. Approximately how many EEO Reviews did your unit conduct in FFY 2004? (Click in the box and then enter up to a 4-digit whole number only.)
Q29. To what extent, if at all, did your state find the FFY 2004 EEO Reviews useful for the following purposes?
Q33. Did your state have procedures or policies for recording complaints filed in FFY 2004 that were elevated to the level of the state or regional OATELS office?
Q34a2. Check if actual, estimate, or do not know or cannot estimate
Q34b1. How many complaints concerned termination in FFY 2004?
Q34b2. Check if actual, estimate, or do not know or cannot estimate
Q34c1. How many complaints concerned discrimination in FFY 2004?
Q34c2. Check if actual, estimate, or do not know or cannot estimate
Q34d1. How many complaints concerned wages in FFY 2004?
Q34d2. Check if actual, estimate, or do not know or cannot estimate
Q34e1. How many complaints concerned related instruction in FFY 2004?
Q34e2. Check if actual, estimate, or do not know or cannot estimate
Q34f1. How many complaints concerned on-the-job training in FFY 2004?
Q34f2. Check if actual, estimate, or do not know or cannot estimate
Q34g1. How many complaints concerned other issues in FFY 2004?

Appendix IV: Responses to Survey of Directors of Apprenticeships in Council-Monitored States

Q5. Do you have a BAT agency in your state?
Q6. During state FY 2004, how many full-time equivalency (FTE) apprenticeship training staff were employed by the BAT agency in your state to monitor and oversee apprenticeship programs in your state?
Q8. How often does OATELS conduct the SAC 29/29 Review (Review of Labor Standards for Registration of Apprenticeship Programs) in your state?
Q10. To what extent did your state find OATELS’ most recent SAC 29/29 Review (Review of Labor Standards for Registration of Apprenticeship Programs) useful for the following purposes in your state?
Q15. How often does OATELS conduct the SAC 29/30 Review (Review of Equal Employment Opportunity in Apprenticeship and Training) in your state?
Q17. To what extent, if at all, did your state find OATELS’ most recent SAC 29/30 Review (Equal Employment Opportunity in Apprenticeship and Training) useful for the following purposes?
Q21. Does your state presently use OATELS’ Registered Apprenticeship Information System (RAIS) to register apprentices and to track apprentice and program information?
Q23. Does your state plan or intend to use RAIS to register apprentices and track apprenticeship and program information in the future?
Q26. Did your state use the WIA Governor’s 15% State Set-Aside funds to support new and/or established apprenticeship programs in state FY 2004?
Q27. Were WIA State Set-Aside funds used to support new and/or established apprenticeship programs in your state in state FY 2004 to do any of the following?
Q29. For which of the following reasons did your state not use WIA Set-Aside Funds to support apprenticeship programs in state FY 2004?
Q31. Were WIA funding sources other than State Set-Aside Funds used in your state to support new and/or established apprenticeship programs in state FY 2004?
Q32. Other than State Set-Aside Funds, which of the following WIA funding sources were used to support new and/or established apprenticeship programs in state FY 2004?
Q34. Did your state establish linkages between WIA and the state apprenticeship unit in state FY 2004 for any of the following purposes?
Q37. Did your state have a mechanism for conducting formalized reviews of apprenticeship programs that address on-the-job training, related instruction, and/or program operations in state FY 2004?
Q38. Which of the following components -- on-the-job training, related instruction, and/or program operations -- were included in these reviews?
Q40. How often does your state conduct formalized reviews of individual apprenticeship programs that address on-the-job training, related instruction, and/or program operations?
Q42. Does your state have a mechanism for conducting formalized Equal Employment Opportunity (EEO) reviews of individual apprenticeship programs?
Q43. How often does your state conduct formalized Equal Employment Opportunity (EEO) reviews of individual apprenticeship programs?
Q45. Did your state have procedures or policies for recording complaints filed in state FY 2004 that were elevated to the level of state apprenticeship agencies?
Q46a1. In your state, how many total complaints were referred to state officials in state FY 2004?
Q46a2. Check if actual, estimate, or do not know or cannot estimate
Q46b1. How many complaints concerned termination in state FY 2004?
Q46b2. Check if actual, estimate, or do not know or cannot estimate
Q46c1. How many complaints concerned discrimination in state FY 2004?
Q46c2. Check if actual, estimate, or do not know or cannot estimate
Q46d1. How many complaints concerned wages in state FY 2004?
Q46d2. Check if actual, estimate, or do not know or cannot estimate
Q46e1. How many complaints concerned related instruction in state FY 2004?
Q46e2. Check if actual, estimate, or do not know or cannot estimate
Q46f1. How many complaints concerned on-the-job training in state FY 2004?
Q46f2. Check if actual, estimate, or do not know or cannot estimate
Q46g1. How many complaints concerned other issues in state FY 2004?

Appendix V: Comments from the Department of Labor

Appendix VI: GAO Contact and Staff Acknowledgments

GAO Contact
Staff Acknowledgments
Patrick DiBattista, Assistant Director; Scott Heacock; Linda W. Stokes; and Kathleen D. White managed all aspects of the assignment. The following individuals made significant contributions to this report: Susan Bernstein, Jessica Botsford, Richard Burkard, Cathy Hurley, and Jean McSween.

Related GAO Products
Workforce Investment Act: Substantial Funds Are Used for Training, but Little Is Known Nationally about Training Outcomes. GAO-05-650. Washington, D.C.: June 2005.
Public Community Colleges and Technical Schools: Most Schools Use Both Credit and Noncredit Programs for Workforce Development. GAO-05-4. Washington, D.C.: October 2004.
Registered Apprenticeships: Labor Could Do More to Expand to Other Occupations. GAO-01-940. Washington, D.C.: September 2001.
Youth Training. PEMD-94-32R. Washington, D.C.: September 1994.
Apprenticeship Training: Administration, Use, and Equal Opportunity. HRD-92-43. Washington, D.C.: March 1992.
Between 2002 and 2012, nearly 850,000 jobs will open in the construction industry; experts predict that there will not be enough skilled workers to fill them. This has heightened concerns about program outcomes and program quality in the nation's apprenticeship system and about the U.S. Department of Labor's oversight of it. GAO assessed (1) the extent to which Labor monitors registered apprenticeship programs in the states where it has direct oversight, (2) its oversight activities in states that do their own monitoring, and (3) the outcomes for construction apprentices in programs sponsored jointly by employers and unions in relation to programs sponsored by employers alone. Labor's monitoring of programs it directly oversees has been limited. We found that in 2004 Labor reviewed only 4 percent of programs in the 23 states where it has direct oversight. According to federal program directors in those states, limited staff constrained their ability to do more reviews. Also, Labor has focused in recent years on registering new programs and recruiting apprentices. Although Labor collects much data about the programs it oversees, it has not used its database to generate information indicative of program performance, such as completion rates, that might allow it to be more efficient in its oversight. Labor does not regularly review council-monitored states or collect data from them that would allow for a national picture of apprenticeships. Labor is responsible for conducting formal reviews of the 27 states and the District of Columbia that established apprenticeship councils to monitor their own apprenticeship programs, but, according to directors in these states, the reviews have been infrequent and not necessarily useful.
While Labor collects only aggregate data on apprentices from these states, we identified 10 states with large numbers of apprentices that were willing and able to provide GAO with data on apprentices by occupation as well as some information on completion rates, completion times, and wages. Data in Labor's apprenticeship database and from council-monitored states show that completion rates and wages for construction apprentices in programs sponsored jointly by employers and unions were higher than those for programs sponsored by employers alone. We found that completion rates for apprentices in programs jointly sponsored by unions and employers averaged 47 percent, compared with 30 percent in programs sponsored solely by employers. Completion rates declined under both types of sponsorship for the period we examined, but Labor, as part of its oversight, does not track reasons for noncompletion, making it difficult to determine what lies behind this trend.
Introduction According to the Office of Personnel Management’s (OPM) Guide to the Central Personnel Data File (CPDF), the CPDF is the federal government’s central personnel automated database that contains statistically accurate demographic information on about 1.9 million federal civilian employees. The CPDF’s primary objective is to provide a readily accessible database for meeting the workforce information needs of the White House, Congress, OPM, other federal agencies, researchers, and the public. A second objective is to relieve agencies that submit personnel data to the CPDF of the need to provide separate data or reports to meet a variety of reporting requirements. Data that agencies submit to the CPDF represent their official workforce statistics. OPM’s Office of Workforce Information (OWI) is responsible for accepting and entering data into the CPDF and processes the data using the Central Personnel Data System. OWI also prepares reports using CPDF data and distributes CPDF data to both OPM and non-OPM users. In order to safeguard the privacy of federal civilian employees as required under the Privacy Act of 1974, OPM must protect CPDF data from unauthorized disclosure. For example, at OPM access to agencies’ CPDF submissions is limited to OPM staff responsible for determining if the data meet OPM’s guidelines for acceptance into the CPDF. When disseminating CPDF data, OPM is to protect the privacy of individuals. For example, OPM is not to provide employees’ names, Social Security numbers, or birth dates to requesters or to make this information available to the federal agencies that are allowed to access the CPDF via OPM’s electronic User Simple and Efficient Retrieval (USER) system to retrieve personnel data to do their work. Background The CPDF contains personnel data for most of the executive branch departments and agencies as well as a few agencies in the legislative branch.
Included are all of the cabinet departments (e.g., State, Treasury, Justice); the independent agencies (e.g., Environmental Protection Agency, Small Business Administration, National Aeronautics and Space Administration); commissions, councils, and boards (e.g., National Council on the Handicapped); and selected legislative branch agencies, such as the Government Printing Office. The CPDF does not contain employee data for the Central Intelligence Agency, Defense Intelligence Agency, the Board of Governors of the Federal Reserve System, National Security Agency, Office of the Vice President, Postal Rate Commission, Tennessee Valley Authority, U.S. Postal Service, or the White House Office. The CPDF also excludes from coverage non-U.S. citizens working for federal agencies in foreign countries; most nonappropriated fund personnel; commissioned officers in the Department of Commerce, Department of Health and Human Services (HHS), and the Environmental Protection Agency; and all employees of the judicial branch. The History of the CPDF The Civil Service Commission, OPM’s predecessor, decided it would install a type of central personnel database—the CPDF—in 1972 to provide a source that was capable of (1) satisfying minimum essential statistical data needs for central management agencies and the public; (2) meeting reporting requirements, such as periodic surveys of affirmative employment programs and semiannual turnover reports; and (3) alleviating the need for agencies to individually report similar information separately to requesters. The CPDF also expanded and replaced the Federal Personnel Statistics Program Sample File, which was established in 1962. The File contained a continuous work history on each federal employee whose Social Security account number ended in the digit “5,” a population that constituted a 10-percent sample of the federal workforce. How the CPDF Operates OPM builds six files from agency-submitted data. 
These are the longitudinal history file (a record of personnel actions arranged by date within Social Security number); the organizational component file (a listing of the codes used by each agency to identify its various work units, e.g., regions, divisions, branches); the personnel office identifier file (a listing of the mailing addresses and telephone numbers of the personnel offices that report to the CPDF); the name file (a cross-reference listing of names, Social Security numbers, accession dates, and applicable separation dates of employees reported to the CPDF); and the status and dynamics files. Of the six, this report focuses on the status and dynamics files. They are the source of the demographic information used by OWI to write reports and to respond to data requests by users of CPDF data. The status file consists of data elements describing each employee as of the date of the file. Agencies are required to submit these files on a quarterly basis, with the submissions due at OPM no later than the 22nd of the month following the end of the quarter (e.g., input for the quarter ending December 31 must be submitted by January 22). All of the employees covered by the CPDF are to be included in each file. The data elements include information on the type of work; the employee’s pay; and personal information, such as gender and birth date. The dynamics file consists of data elements describing each personnel action taken by an agency during the period covered by the file. Personnel actions are the official records of employees’ careers, such as hires, promotions, reassignments, pay changes, resignations, and retirements. The file includes information about the action taken, the agency/subelement, the position, pay, and the individual employee. The normal reporting period is a calendar month but may end as of the last full biweekly pay period of the month.
Submissions are due at OPM as soon as possible following completion of agency processing but no later than 22 days following the end of a monthly reporting period. As of February 1998, the CPDF consisted of 95 separate data elements. Of this number, 68 are to be reported by agencies in their monthly and quarterly dynamics and status file submissions. OPM relies on agencies to ensure that the data they submit are timely, accurate, complete, and edited in accordance with OPM standards. OPM provides agencies with guidance, the Guide to the CPDF, which says agencies are to test the data they provide to the CPDF to ensure that the data are accurate and complete. To help agencies ensure the quality of their data, OPM provides them with the CPDF Edit Manual, which prescribes the data values to which agencies’ data are to conform before they are submitted. To test the values of their data, agencies are to use OPM’s CPDF edits. These edits are computer instructions that are to check the validity of individual data elements as well as the proper relationship of values among associated data elements. For example, the edit for the sex data element checks that the character used to define the data element is either “M” for male or “F” for female; the edit identifies other characters as errors. OPM expects agencies to incorporate these CPDF edits into their internal personnel data systems. These edits constitute the minimum level of quality control OPM expects the agencies to employ. Agencies have the option of incorporating additional quality controls, such as testing a sample of the data for accuracy before submitting it, in addition to applying the CPDF edits. The CPDF edits cannot detect all types of errors. For example, an edit for the sex data element would not be able to detect if the character “M” was incorrectly used to identify a female employee. 
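The validity and relationship edits described above can be sketched as simple checks on a record. In the sketch below, only the "M"/"F" rule for the sex element comes from the text; the birth-date and pay-plan rules are hypothetical illustrations of, respectively, a validity edit on a single element and a relationship edit across elements, not actual CPDF edit specifications.

```python
# A minimal sketch of CPDF-style edits. Only the "M"/"F" sex rule is
# taken from the text above; the other rules are hypothetical examples.
def edit_record(record):
    """Return a list of edit-failure messages for one employee record."""
    errors = []
    # Validity edit: the sex element must be "M" or "F"; any other
    # character is flagged as an error.
    if record.get("sex") not in ("M", "F"):
        errors.append("sex: value must be 'M' or 'F'")
    # Validity edit (hypothetical): birth date must be 8 digits (YYYYMMDD).
    bd = record.get("birth_date", "")
    if not (len(bd) == 8 and bd.isdigit()):
        errors.append("birth_date: expected 8 digits (YYYYMMDD)")
    # Relationship edit (hypothetical): a GS pay plan implies a grade of 1-15.
    if record.get("pay_plan") == "GS" and not 1 <= record.get("grade", 0) <= 15:
        errors.append("grade: GS pay plan requires grade 1-15")
    return errors
```

As the text notes, edits of this kind check only form and internal consistency; they cannot detect a value that is valid but wrong, such as "M" recorded for a female employee.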
According to OWI officials, although they provide agencies with the edits, errors still occur in submissions, which OPM strives to identify through OWI’s quality review process. The officials also said that errors in pay-related data elements often occur at the beginning of the year because agencies make their beginning-of-the-year submissions before they install edits that reflect annual cost-of-living pay increases. The Guide to the CPDF also informs agencies about which data elements should be included in their CPDF data submissions and about the frequency and timing of the submissions. As mentioned earlier, frequency and timing requirements differ for the status and dynamics files. After agencies submit the personnel data, OPM puts the submissions through an acceptance process before the data can be entered into the CPDF. This process includes putting the data through the same CPDF edits the agencies were to use before submitting the data, as well as other analyses. OWI manages the process. Its staff are to provide agencies with feedback on their submissions, requesting, as needed, corrections to submissions that fail edit checks or other analyses and preventing data that are not within the acceptable range of data values from being entered into the CPDF. OWI is to make the final decision about what data are entered into the CPDF. At the time of our review, the Central Personnel Data System was operated by OPM’s Office of Information Technology (OIT) for OWI. The CPDF Quality Control team that monitored agencies’ data submissions was part of OIT. However, operation of the System was transferred to OPM’s Retirement and Insurance Service in 1997, and the Quality Control team was reassigned to OWI. OPM Has Authority to Request Agency Data for the CPDF OPM may require agencies under 5 C.F.R.
section 7.2 to report “in such manner and at such times as OPM may prescribe, such personnel information as it may request.” On the basis of this authority, OPM is able to direct agencies to submit selected personnel data to the CPDF. However, although the OPM Director can request data, she cannot ensure that agencies provide accurate information in a timely manner. The responsibility for providing timely, accurate information remains with the head of the agency providing the information. OPM officials rely on federal agencies to voluntarily comply with CPDF guidelines and correct problem submissions. Objectives, Scope, and Methodology For this review, we had three objectives: (1) determine the extent to which selected CPDF data elements are accurate, including the data elements used by OPM’s Office of the Actuaries for estimating the government’s liability for future payments of federal retirement programs; (2) determine whether selected users of CPDF data believed CPDF products met their needs, including whether the products were current, accurate, and complete and whether the cautions OPM provided to them on the limitations associated with using the data were sufficient for them to present the CPDF data correctly; and (3) determine whether OPM has documented changes to the System and verified the System’s acceptance of those changes, as recommended in applicable federal guidance, and whether the System would implement CPDF edits as intended. Objective 1 To determine the extent to which selected CPDF data elements are accurate, including the data elements used by OPM’s Office of the Actuaries for estimating the government’s liability for future payments of federal retirement programs, we (1) designed and sent questionnaires to a random sample of federal employees to have them verify some of their CPDF data and (2) compared CPDF data with information in randomly selected official personnel folders and in other agency records at selected personnel offices. 
Table 1.1 presents a list of the CPDF data elements we used in our employee questionnaire and comparison of CPDF data with information in official personnel folders. To measure the accuracy of CPDF data, we used two approaches: a questionnaire (see app. V) and a comparison of data in official personnel folders and agency records with CPDF data. We sent the questionnaire to a sample of employees, because OPM studies show that official personnel files or agency records may be in error. We compared the results of both approaches to develop our findings. We also reviewed past OPM accuracy measurements, examined CPDF data for missing and unusable information, and interviewed an official of OPM’s Office of the Actuaries to discuss the accuracy of the data elements the Office uses for estimating the government’s liability for future retirement payments. These steps are more fully described in the following sections. Questionnaire As part of our evaluation of the accuracy of CPDF data, we selected a stratified random sample of 565 federal employees and attempted to send each a questionnaire containing 20 data elements about themselves obtained from the CPDF (see app. V for a copy of our questionnaire). The data elements that we included in the questionnaire were among those we most frequently use to do our work, those OPM analysts use most frequently in preparing CPDF reports, and those used by OPM’s Office of the Actuaries to estimate the government’s liability for future payments of federal retirement programs. We selected those data elements that we believed employees would be able to verify. We included in each individual’s questionnaire data elements from the September 1996 CPDF status file about that individual.
The elements consisted of (1) Social Security number, (2) employing agency/subelement, (3) adjusted basic pay (including locality pay), (4) month and year of birth, (5) duty station, (6) pay plan, (7) grade, (8) handicap, (9) occupation, (10) race or national origin, (11) service computation date, (12) sex, (13) veterans preference, (14) veterans status, (15) work schedule, (16) education level, (17) rating of record, (18) retirement plan, (19) annuitant indicator, and (20) employee name. We asked the respondents to verify the accuracy of each data element, indicating whether it was correct or incorrect as of September 30, 1996. When a respondent indicated that a data element was incorrect, we asked the respondent to enter the correct information. We pretested the questionnaire to assure ourselves that respondents could interpret the questions correctly and could provide the information requested. We modified question wording and questionnaire format on the basis of what we learned from five pretests. The random sample of 565 was drawn from 7 strata to represent a study population of 1,905,787 non-Federal Bureau of Investigation (FBI) federal employees whose names were contained in the CPDF database as of September 30, 1996. Random samples of 30 selections each were drawn from 6 smaller strata, each of which comprised a single personnel office. These six personnel offices were among the eight largest personnel offices in the federal government. These offices were the Social Security Administration (SSA), Baltimore, MD; Department of the Army, Fort Benning, GA; U.S. Customs Service, Washington, D.C.; National Institutes of Health, Bethesda, MD; Department of State, Washington, D.C.; and Department of the Navy, Pensacola, FL. We selected these personnel offices because of their size and because our work at the offices could then be representative of a relatively large portion of records contained in the CPDF. 
The 6 personnel offices were among 1,425 in the government and served over 8 percent of the employees whose data were contained in the CPDF as of September 30, 1996. We used the selections from these six personnel offices for both the employee questionnaire and a review of official personnel folders. The remainder of the sample—385 selections—was randomly drawn to represent the remaining stratum of 1,746,592 employees from all other personnel offices. The total sample of 565 was designed to ensure that it approximately mirrored the population distribution with respect to type of appointment (career or noncareer), work schedule (full-time or non-full-time), type of service (competitive or excepted), and location (stationed in or outside the United States). Because the CPDF does not contain mailing addresses for employees, we mailed most of our questionnaires to personnel officers who were identified in the CPDF as serving the employees in our sample. In all, 562 of the 565 sampled employees were covered by 280 personnel officers. We were not able to identify the personnel officers for three of the sampled employees. We asked the personnel officers to whom we sent questionnaires to forward them to the sampled employees. In addition, we asked them to provide us with the direct mailing address of each sampled employee so that we would be able to mail follow-up questionnaires directly to sampled employees who did not return a questionnaire to us within 45 days. We also asked the personnel officers to furnish us with reasons why any of the questionnaires could not be forwarded to the sampled employees. After an initial and a follow-up mailing, we received 407 usable questionnaires out of 565, for a 72 percent response rate. Table 1.2 presents a breakdown of the number of sampled federal employees responding to our questionnaire as well as the various reasons why some sampled employees did not respond.
We edited the questionnaires received from respondents to identify data elements marked as incorrect. In cases where a respondent indicated that a data element from the CPDF was incorrect, the editor then made an effort to determine if the correction entered onto the questionnaire by the respondent was logical. For example, a number of respondents indicated that the annual pay amount shown on the questionnaire was incorrect. However, in researching the “correct” amount entered by the respondent, it was determined that the amount entered was his or her current annual pay, not the annual pay as of September 30, 1996, as indicated in the question. In these cases, the response was changed from incorrect to correct by the editor. The 407 returned questionnaires from the 7 strata were weighted to represent the population of 1,905,787 federal employees for all results presented in this report. Sampling errors have been calculated to take into account the different weights assigned to each stratum. Unless otherwise noted, the 95 percent confidence intervals around all reported results are plus or minus 5 percentage points or less. In addition to sampling errors, the practical difficulties of administering any questionnaire may introduce other types of errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted by the questionnaire respondents could introduce unwanted variability in the questionnaire’s results. We took steps in the development of the questionnaire, the data collection, and the data editing and analysis to minimize nonsampling errors. Comparison of Official Personnel Folders and Agency Records With CPDF Data We also compared data contained in official personnel folders and other agency records with data in the CPDF for the same period at the six selected personnel offices. For each of the 6 personnel offices we selected, we chose 30 employees at random from the September 1996 CPDF status file. 
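The weighting arithmetic described above can be sketched as follows. The stratum sizes, respondent counts, and error counts below are hypothetical (only the 1,746,592 figure for the "all other offices" stratum comes from this report), and the interval uses a simplified normal approximation rather than the exact stratified variance formulas the report's sampling errors would rest on.

```python
import math

# Hypothetical strata: (population size N_h, respondents n_h, respondents
# reporting an error in some data element e_h). Illustrative numbers only.
strata = [
    (1_746_592, 280, 14),  # "all other personnel offices" stratum
    (30_000, 21, 1),       # a single large personnel office
    (25_000, 20, 2),       # another single-office stratum
]

N = sum(N_h for N_h, _, _ in strata)

# Weighted estimate: each stratum's sample error rate, weighted by the
# stratum's share of the study population.
p_hat = sum((N_h / N) * (e_h / n_h) for N_h, n_h, e_h in strata)

# Stratified variance (finite-population corrections omitted for simplicity),
# then a 95 percent normal-approximation confidence interval.
var = sum(((N_h / N) ** 2) * (e_h / n_h) * (1 - e_h / n_h) / n_h
          for N_h, n_h, e_h in strata)
half_width = 1.96 * math.sqrt(var)

print(f"estimated error rate: {p_hat:.3f} +/- {half_width:.3f}")
```

Because the largest stratum dominates the weights, its sample rate drives the estimate; the single-office strata contribute little to the governmentwide figure even though they were sampled at much higher rates.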
The employees were those who were reported by the CPDF as being served by the six respective personnel offices. At each of the 6 personnel offices, we asked for official personnel folders for the 30 employees. We also asked for information from the personnel offices’ automated files on ratings, handicap, and race or national origin because such information is not necessarily contained in personnel folders. We then selected 20 employees at random from those whose official personnel folders were available. We over-sampled by 10 employees in our initial sample for each personnel office because we anticipated that some folders would be unavailable because of employee departures or other reasons. At SSA, we reviewed official personnel folders and other agency records for only 13 employees because the official personnel folders for 17 of the 30 employees we chose at random were located in offices throughout the country and not in a central location as we initially expected. In total, we reviewed folders and other agency records for 113 employees for the 6 personnel offices. For each of the 113 employees in our sample, we obtained information from the September 1996 CPDF status and dynamics files. The information we obtained consisted of the 20 data elements we used for our questionnaire and the data elements that we most frequently use to do our work, including key status and dynamics data elements. The eight data elements that were in addition to the data elements used for the questionnaire were current appointment authority, effective date of action, legal authority code, nature of action code, pay rate determinant, personnel office identifier, position occupied, and tenure. We reviewed a total of 28 data elements: 23 data elements common to both the status and dynamics files, 1 element found only in the status file, 3 elements found only in the dynamics file, and employee name (see table 1.1 for the CPDF data elements we reviewed and their file locations). 
For each employee, we compared the CPDF data with relevant documents, such as Standard Forms 50 (notification of personnel action) and employment applications, in official personnel folders. We also compared the CPDF data with automated files on those employees’ ratings, handicap, and race or national origin. We discussed any mismatches we found with personnel officials in an attempt to determine how differences can occur between the CPDF and agency documentation. Past OPM Accuracy Measurements OPM conducts periodic measurements of CPDF accuracy by comparing data in the official personnel folders of separated employees with data in the CPDF. We reviewed the six measurements of CPDF accuracy OPM did from April 1984 to July 1996 and compared the results of our evaluation of CPDF accuracy with the results of OPM’s last two measurements, which were issued in January 1992 and July 1996. Data Used by OPM’s Office of the Actuaries To determine if the CPDF data used by OPM’s Office of the Actuaries to estimate the government’s liability for future retirement payments are sufficiently accurate for use by the Office, we first met with the actuary responsible for calculating this liability to determine the CPDF data elements used in the estimate. After our analysis of the employee questionnaire and our comparison of personnel folders and other agency records to CPDF data, we again interviewed the actuary to discuss the results of our two approaches and the impact of errors on the estimate. Additional Methodological Characteristics The results of our employee questionnaire are generalizable to the universe of 1,905,787 employees included in the CPDF’s September 1996 status file. Table 2.1 shows the generalized results as a percentage of records in the September 1996 status file. 
The results of our comparison of employees’ official personnel folders and other agency records to CPDF data are not generalizable to the CPDF as a whole, although they may be indicative of the personnel offices at which we performed our work. The CPDF data elements measured for accuracy generally were among those identified by OPM as key to the accuracy of its recurring reports. We cannot determine from the work we did the accuracy of data elements we did not review. We did not independently verify educational levels or any other responses reported by employees. Our accuracy findings are for CPDF data in the September 30, 1996, status file and the fiscal year 1996 dynamics file. The accuracy might differ for previous and future CPDF files, especially when agency procedures or information processing technology change. Our accuracy measurement was not designed to evaluate the reliability of CPDF data from individual agencies or specific subsets of employees, such as those on leave without pay. OPM’s reports on the percentage of data elements in agency submissions that do not pass standard CPDF edits show considerable variation across agencies. Objective 2 To determine whether selected users of CPDF data believed CPDF products met their needs, including whether the products were current, accurate, and complete and whether the cautions OPM provided to them on the limitations associated with using the data were sufficient for them to present the CPDF data correctly, we designed, with advice from OPM, a CPDF customer questionnaire (see app. VI for a copy of our questionnaire). We mailed the questionnaires to 247 individuals identified by OPM’s OWI as representing all the requesters of CPDF products in fiscal year 1996 who obtained data directly from OPM.
We mailed the customer questionnaires in May 1997 to the return addresses on letters in OWI’s fiscal year 1996 correspondence files that had requested CPDF products and to recipients of recurring CPDF-based reports in 1996. We followed up our initial mailing with a second one in June and a third one in July. We did not include in our analysis any questionnaires received after August 6, 1997. After August 6, 1997, we made follow-up telephone calls to all nonrespondents and determined that 40 of the original 247 individuals we sent the questionnaire to were either not CPDF users or had left their organizations. Of the remaining 207 individuals who were CPDF users, 140 (or 68 percent) responded to the mail questionnaire, and an additional 21 responded to an abbreviated version of the mail questionnaire we used in follow-up telephone calls to nonrespondents. The combined response rate for the mail-out questionnaire and the telephone follow-up was 78 percent. After we received the questionnaires from the respondents, we edited them for completeness and consistency. All of the data from the questionnaires were double-keyed and verified during data entry. In addition, a random sample of these data was verified back to the source questionnaires. Additional Methodological Characteristics The results of our customer questionnaire are not generalizable to the universe of users of CPDF data and products for 1996 because we could not define the universe of users necessary to draw a representative sample. The distribution of CPDF products, such as recurring reports, is not controlled. These products are available through various outlets, such as libraries, that do not track customers. Therefore, we relied on OWI to identify those customers who corresponded with it in 1996 to request CPDF data and sent our questionnaire to this defined but nonrepresentative subset of the 1996 universe of CPDF users. 
Objective 3 To determine whether OPM has documented changes to the Central Personnel Data System and verified the System’s acceptance of those changes, as recommended in applicable federal guidance, and whether the System would implement CPDF edits as intended, we first reviewed federal guidance on managing automated information systems. To determine the extent to which OPM’s OIT followed the guidance in managing the development of the System, we conducted interviews at OIT, which was responsible for operating the System, and OWI, which is the System’s owner, about their basis for determining the System’s reliability. From these officials, we requested available documentation relating to modifications and upgrades of software used by the System to process CPDF data and documentation relating to verification that these modifications and upgrades worked as planned. We also reviewed available documentation on OPM’s current Information Technology Strategy to determine whether it includes procedures for managing the System in the future. To determine whether the System would implement as intended the CPDF edits OPM uses to screen the 68 data elements reported by agencies, we reviewed 18 of the 63 validity edits and all 700 of the relational edits the System uses to screen agencies’ data submissions. Additional Methodological Characteristics We judgmentally selected only the 18 validity edits OWI uses to screen the data elements it considers critical; therefore, the findings of our review of these 18 edits cannot be generalized to all 63 validity edits. Because we did not actually put test data through the System or otherwise test the reliability of the System’s hardware and software under operating conditions, we cannot verify the reliability of the System. We did not assess the likelihood that the CPDF would be Year 2000 compliant by December 31, 1999.
We conducted our work between November 1996 and June 1998, in accordance with generally accepted government auditing standards. The employee CPDF data verification questionnaire and CPDF customer survey were administered between May 1997 and September 1997; thus, the data are as of those dates. We requested comments on a draft of this report from the Director of OPM. OPM provided written comments on a draft of this report (see app. VII) that are discussed at the end of chapters 2, 3, and 4. CPDF Data Reviewed Appear to Be Mostly Accurate in the Aggregate The accuracy of the data the CPDF contains depends on the accuracy of the data that agencies submit. Errors in those data can occur at various stages of the personnel process, such as when agency personnel clerks enter data for newly hired employees or when they code information on personnel actions (e.g., performance appraisals). OPM does not have an official accuracy standard for agencies’ submissions. On a periodic basis, however, OPM draws a governmentwide sample of CPDF records and measures CPDF data accuracy by comparing selected data in former federal employees’ official personnel folders to data in the CPDF for the same period. OPM generally makes the results of its measurements of CPDF accuracy available to OPM users of CPDF data but not to non-OPM users. In spite of the important uses of CPDF data, no independent evaluation of the accuracy of the data has been done. Our work showed that most of the CPDF data elements we reviewed were 99 percent or more accurate on a governmentwide basis. The rating of record and education level data elements had the highest error rates: about 5 and 16 percent for rating of record, and about 27 and 23 percent for education level, based on our questionnaire and comparison, respectively.
Our overall findings are broadly similar to what OPM found when it measured historical accuracy in 1996 by comparing 1994 data in former employees’ official personnel folders with the data in the CPDF. We shared the results of our work with the actuary responsible for calculating the federal government’s liability for future retirement payments to retired federal employees and their survivors, and he said that the CPDF data elements were sufficiently accurate for making this estimate. OPM Measures Historical Accuracy of CPDF, but Does Not Report Results of Its Accuracy Measurements to Non-OPM Users OPM periodically measures the accuracy of selected data that are in the CPDF. As we said earlier, OPM relies on agency data passing CPDF edits to eliminate errors that would result in inaccurate data being entered in the CPDF. For example, the edits are to identify a salary amount that is too high for a particular pay plan or grade. However, the edits are not able to identify an error in salary that is within the range of that pay plan or grade. Thus, inaccurate data can get into the CPDF. To measure the historical accuracy of CPDF data, OPM periodically compares certain data found in a sample of former federal employees’ official personnel folders to data in the CPDF for the same period. From April 1984 to July 1996, OPM conducted six such measurements. OPM analysts used a sample of former employees and compared certain data elements in their official personnel folders to information in the CPDF’s status and dynamics files. For example, the latest measurement, which was released in 1996 for fiscal year 1994 data, used a sample of 135 former employees and compared 35 status file and 40 dynamics file data elements to information in the official personnel folders. An error was defined as a value found in the CPDF that was not the same as that found in the employee’s official personnel folder.
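OPM's error definition (a CPDF value that does not match the value in the employee's official personnel folder) lends itself to a simple element-by-element comparison, sketched below with hypothetical records and field names.

```python
# Sketch of an element-by-element accuracy comparison between CPDF records
# and official-personnel-folder records, keyed by an employee identifier.
# The sample records and field names are hypothetical.

cpdf = {
    "A1": {"education_level": "13", "rating_of_record": "3", "pay_plan": "GS"},
    "A2": {"education_level": "04", "rating_of_record": "5", "pay_plan": "GS"},
}
folders = {
    "A1": {"education_level": "17", "rating_of_record": "3", "pay_plan": "GS"},
    "A2": {"education_level": "04", "rating_of_record": "4", "pay_plan": "GS"},
}

def error_rates(cpdf, folders):
    """For each data element, the share of records whose CPDF value differs
    from the folder value -- OPM's definition of an error."""
    elements = next(iter(cpdf.values())).keys()
    rates = {}
    for elem in elements:
        mismatches = sum(1 for emp in cpdf if cpdf[emp][elem] != folders[emp][elem])
        rates[elem] = mismatches / len(cpdf)
    return rates

print(error_rates(cpdf, folders))
# education_level and rating_of_record each mismatch in 1 of 2 records (0.5);
# pay_plan matches in both (0.0).
```

A comparison of this kind detects mismatches but, as noted later in the chapter, it cannot tell which source is wrong: the folder value itself may be stale, which is why our review also researched other agency records.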
OPM officials told us—and OPM’s accuracy surveys state—that the surveys were designed to measure the accuracy of governmentwide data only and not the accuracy of data from individual agencies. OPM generally makes the results of its measurements of CPDF data accuracy available to CPDF data users within OPM but not to non-OPM users. In five of the historical accuracy measurements, OPM found that most CPDF data were generally accurate, and in most cases the selected data elements matched the corresponding official personnel folder entries 99 percent or more of the time. However, OPM did not make that statement for its December 1990 measurement of 1988 CPDF data. Instead, it advised OPM users of CPDF data to review the results of the accuracy measurement and determine for themselves whether the data were sufficiently accurate for their use. OPM officials said that OPM does not routinely inform non-OPM users of the results of its measurements of historical accuracy. OPM has not promulgated a standard for the accuracy of CPDF data. To our knowledge, no federal agency has promulgated accuracy standards that are generally applicable to federal databases. In general, the level of accuracy for data must be balanced against what the data are to be used for and the cost of obtaining a greater level of accuracy. Most CPDF Data Tested Were Accurate and Agreed With Agencies’ Personnel Records To measure the accuracy of the CPDF, we (1) sent a questionnaire to a random sample of federal employees to gather information about the accuracy of 20 of the 68 CPDF data elements reported by agencies and (2) compared data for 28 data elements in the CPDF with the data contained in the official personnel folders and other agency records for 113 randomly selected employees at 6 of the largest federal personnel offices. We found that most CPDF data elements we tested were accurate and agreed with information in employees’ official personnel folders and other agency personnel records. 
Although our methodology differed from the one OPM uses in its measurements of historical accuracy, the results of our review were broadly similar to OPM’s results. Questionnaire Results and Comparison of Selected CPDF Data to Employee Records Showed Most Data Were Accurate and Agreed With Agencies’ Personnel Records To determine the accuracy of 20 selected CPDF data elements, we sent a questionnaire to a random sample of federal employees that was representative of federal employees governmentwide (see ch. 1 for a description of our sampling methodology). We asked them to review information about themselves that we obtained from the September 1996 CPDF. The data elements we asked about were those with which we believed employees would be most familiar, including employee name, birth date, and Social Security number. The results of our questionnaire showed that 14 of the 20 data elements, or 70 percent, matched data in the CPDF in 99 percent or more of the cases (see table 2.1). There were no inaccuracies for seven of these data elements, and the other seven data elements had error rates of less than 1 percent. The remaining six data elements had error rates greater than 1 percent (see table 2.1). The two most error-prone data elements were education level and rating of record. Education level had a 26.7 percent error rate and rating of record had a 4.7 percent error rate. The education level data element is intended to reflect the highest education level that a federal employee achieved. The rating of record data element indicates an employee’s most recent rating or performance appraisal. The results of our employee questionnaire are generalizable to the universe of 1,905,787 employees included in the CPDF’s September 1996 status file. Table 2.1 shows the generalized results as a percentage of records in the September 1996 status file.
We also compared data in employees’ personnel folders or other agency records with data in the CPDF for 113 randomly selected employees at 6 of the largest federal personnel offices (see the Objectives, Scope, and Methodology section in ch. 1 for a discussion of our selection process). For this comparison, we reviewed a total of 28 data elements: 23 data elements common to both the status and dynamics files, 1 element found only in the status file, 3 elements found only in the dynamics file, and the employee name data element found in the CPDF name file. (See table 1.1 in the Objectives, Scope, and Methodology section in ch. 1 for the CPDF data elements we reviewed and their file locations.) In our review of official personnel folders and agency records, we found no inconsistencies among the 23 data elements we included in our comparison that were common to both the status and dynamics files. For example, if the status file data element showed an erroneous education level for a given employee, the dynamics file element showed the same erroneous code. Our review of official personnel folders showed that personnel actions reflected in the CPDF dynamics file appeared to be generally complete. There were no inaccuracies for 12 of the data elements. For another five data elements, our comparison showed error rates of less than 1 percent. The remaining nine data elements had error rates greater than 1 percent. For the legal authority code data element, we could not determine the error rate because some employees had no transactions for fiscal year 1996. Table 2.2 shows the results of our comparison. Concerning the most error-prone data elements, our review of employees’ official personnel folders and agency records showed results similar to those of our questionnaire—education level and rating of record were the most error-prone data elements. (See app. III for a more detailed discussion of the data elements that contained the highest rates of error.) 
However, the results of our comparison between the data in the official personnel folders and the CPDF differ somewhat from those of our questionnaire. For example, the results of the questionnaire showed education level to have a 26.7 percent error rate and rating of record to have a 4.7 percent error rate. The results of the comparison showed education level to have a 23.0 percent error rate and rating of record to have a 15.9 percent error rate. Although we did not try to determine the reason for these differences, two reasons appear most likely. First, the results of the questionnaire are generalizable governmentwide, but the results of the comparison are not, because the comparison sample was not designed to represent employees governmentwide. Second, the information in the employees’ official personnel folders might not be current. In particular, employees may not have informed their personnel offices of additional education completed, so this information may not be in the official personnel folder. Thus, the information in the official personnel folder might match the CPDF, but neither would be current. The Results of Our Review Were Broadly Consistent With Those of OPM’s Historical Accuracy Measurements In its measurements of historical accuracy of CPDF data, OPM has reported results broadly consistent with ours. That is, OPM has found that most data elements it reviewed were 99 percent or more accurate but has found high error rates for rating of record and education level. Table 2.3 groups by percent of errors the error rates identified by our two methods for measuring CPDF status and dynamics file data accuracy and OPM’s measurement of the historical accuracy of fiscal year 1994 CPDF status file data. The table shows that between the two methods we used to measure CPDF data accuracy (although variation existed in the accuracy of some data elements) at least 63 percent of CPDF data elements were 99 percent or more accurate.
Although OPM’s and our results were broadly consistent, there are important differences between OPM’s methodology and ours. First, we sent our questionnaire to a generalizable sample of current federal employees and reviewed a random sample of official personnel files of current federal employees. In contrast, OPM reviewed centrally located records of former employees. Second, OPM’s methodology in comparing CPDF data with those in employees’ official personnel folders differed from ours. We often relied on agency records (e.g., records maintained separately from official personnel folders for race, national origin, and handicap) in cases where data were not in official personnel folders, but OPM generally limited its review to documents that were in personnel folders. Third, the way we determined errors differed in part from OPM’s. OPM did not determine if the official personnel folder data element itself was correct, but we did so by researching available agency personnel records. The Accuracy of CPDF Data Varied by Agency Our review of employees’ official personnel folders and other agency records was intended to evaluate CPDF accuracy in general, not to compare CPDF data accuracy among individual agencies. Such a review would have required a much larger sample to represent each agency. But during our review, we did find circumstances that demonstrated how accuracy varied by agency and why. For example, although the five other agencies we reviewed were routinely providing information on employee performance ratings to the CPDF, SSA had not updated rating information in the CPDF for over 2 years at the time of our review. SSA officials told us this lapse occurred because temporary procedures that had been established to correct SSA’s difficulty in providing appraisal data to HHS proved to be cumbersome; as a result, SSA did not provide its appraisal data to HHS for HHS to submit the data to the CPDF for 1995. 
According to SSA officials, SSA continued to capture these data in its human resource management information system, but HHS did not ask for the data, and SSA was not aware that it was to report them to the CPDF. The importance of a data element to an agency can affect the level of effort that the agency gives to ensuring the data element’s accuracy. For example, some personnelists in the offices we visited said that the accuracy of education level information was “of little concern” to them. In contrast, two other personnel offices reported taking steps to improve the accuracy of this information. Officials in one of these offices (the Pensacola Naval Air Station) told us they had updated the education level information on their employees as part of an overall records review. An official in the other office (State) told us that promotions for certain of their employees are based, in part, on education levels. Therefore, the official said that employees are asked to review such information maintained by the agency and report needed changes. We also observed in previous work that agency-specific CPDF data could be inaccurate. For example, in 1997 the House Committee on International Relations asked us to examine a discrepancy between the number (17) of Schedule C political appointees reported to Congress by the Agency for International Development (AID) and the number (0) that appeared in the CPDF for the period January 19, 1993, through November 14, 1995. Through our analysis of the CPDF data, we determined that AID used the wrong legal authority when coding the appointing authority for these individuals. As a result, the information in the CPDF (0) did not correctly identify any of the 17 individuals as political appointees. 
Inaccuracies in specific agencies’ CPDF data, such as SSA not submitting current rating of record data for 2 years and AID using the wrong legal authority code for Schedule C political appointees, can distort users’ analyses, findings, and conclusions and result in OPM’s reporting on federal agencies that misinforms policymakers and the public. These examples also show that errors in agency-specific data may go unnoticed for several years and that the accuracy of a particular data element can vary from year to year for a particular agency. OPM officials told us that they believe that the periodic accuracy measurements that OPM does are a good indicator of problematic data elements governmentwide. For example, OPM’s measurement of historical accuracy for fiscal year 1994 discusses why errors occurred and gives error rates for status and dynamics file data elements governmentwide. However, as we said earlier, OPM does not provide the results of these measurements to non-OPM users of CPDF data. Therefore, non-OPM users of CPDF data are most likely not aware of the findings of OPM’s accuracy measurements. In addition, OPM officials said that their periodic accuracy measurements are not useful for identifying errors in CPDF data elements at individual agencies. OPM officials said they sometimes become aware of agency-specific inaccuracies in the CPDF when non-OPM users of the data, such as us or the agencies affected, contact OPM about the inaccuracies. For example, OPM said that after it discovered that AID Schedule C appointees were not identified in the CPDF, it began working with AID to improve the future reporting on political appointees. Awareness of inaccuracies in specific data elements and variation in data accuracy among agencies is important because OPM and non-OPM users rely on CPDF data to monitor and report on individual agencies’ demographics, compliance with government policies, or other characteristics. 
For example:

- OPM's Office of Merit Systems Oversight and Effectiveness uses CPDF data to monitor and report on individual agencies' compliance with selected Merit Systems Principles set out in title 5 of the United States Code;
- the National Performance Review used CPDF data in a 1993 report on Transforming Organizational Structures to compare the numbers of federal personnel by occupation;
- the Equal Employment Opportunity Commission used CPDF data in its fiscal year 1991 report to the President and Congress on affirmative employment programs for minorities and women and for hiring, placement, and advancement of people with disabilities in the federal government; and
- we use the data in some of our reports to Congress.

According to OPM officials, OPM's current approach for measuring CPDF data accuracy is not designed to include representative samples for individual agencies, and such a sample would be significantly larger than the 135 official personnel folders OPM examined for its latest measurement, covering fiscal year 1994 data. OPM officials recognize that the results of rigorous measurements of CPDF data accuracy, i.e., measurements designed to test the accuracy of individual agencies' data, could help users of CPDF data determine whether the data are sufficiently accurate for their purposes. However, OPM officials believe the cost of such measurements would be prohibitive and would not guarantee that users would consider the results when working with CPDF data or that agencies would use the results to improve the accuracy of their CPDF data submissions.

OPM's Office of the Actuaries Reported That CPDF Data Are Sufficiently Accurate for Estimating the Government's Liability for Future Retirement Payments

OPM's Office of the Actuaries uses CPDF data to help estimate the federal government's liability for future payments of federal retirement programs.
According to the actuary responsible for calculating the federal government's liability for future retirement payments to federal employees and their survivors, the office uses CPDF data on adjusted basic pay, sex, birth date, retirement plan, and service computation date in calculating the estimate of this liability. We discussed with the actuary the error rates we found for these data elements, both as measured in our employee questionnaire and in our comparison with official personnel folders and records. Except for adjusted basic pay, which was about 94 percent accurate in our nongeneralizable comparison of official personnel folders and CPDF data, we found all of these data to be 99 percent or more accurate. We shared these results with the actuary, and he told us that the CPDF data elements were sufficiently accurate for making the liability estimate. The actuary also told us that erroneous national economic assumptions were much more likely to affect his estimate than inaccuracies in the CPDF data. For instance, the actuary said that slight variances in estimated future interest rates or rates of return on investment could have a significant impact on the government's estimated liability for future payments. Furthermore, the actuary said that the CPDF is not the only source for certain information the office uses for its estimate. For example, the actuary told us that he makes independent calculations of salaries by using data on contributions to pension plans. In addition, OPM received an unqualified opinion on its retirement program financial statements for fiscal year 1997.

Conclusion

Most of the 28 data elements we reviewed were 99 percent or more accurate in the aggregate. A minority of the data elements we reviewed, especially education level and rating of record, were much less accurate.
OPM has found broadly similar results in its accuracy measurements but has not informed non-OPM users of CPDF data of these results, even though the lower level of accuracy for some data elements could affect the validity of analyses relying on those elements. Further, the accuracy levels that both OPM and we have found are generalizable only governmentwide. Anecdotal evidence from this review and our prior work illustrates that the accuracy of CPDF data elements can vary significantly among agencies. Nevertheless, OPM and non-OPM analysts rely on CPDF data to monitor and report on individual agencies' demographics, compliance with government policies, and other characteristics. OPM officials said that gauging the accuracy of individual data elements by agency would require a significantly larger measurement sample and thus increase measurement costs. Informing users of CPDF data of the governmentwide accuracy results, with a specific caution that individual agencies' results may vary significantly, could nevertheless be useful. This would allow analysts and those using CPDF products to make better informed judgments before using agency-specific CPDF data and perhaps to seek information to corroborate the CPDF data.

Recommendation to the Director of OPM

We recommend that the Director of OPM make the results of OPM's measurements of historical accuracy available to all users. To make this information available, OPM could post the results of its accuracy measurements on its Internet web site, including cautionary language indicating that the accuracy of CPDF data elements may vary by agency. OPM could also inform users of the availability of this information whenever it distributes CPDF data or reports.

Agency Comments and Our Evaluation

In a letter dated September 11, 1998 (see app. VII), the OPM Director said our findings are consistent with OPM's internal quality measures.
In particular, the OPM Director cited our draft report's findings that CPDF data, including the data used by OPM's Office of the Actuaries to estimate the government's liability for future retirement payments, were accurate. The OPM Director also said that although our findings were positive, she believed many of the report's headings tended to obscure rather than clarify the findings. In addition, she said that the Results in Brief discussion of CPDF accuracy standards and error rates in education level data is so limited that it presents only our view of CPDF limitations. According to the OPM Director, for "complete and accurate information that provides a more balanced rationale for CPDF specifications, one must look beyond the Results in Brief" to the body of the report. We believe the view presented in the Results in Brief is balanced. For example, in the first paragraph, we report that about two-thirds of the selected CPDF data elements we reviewed were at least 99 percent accurate. We also disagree that the report's headings tend to obscure rather than clarify the findings. The report's title, chapter titles, and main captions note the positive findings of our review. We believe, as the OPM Director acknowledged, that our report clearly states that most of the CPDF data we reviewed were accurate. The OPM Director did not specifically refer to our recommendation that she make the results of OPM's historical measurements of the CPDF's accuracy available to all users. However, she said that OPM will make available appropriate explanatory material to all CPDF users. As stated in this chapter, we believe that this explanatory material should include the accuracy measurements.
Users Generally Reported CPDF Products Met Their Needs, but Further Awareness of Cautions on CPDF Data Could Affect Use of the Data

We used a questionnaire to determine the extent to which selected CPDF users believed (1) the CPDF data they used met their needs, including whether the products were current, accurate, and complete; and (2) they received sufficient cautions about the limitations of CPDF data to use or present the CPDF data correctly. OPM officials identified 247 CPDF users, representing all of the requesters of CPDF data products who corresponded directly with OPM in 1996. We surveyed those 247, and 40 said they did not use CPDF products. Of the remaining 207, 161 responded to our questionnaire as users of the CPDF. The results of our CPDF customer questionnaire showed that the majority of CPDF users responding believed that CPDF products met their needs, including being sufficiently current, accurate, and complete. However, 29 of the 71 CPDF users said knowing about cautions they were not made aware of would have affected the way they used or presented CPDF data. OPM officials said, and respondents' answers to our questionnaire indicated, that the extent to which OPM provided users cautions about the general limitations of the CPDF varied. OPM officials said they were considering creating a CPDF web site that would allow OPM to make CPDF data more widely available and to "bundle" or link specific cautions on limitations with specific sets of data.
Users Generally Reported That CPDF Data Met Their Needs, Including Being Current, Accurate, and Complete

OPM distributes a variety of CPDF-based products, including

- data extracts that consist of selected data elements, e.g., "service computation date" or "duty station," which are provided on tape or diskette to users;
- recurring reports, such as the Demographic Profile of the Federal Workforce;
- ad hoc reports containing specific information from the CPDF, such as results of matching CPDF data with other data; and
- the User Simple and Efficient Retrieval (USER) system, an information retrieval system that provides electronic access to the CPDF's status and dynamics files.

The majority of the respondents to our questionnaire reported that the data in the CPDF products they used met their needs, including being current, accurate, and complete. For example, when asked about the extent to which the CPDF products they used over the past 2 years (i.e., data extracts, recurring reports, ad hoc reports, and the USER system) met their needs, 67 to 81 percent of respondents, depending on the type of product, rated CPDF products as meeting their needs to a great or very great extent. When asked about the extent to which these products were current enough to meet their needs, 70 to 73 percent of the users who answered this question rated the products as current enough to a great or very great extent. When asked about the extent to which they believed the CPDF products that they used over the past 2 years were accurate, the majority (65 to 87 percent) of users responding to this question rated the products we asked about as accurate to a great or very great extent.
Similarly, the majority (71 to 89 percent) of the users responding to our question about the completeness of CPDF data said they believed the products listed were complete to a great or very great extent. Of those users of CPDF products who reported that specific products met their needs to a great or very great extent, a large majority also reported that those products were accurate and complete. In addition to the data products that we asked about, 15 respondents to our questionnaire reported they used the Installation Level Data Retrieval System (ILDRS), a database system that uses CPDF data to provide a "snapshot" of a federal agency's personnel. When asked about the extent to which ILDRS was current enough to meet their needs over the past 2 years, only 4 of these 15 respondents rated it as current enough to a great or very great extent, unlike the responses we got from most users about the currency of other CPDF products. Eight of the 15 respondents rated ILDRS as accurate to a great or very great extent, and 9 of the 15 rated ILDRS as complete to a great or very great extent.

Most CPDF Users Said Cautions OPM Provides on Data Limitations Were Sufficient, but Some Said Further Awareness of Cautions Could Affect Use of Data

OWI does not provide users of CPDF products with a uniform set of cautions about the limitations of the data elements contained in the CPDF. The extent of the cautions OPM provides about the limitations of CPDF data to users of CPDF-based products varies because, according to OPM officials, the cautions are tailored to the CPDF product being requested. Users responding to our questionnaire demonstrated a wide range of awareness of caution statements about the CPDF data's limitations.
The majority of users responding to our questionnaire reported that they were aware of the limitations of the data they received and that the caution statements on limitations provided by OPM were sufficient for them to correctly use the data.

OPM Does Not Disclose to Users All the Cautions About the CPDF's Limitations

Although OPM's CPDF-based governmentwide and ad hoc reports contained some cautions on limitations, none of the reports we reviewed disclosed all of the cautions on the CPDF. We observed that CPDF products, such as ad hoc reports, that OPM prepares in response to requests for specific information do not fully disclose all 28 cautions about the limitations of the CPDF that OPM officials identified for us. For example, one state requested CPDF data for a data match intended to identify, by selected data elements such as pay grade, federal employees who had graduated from the state's education and training programs. In its response, OPM cautioned the requester that the CPDF contains records of personnel only in executive branch agencies, but it did not warn the requester that OPM's quality assurance procedures cannot detect agency miscoding of certain data elements, such as pay grade (e.g., submission of grade 11 when the grade is actually 12). In contrast, the recurring reports that are widely distributed and that contain governmentwide statistics, such as OPM's Biennial Report of Employment by Geographic Area, contained quality measurements of the data in the reports and error rates (i.e., estimated percentage of data elements that failed edit checks) for each of the data elements reported. OWI analysts routinely monitor agencies' data submissions and report to each submitting agency on the quality of its own submissions, that is, the degree to which its data fall within OWI's acceptable range of data values, or edit standards. This information is also made available within OPM and to certain non-OPM users.
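The edit standards described above are, in essence, range and validity checks applied to each submitted record, with the failure percentage reported back to the submitting agency. A minimal sketch of how such checks might work (the fields, acceptable values, and records below are invented for illustration and are not OPM's actual edit standards):

```python
# Hypothetical edit check: each field has a set or range of acceptable
# values; a record "fails edits" if any checked field falls outside it.

EDIT_STANDARDS = {
    "pay_grade": lambda v: v in range(1, 16),   # e.g., GS grades 1-15
    "sex": lambda v: v in {"M", "F"},
    "education": lambda v: v in range(1, 23),   # hypothetical coded levels
}

def failed_edits(record):
    """Return the names of fields whose values fall outside the edits."""
    return [f for f, ok in EDIT_STANDARDS.items() if not ok(record.get(f))]

records = [
    {"pay_grade": 11, "sex": "F", "education": 13},
    {"pay_grade": 18, "sex": "M", "education": 4},   # grade out of range
    {"pay_grade": 7,  "sex": "X", "education": 9},   # invalid sex code
]
reject_rate = 100 * sum(1 for r in records if failed_edits(r)) / len(records)
print(round(reject_rate, 1))  # -> 66.7 (2 of 3 toy records fail)
```

As the report notes, checks of this kind cannot catch a value that is wrong but still within the acceptable range, such as a grade 11 submitted for an employee who is actually a grade 12.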
For example, information on the percentage of data elements not passing CPDF edits and the quality of the CPDF status and dynamics files is currently available through OPM's USER system. According to OPM, it trained and gave access to this system to us, the Equal Employment Opportunity Commission, the Merit Systems Protection Board, the Department of Agriculture, the Department of Labor, the Environmental Protection Agency, the National Guard, the National Security Agency, the Congressional Budget Office, and the Office of Management and Budget. OPM officials reported that they do not know to what extent these agencies use the quality reports available through the USER system. Although OPM does not make information about the quality of individual agencies' CPDF submissions directly available to nonfederal and most federal users, it bases some caution statements to users about the limitations of CPDF data on this information. For example, in its Demographic Profile of the Federal Workforce as of September 30, 1996, OPM informed users that about 0.4 percent of the total CPDF records available for the report were rejected because they failed edits on key data elements. OPM also cautions users, in correspondence responding to requests for information and in its recurring CPDF-based reports such as OPM's Biennial Report of Employment by Geographic Area, about certain general limitations of the data, such as the exclusion of certain agencies' employees from the CPDF's population coverage. However, OPM does not caution users about other limitations, such as that OPM may change submitted values that are missing or known to be in error.

Most CPDF Users Said CPDF Products Met Their Needs, but Some Said Further Awareness of Cautions on CPDF Data Could Affect Use of Data

In our questionnaire to CPDF customers, we asked them to indicate how many of the 28 cautions about the CPDF OPM had made them aware of.
The CPDF users responding to our questionnaire showed a wide range of awareness of the cautions. For example, more than 95 percent of those answering our question about CPDF cautions said they were cautioned by OPM that certain agencies are exempt from reporting to the CPDF. However, only about 34 percent of those answering the question said they were made aware that OPM may change submitted values that are missing or known to be in error by matching records to older files or making values consistent with statistical assumptions. According to OPM officials, these changes rarely happen; and, when they do, they affect only one or two agencies once every four quarterly status files. Overall, from 72 to 86 percent of the users reported that the caution statements on limitations provided by OPM were sufficient for them to correctly use or present the data contained in the various CPDF products they used to a great or very great extent. However, 29 of the 71 CPDF users said knowing about cautions they were not made aware of would have affected the way they used or presented CPDF data. Of the 28 caution statements about limitations of the CPDF listed on our questionnaire, the 5 that respondents were least aware of were the following:

1. a small number (0.2 percent) of employees have more than 1 record in a CPDF status file;
2. the FBI does not report duty station location for employees outside of the District of Columbia;
3. OPM may change submitted values that are missing or known to be in error by matching records to older files or making values consistent with statistical assumptions;
4. there is no CPDF standard format for submitting employee names; and
5. CPDF status files are generally considered to reflect employment at the end of the quarter, but they might actually reflect employment at the end of the pay period just prior to the end of the quarter.
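Several of these cautions describe conditions a data user could test for mechanically. For example, the first caution, that 0.2 percent of employees have more than one record in a status file, implies that counting records slightly overstates a headcount; a simple grouping by employee identifier detects the duplicates. A sketch, with hypothetical identifiers rather than actual CPDF keys:

```python
from collections import Counter

# Hypothetical status-file extract keyed by a notional employee ID.
status_file = ["A1", "B2", "C3", "C3", "D4"]  # C3 appears twice

counts = Counter(status_file)
n_records = len(status_file)        # 5 records
n_employees = len(counts)           # 4 distinct employees
dupes = [k for k, n in counts.items() if n > 1]

print(n_records - n_employees)  # -> 1 extra record inflating a headcount
print(dupes)                    # -> ['C3']
```

A check like this lets a user of a status file quantify the duplication in the particular extract received, rather than rely only on the governmentwide 0.2 percent figure.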
OWI officials reported that OPM provides information about the specific limitations of a data product to requesters but does not provide information about other limitations, such as the list of 28 caution statements about CPDF data, to all requesters. OPM officials said that making caution statements about CPDF data limitations more widely available might be useful to some users of the data. However, OWI officials believe this alone would not prevent the possible misinterpretation of a specific set of data by a third-party user, i.e., someone who does not receive CPDF data directly from OPM or its reports. Because OPM officials are not always aware of the intended use of data requested by users, they may not know which of the 28 caution statements would be most beneficial to those users. For example, if a user intended to derive the average education level of the employees of a particular agency but requested only status file data as of a particular date, OPM officials might not provide the user with the caution that some data, e.g., education level, are collected at the time of appointment but not routinely updated. The average education level derived for the agency would therefore not be current and would most likely be understated. OWI officials reported they have been considering creating a CPDF web site that would allow OPM to make CPDF data more widely available and to bundle specific caution statements on limitations with the sets of data.

Most CPDF Users Surveyed Rated the Overall Quality of CPDF Products as Excellent or Very Good

OPM sends customer feedback questionnaires to its CPDF users to determine if it is meeting their needs and to solicit suggestions for improvement.
We reviewed 149 OPM customer feedback questionnaires for the period covering March 27, 1990, through February 28, 1994, and determined that 140 of the 149 (94 percent) CPDF users responding rated the overall quality of the CPDF products they received as very good or excellent. The majority (from about 72 to 84 percent) of the CPDF users responding to our questionnaire also rated the overall quality of the specific CPDF products they used as very good or excellent.

Conclusions

Most of the users of CPDF data we surveyed reported that they believed the data in the CPDF-based products they used met their needs, including being current, accurate, and complete. The majority of users we sent questionnaires to reported they had received sufficient cautions about the CPDF's limitations to use or present the data correctly. However, although OPM highlighted the cautions about CPDF data that are most likely to be applicable to the interests of a particular requester of those data, it did not make all 28 caution statements available to each of those requesters. Some users reported that knowing about cautions they were not made aware of would have affected the way they used or presented CPDF data. In addition, users who obtain CPDF data regularly without a specific request to OPM may not be cautioned about the limitations associated with using the data.

Recommendation to the Director of OPM

We recommend that the Director of OPM ensure that OPM make all 28 caution statements about limitations associated with CPDF data available to all users. In addition, it may be useful for OPM to continue its practice of highlighting the cautions on CPDF data limitations that are most likely to be applicable to the interests of a particular requester of CPDF data.
To make this information available to all users, OPM could (1) post on its Internet web site a complete listing of the 28 caution statements about limitations associated with CPDF data, (2) apprise all recipients of CPDF data of the availability of the caution statements, and (3) implement its proposal to bundle specific cautions on limitations with specific sets of data.

Agency Comments and Our Evaluation

In a letter dated September 11, 1998 (see app. VII), the OPM Director said our findings were consistent with OPM's internal quality measures. The OPM Director cited our draft report's findings that most of the users of CPDF data we surveyed rated the overall quality of the data as excellent or very good and believed they received explanatory material that enabled them to use the data correctly. The OPM Director also said that although our findings were positive, she believed the Results in Brief section was too skimpy and that many of the report's headings tended to obscure rather than clarify the findings. We believe the view presented in the Results in Brief is balanced. We also disagree that the report's headings tend to obscure rather than clarify the findings. The report's title, chapter titles, and main captions note the positive findings of our review. We believe, as the OPM Director acknowledged, that our report clearly states that most CPDF users' needs were met. The OPM Director did not specifically refer to our recommendation that she make all 28 caution statements about limitations associated with CPDF data available to all users. However, she said that OPM will make available appropriate explanatory material to all CPDF users. As stated in this chapter, we believe that this explanatory material should include all 28 caution statements about limitations associated with CPDF data. In addition, the OPM Director identified additional agencies that have access to OPM's USER system, which we added to the report where appropriate.
System Software Development Not Documented According to Applicable Federal Guidance, but Software Appears to Implement Edits as Intended

From 1976 to 1995, applicable federal guidance recommended that agencies use a structured approach for operating and maintaining automated information systems, such as the Central Personnel Data System. The guidance suggested that agencies document the life cycle of an automated information system from its initiation through installation and operation. Although the guidance was issued before OPM's major redesign of the System's software in 1986, OPM's OIT did not document changes that were made to the System or have independent testing done to ensure that changes to the software would perform as intended. OIT officials said that to their knowledge the System has not had problems processing data reliably, and the System's owner, OPM's OWI, concurred. Our review of 718 of the 763 computer instructions used by the CPDF showed that the System uses instructions that should implement CPDF edits as intended. OIT officials said that for OPM to accomplish its future information technology (IT) goals it will have to follow a structured approach for computer application development. Toward this end, OPM has adopted a software development goal that would require such an approach no later than fiscal year 2002.

OPM Did Not Document an Upgrade of the System's Software as Recommended in Federal Guidance

From 1976 to 1995, federal guidance issued by the National Bureau of Standards and other federal agencies said that sufficient planning and documentation are needed for cost-effective operation and maintenance of information systems. This guidance described the need for organizations to adopt a structured, or System Development Life Cycle (SDLC), approach.
An SDLC approach requires organizations to document the phases of the development life cycle for automated information systems and their software applications, including any changes that are made to the systems or their software. Although federal guidance recommending that agencies follow such best practices for automated information systems was issued before OPM's major redesign of the System's software in 1986, OIT did not document changes that were made to the System. OIT officials said that, to their knowledge, not having used the SDLC approach had no effect on the System because they believe the System was reliable without it.

Federal Guidance Recommended Using a Structured Approach to the System's Software Development

From 1976 to 1995, federal guidance existed to assist agencies as they developed computer software applications and made changes in their automated information systems from initiation through operation. For example, on February 15, 1976, the Department of Commerce's National Bureau of Standards issued Federal Information Processing Standards (FIPS) Publication 38, which provided basic guidance for the preparation of 10 document types that agencies were to use in the development of computer software. FIPS Publication 64, issued on August 1, 1979, provided guidance for determining the content and extent of documentation needed for the initiation phase of the software life cycle. In 1995, the Secretary of Commerce approved the withdrawal of nine such guidelines, including FIPS Publications 38 and 64; however, agencies that find these guidelines useful may continue to use them. The National Bureau of Standards' 1988 Guide to Auditing for Controls and Security: A System Development Life Cycle Approach was to be used as an audit program for auditing automated information systems under development.
It included many guidelines published from 1976 through 1984 that described the SDLC approach and its requirements, including documentation. This guide also referenced other federal sources that required documentation, including federal information resource management reports and OMB Circular A-130. The federal government does not follow a single SDLC approach, but an SDLC approach generally includes the following phases:

1. initiation (the recognition of a problem and the identification of a need);
2. definition (the specification of functional requirements and the start of detailed planning);
3. system design (specification of the problem solution);
4. programming and training (the start of testing, evaluation, certification, and installation of programs);
5. evaluation and acceptance (the integration and testing of the system or software); and
6. installation and operation (the implementation and operation of the system or software, the budgeting for it, and the controlling of all changes and the maintenance and modification of the system during its life).

SDLC documentation is important because it provides a basis for (1) systematically making decisions while moving through a system's life-cycle phases and establishing a baseline for future changes to the system and (2) auditing systems that are under development. According to federal guidance, software acceptance testing, like other testing of the automated information system, must be documented carefully, with traceability of test cases to the system requirements and the acceptance criteria. Without acceptance testing, changes to an automated information system cannot be verified as working as intended. Ensuring an information system's reliability is not the only reason for following an SDLC approach.
The National Bureau of Standards’ Guide to Auditing for Controls and Security: A System Development Life Cycle Approach states that if agencies use a structured approach to systems development, the probability of a well-defined life cycle, and of compliance with such a cycle, increases. According to the Guide, an unstructured approach leads to free-form system development that may result in serious omissions. Without a structured approach to software applications development, no assurance exists that adequate testing, verification, validation, and certification will be done; resources will be appropriately expended; the anticipated return on investment will be achieved; or user requirements will be met. In addition, without documentation, the history of system changes can be lost if staff changes occur, thus making future system modifications or problem corrections more time-consuming and costly. During the evaluation and acceptance phase, the computer instructions that have been written or modified undergo testing to verify that they will perform according to user specifications. Although federal guidance said that some changes to the SDLC may be appropriate “if the subject to be addressed is a major modification to a system rather than the development of a new one,” it also said that “the need to continually assess the user’s needs (validation) and to ensure the conceptual integrity of the design (verification) are not arguable.” Thus, evaluation and acceptance testing is a phase that no agency should leave out of an SDLC. As we have described in guidance for the Year 2000 computing challenge, acceptance testing should be done by an independent reviewer. An independent review helps to ensure that internal controls and security are adequate to produce consistently reliable results. 
OPM Did Not Document Upgrade of the System’s Software as Recommended in Federal Guidance According to OPM officials, since the System’s development in 1972, it has gone through only one major software upgrade, which was done in conjunction with the replacement of the System’s hardware. According to an OIT official, in 1985, OPM replaced its existing Honeywell computer with an IBM computer and converted CPDF application programs to run on the new hardware. He also reported that at about the same time, OPM decided to upgrade CPDF capabilities by procuring several commercial software packages as well as designing customized software. According to OIT managers, the software upgrade was done in 1986 to improve the timeliness and accuracy of the CPDF because it was not working efficiently. The OIT managers who were responsible for the System at that time told us that OPM did not document the phases of this major system software modification as recommended in applicable federal guidance under an SDLC approach. Other OIT officials also told us that OPM did not follow an SDLC approach for these 1986 CPDF changes or have documentation that would show that acceptance testing was done. Moreover, the testing that was performed was not done by an independent reviewer. OPM officials said that because of time constraints, OIT staff who designed the software modifications also did the acceptance testing and did not document it. Although OIT did not follow an SDLC approach and did not have documentation to show that the 1986 software upgrade passed acceptance tests or that subsequent modifications to the System’s software applications worked as intended, its managers said that they believe the System is reliable. They base this belief on the fact that OPM’s OWI, the System’s owner, has not complained that the System is not meeting its needs. 
The System Appears to Implement CPDF Data Edits Reliably Because OIT did not document software upgrades and modifications to the System, we could not review this type of documentation as a basis for independently evaluating the extent to which the System is operating as intended. As an alternative, we reviewed the computer instructions written to implement a subset of the 763 total edits (700 call-relational and 63 validity) that the System used at the time we did our work: all 700 call-relational edits (470 dynamic file and 230 status file) and 18 of the 63 validity edits, which together check the key status and dynamics data fields. This approach allowed us to indirectly determine whether the System would reliably implement CPDF data edits, the computer instructions that are to check the validity of individual data elements. Putting test data through the System or otherwise testing the reliability of the System’s hardware and software under operating conditions would have allowed us to test the System’s reliability directly, but we did not attempt to do so. OPM officials were concerned that putting test data through the System could disrupt its production schedule and introduce “bad” data that could have unforeseeable consequences for the System’s operations. Because of this concern and the lack of any indication that routine System operations to process agencies’ data submissions had caused data errors, we decided to limit our test of the System’s reliability to a review of the computer instructions the System uses to implement edits. Through our review, we determined that the computer instructions the System uses would implement as intended the selected CPDF call-relational edits and the validity edits used to identify data inconsistencies in the data elements submitted by agencies. We found only one true error. 
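Although the report does not reproduce the CPDF computer instructions themselves, the two kinds of edits it describes can be illustrated with a hedged sketch. The field names, code values, and consistency rule below are hypothetical stand-ins, not OPM’s actual edit logic.

```python
# Hypothetical sketch of the two edit types the report describes.
# A validity edit checks a single data element against allowed values;
# a call-relational edit checks consistency between related elements.

VALID_WORK_SCHEDULES = {"F", "P", "I"}  # hypothetical code values

def validity_edit_work_schedule(record):
    """Pass only if the work-schedule code is an allowed value."""
    return record.get("work_schedule") in VALID_WORK_SCHEDULES

def call_relational_edit_pay(record):
    """Pass only if prior basic pay is consistent with the nature-of-action
    code (hypothetical rule: a pay adjustment must report a prior basic
    pay greater than zero)."""
    if record.get("nature_of_action") == "PAY_ADJUSTMENT":
        return record.get("prior_basic_pay", 0) > 0
    return True

record = {"work_schedule": "F", "nature_of_action": "PAY_ADJUSTMENT", "prior_basic_pay": 0}
errors = []
if not validity_edit_work_schedule(record):
    errors.append("invalid work schedule")
if not call_relational_edit_pay(record):
    errors.append("prior basic pay inconsistent with action")
print(errors)  # the sample record fails only the call-relational edit
```

Reviewing instructions of this kind, rather than running test data, is the indirect approach the report describes: the logic can be read for correctness without disturbing production processing.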
The computer instructions for a dynamics file call-relational edit, 1 of 20 subprograms used to edit the prior basic pay data element, were written in 1995 but were not applied to agencies’ dynamics file submissions. CPDF programmers attributed this error to a mistake and oversight on their part and not to a lack of documentation. OPM Has Implicitly Committed to Adopt an SDLC Approach In January 1997, OPM initiated a project to develop and implement an Information Technology Architecture Vision, which describes the hardware, software, network, and systems management components of the technical infrastructure required to support OPM business applications and data. This project was initiated in response to various federal government initiatives intended to help ensure that government agencies achieve their missions by changing management practices concerning IT investment and operational decisions. The first phase of this project was the development of an OPM IT architecture vision, which is intended to provide the framework within which OPM can make IT decisions. OPM published its IT architecture vision in December 1997; it has as one of its components a description of the technology infrastructure that will be needed to support OPM’s data and application needs. Under this technology infrastructure component, OPM is to adopt standards for application development and plans to provide training to staff with the goal of reaching a specified software development level of process maturity as described in the Capability Maturity Model (CMM). The CMM was developed by the Software Engineering Institute, a federally funded research and development center operated by Carnegie Mellon University. It has as a major purpose guiding process improvement efforts in a software organization. The CMM defines five levels—(1) initial, (2) repeatable, (3) defined, (4) managed, and (5) optimizing—to represent evolutionary plateaus on the road to a high level of software process capability. 
Each maturity level after the first defines several key process areas—groups of related software practices—all of which must be satisfied for an organization to attain that level. An OIT official reported that OPM’s IT is at level 1 and has a goal under its IT architecture vision of reaching level 2 or higher by fiscal year 2002. The CMM recommends that an organization use specific software development practices, tools, and methodologies. It does not stipulate how the organization must perform software development or management activities. For level 2 and higher, the CMM requires an agency to define and document an SDLC approach that is to be used in the development, modification, and management of automated information systems and their software applications. Therefore, by adopting level 2 as a goal, OPM also is committing to follow an SDLC approach by fiscal year 2002. Because an SDLC approach applies to the development, modification, and management of all significant systems, once OPM has adopted an SDLC approach, it would need to make changes to the CPDF in conformance with that approach. Successfully adopting an SDLC approach would be a significant change for OPM because it said in its IT architecture vision that OPM’s application development style has been situational, with few common approaches to system development. The lack of an SDLC was a repeat material weakness reported in independent audits of the financial statements for fiscal years 1996 and 1997 of the retirement program that was administered by OPM’s Retirement and Insurance Service. OIT officials told us that they recognize the importance of having an SDLC approach for accomplishing the applications development goals in OPM’s IT architecture vision and in its strategic plan for fiscal years 1997 to 2002. 
In the strategic plan, OPM includes a strategy for ensuring that OPM’s mission-critical computer systems, of which the CPDF is one, are Year 2000 compliant in time to ensure that services to customers are not interrupted. This strategy includes detailed tracking of progress on renovation and testing of each IT system and validating and testing that software changes are working as intended. These steps generally conform to SDLC requirements. Other than efforts for making its information systems Year 2000 compliant, it is not clear whether OPM would follow an SDLC approach when modifying any other systems, including the CPDF, before fiscal year 2002. Neither the IT architecture vision nor the strategic plan specifically identifies when OPM plans to adopt an agencywide SDLC approach. Conclusions OPM has not followed an SDLC approach to software development that includes documenting the phases of such development as recommended in applicable federal guidance. OPM also has not documented the testing of changes to software to verify that those changes worked as intended or had such changes tested by an independent reviewer. Nevertheless, although we did not directly test the System’s hardware and software under operating conditions, our review of the computer instructions the System uses to implement CPDF call-relational and validity edits shows that the System should implement these edits reliably. OPM has adopted a goal of achieving at least CMM level 2 by 2002, and doing so would require OPM to define and document an agencywide SDLC approach. OPM’s current significant modifications to make the CPDF and other mission-critical systems Year 2000 compliant follow a structured approach like an SDLC, but it is unclear when OPM might adopt an SDLC approach for other future system changes. 
Documentation of system changes in part helps agencies make any future system modifications more quickly and cost effectively, and independent review of system or software changes helps ensure that they will work as intended. Therefore, following these procedures for any changes to the CPDF before OPM adopts an agencywide SDLC could be beneficial. Recommendation to the Director of OPM We recommend that the Director of OPM document any changes to the CPDF before OPM adopts an agencywide SDLC approach as specified in CMM guidelines and that such changes be independently verified to ensure that they will work as intended. Agency Comments and Our Evaluation In a letter dated September 11, 1998 (see app. VII), the OPM Director said our findings are consistent with OPM’s internal quality measures. The OPM Director cited our draft report’s findings that the CPDF edit programs should function well. The OPM Director also said that although our findings were positive, she believed many of the report’s headings tended to obscure rather than clarify the findings. According to the OPM Director, for “complete and accurate information that provides a more balanced rationale for CPDF specifications, one must look beyond the Results in Brief” to the body of the report. We believe the view presented in the Results in Brief is balanced. We also disagree that the report’s headings tend to obscure rather than clarify the findings. The report’s title, chapter titles, and main captions note the positive findings of our review. We believe, as the OPM Director acknowledged, that our report clearly states that the System’s edit programs should operate as intended. The OPM Director agrees with our recommendation that OPM document all future computer system and software changes and perform independent verification that the changes function as intended. 
She said that OPM is committed to adopting a formal SDLC methodology and is currently in the process of implementing interim measures to ensure that the System is fully documented and continues to function reliably. As an enclosure to her comments, the Director provided OPM’s plans for implementing an SDLC methodology.
Pursuant to a congressional request, GAO reviewed the Office of Personnel Management's (OPM) database of federal civilian employees, the Central Personnel Data File (CPDF), focusing on: (1) the extent to which selected CPDF data elements are accurate, including the data elements used by OPM's Office of the Actuaries for estimating the government's liability for future payments of federal retirement programs; (2) whether selected users of CPDF data believed that CPDF products met their needs, including whether the products were current, accurate, and complete and whether the cautions OPM provided to them on the limitations associated with using the data were sufficient for them to present the CPDF data correctly; and (3) whether OPM has documented changes to the Central Personnel Data System and verified the System's acceptance of those changes, as recommended in applicable federal guidance, and whether the System would implement CPDF edits as intended. GAO noted that: (1) OPM does not have an official standard for the desired accuracy of CPDF data elements; (2) on a periodic basis, however, OPM measures CPDF data accuracy by comparing certain data found in a sample of former federal employees' official personnel folders to data in the CPDF for the same period; (3) OPM generally makes the results of its measurements of CPDF data accuracy available to users of CPDF data within OPM but not to non-OPM users; (4) although the accuracy of the CPDF data GAO reviewed varied by data element, about two-thirds of the selected CPDF data elements GAO reviewed were 99-percent or more accurate; (5) GAO surveyed all the requesters of CPDF products that OPM identified as obtaining data directly from OPM for fiscal year (FY) 1996; (6) most of these CPDF users reported that CPDF products met their needs, including the data being current, accurate, and complete; (7) the majority of surveyed users reported that they believed that the caution statements OPM provided were sufficient for 
them to use CPDF data correctly; (8) however, OPM did not provide these users of CPDF data with all 28 cautions that explain how CPDF limitations could affect how they present or use CPDF data; (9) although applicable federal guidance recommended that agencies document the life cycle of an automated information system from its initiation through installation and operation, OPM did not document changes that it made to the System in 1986 when it did a major redesign of the System's software; (10) OPM also did not have documentation to show that acceptance testing of those changes was done and, according to OPM, the testing was not done by an independent reviewer; (11) however, OPM officials said that to their knowledge the System has not had problems processing data reliably; (12) GAO's review of the computer instructions for most CPDF edits used by the System showed that the System uses instructions that should implement the CPDF edits reviewed as intended; (13) OPM officials acknowledged that for OPM to accomplish its future information technology goals it will have to follow an approach that includes documenting the development, modification, and management of its automated information systems and their software applications; and (14) OPM has committed to adopting this approach by no later than FY 2002.
Background Under the Clean Air Act, EPA has set air quality standards for six principal pollutants—the so-called “criteria pollutants”—to protect public health. These are carbon monoxide, lead, nitrogen dioxide, particulate matter, and sulfur dioxide, as well as ground-level ozone. The latter is not directly emitted by stationary sources but is formed by the airborne reaction of heat and sunlight with nitrogen oxides and volatile organic compounds, which are emitted by the sources. For the criteria pollutants, EPA sets limits—called “national ambient air quality standards”—on the acceptable levels in the ambient air anywhere in the United States. These limits are intended to ensure that all Americans have the same basic health and environmental protections. In addition to the six principal pollutants, EPA regulates 188 hazardous pollutants known as “air toxics.” People exposed to toxic air pollution— which can be highly localized near industrial sources—have an increased chance of getting cancer and experiencing other serious health effects. Under the Clean Air Act, EPA specifies a limit on emissions of air toxics that is based on the achievable control technology. EPA is also to evaluate the residual risk to human health after the adoption of the technology standards and, if necessary, establish more stringent health-based standards. Large Stationary Sources Emit Significant Amounts of Regulated Pollutants Industrial facilities emit over 100 million tons of pollutants into the air of the United States each year. Many of these sources are large stationary sources, such as chemical manufacturers and electric utilities. For the purpose of the Clean Air Act, these facilities fall into two categories: major sources and minor sources. 
Generally, under the Clean Air Act, major sources are facilities that annually emit or have the potential to annually emit (1) 10 or more tons of any one toxic air pollutant, (2) 25 or more tons of any combination of toxic air pollutants, or (3) 100 or more tons of any of the six principal pollutants. Minor sources are those facilities that emit below these thresholds. Some minor sources are referred to as “synthetic minors.” While not defined in the act, synthetic minors are facilities that, according to EPA, have the potential to emit pollutants at the same levels as major sources but choose to limit their operations, thus reducing their emissions to levels below those of major sources. According to an EPA air quality official, as of January 2001, about 19,880 major sources had already received or expected to receive title V permits. According to EPA officials, the agency does not maintain information on the total number of minor sources. As shown in figure 1, large stationary sources accounted for varying portions of the nation’s emissions of certain regulated air pollutants in 1998. They accounted for 86 percent of the sulfur dioxide emissions and 38 percent of nitrogen oxide emissions in 1998. On the other hand, large stationary sources emitted only 4 percent of particulate matter, 6 percent of carbon monoxide, and 12 percent of volatile organic compound emissions in 1998. In addition, major sources accounted for nearly 24 percent of the emissions of toxic air pollutants in 1996 (the most recent year for which EPA has data). In monitoring air quality, regulators look at the levels of pollutant emissions—the amounts being emitted into the air—as well as measured concentrations—the levels detected in ambient air. According to EPA’s data, national emissions of carbon monoxide, lead, sulfur dioxide, and volatile organic compounds decreased from 1990 through 1998, while emissions of nitrogen oxides and particulate matter increased. 
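The major-source thresholds described above amount to a simple classification rule. The sketch below encodes them directly; the inputs are annual tonnages (emitted or potential to emit), and the function is an illustration of the thresholds, not a legal determination.

```python
def classify_source(toxic_emissions_tons, criteria_emissions_tons):
    """Classify a facility as a major or minor source using the Clean Air
    Act thresholds described above. Inputs are dicts mapping pollutant
    names to annual tons emitted (or potential to emit)."""
    if any(t >= 10 for t in toxic_emissions_tons.values()):
        return "major"  # 10 or more tons of any one toxic air pollutant
    if sum(toxic_emissions_tons.values()) >= 25:
        return "major"  # 25 or more tons of any combination of toxics
    if any(t >= 100 for t in criteria_emissions_tons.values()):
        return "major"  # 100 or more tons of any principal pollutant
    return "minor"

# A facility emitting 8 tons each of two toxics (16 tons combined) and
# 60 tons of sulfur dioxide stays below every threshold.
print(classify_source({"benzene": 8, "toluene": 8}, {"SO2": 60}))  # minor
```

A “synthetic minor” in this scheme would be a facility whose potential-to-emit figures classify as major but whose permitted operating limits keep its actual tonnages on the minor side of each threshold.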
According to data from EPA’s national network of air quality monitors, the concentrations of all six pollutants, on an aggregate national basis, decreased from 1990 through 1999. These improvements ranged from a 4-percent decrease in ground-level ozone to a 60-percent decrease in lead concentrations. Despite these improvements, in 1999, approximately 23 percent of Americans (62 million) lived in areas that did not meet federal ambient air quality standards for at least one of the six principal pollutants. EPA and States Share Responsibility for Administering Clean Air Program Subject to EPA’s oversight, state agencies are responsible for administering air quality programs. The Clean Air Act Amendments of 1990 provide for these agencies to issue title V permits to major stationary sources within their jurisdictions. According to an EPA official who manages the title V permit program, all 113 state and local agencies have federally approved title V permit programs. Title V permits contain emissions-related, record-keeping, and monitoring requirements. Emissions-related requirements can include limitations on emissions per year or per hour, or on total production levels. In addition, some facilities have no limits on the amount of total pollution they emit but, instead, have efficiency standards that require them to remove a certain proportion of the pollution they generate by using specific pollution control equipment. For example, a facility might be required to operate control equipment that removes 90 percent of the pollution generated by a particular process or production line. Record-keeping and monitoring requirements specify activities that facilities must perform to demonstrate compliance with their title V or state permit. For example, a permit may require a facility to maintain information on its operating conditions, such as the amount of raw materials used or outputs produced. 
According to EPA officials responsible for overseeing states’ permit programs, these agencies finance their title V permit programs (including permits for and inspections of major sources) with fees paid by regulated facilities. An EPA official responsible for overseeing states’ title V permit programs stated that, as of December 2000, the national average fee paid by major sources was $28 per ton of pollution emitted. According to EPA officials responsible for developing emissions inventories and air quality policies, in addition to serving as the basis for fees, the emissions reports are used in developing emissions inventories. These officials also explained that the inventories inform regulatory decision-making at the local, state, and federal levels. For example, regulators use them to develop control strategies and to establish permit requirements. Routine Inspections Generally Found Compliance, but Intensive Investigations Found Widespread Noncompliance Each year, EPA and states perform thousands of inspections at large facilities to monitor their compliance with requirements contained in title V and state permits. However, because of the limitations of routine inspections and suspected noncompliance, EPA initiated intensive investigations within four industries. These intensive investigations found indications of significant noncompliance with provisions of the Clean Air Act that routine inspections do not generally address. While inspections and intensive investigations are used to monitor compliance, title V permits hold company officials at major sources accountable for compliance by requiring those officials to submit at least once a year a statement certifying as to their compliance status with all applicable clean air requirements. Major sources must also report every 6 months on all deviations from the permit’s requirements. A company official must attest to the truth, accuracy, and completeness of the statements. 
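Because title V fees are assessed on total tonnage, the fee computation itself is simple multiplication. The sketch below uses the $28-per-ton national average the EPA official cited; the facility and its reported emissions are hypothetical.

```python
AVERAGE_FEE_PER_TON = 28  # national average per ton, as of December 2000

def annual_title_v_fee(reported_emissions_tons, fee_per_ton=AVERAGE_FEE_PER_TON):
    """Compute a facility's annual permit fee from its reported emissions
    (a dict mapping pollutant names to tons). Illustrative only: actual
    fee schedules vary by state program."""
    total_tons = sum(reported_emissions_tons.values())
    return total_tons * fee_per_ton

# Hypothetical facility reporting 150 tons of SO2 and 90 tons of NOx:
# 240 tons at $28/ton.
print(annual_title_v_fee({"SO2": 150, "NOx": 90}))  # 6720
```

The dependence of fees on self-reported tonnage is also why the accuracy of the emissions reports, discussed below, matters for program financing as well as for inventories.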
Routine Inspections May Not Detect Emissions Violations Routine inspections, which focus on compliance with the permits, address emissions-related, record-keeping, and monitoring requirements. According to EPA’s guidance, a routine inspection must include the following three components: Observing visible emissions. This inspection technique can indicate whether the measures to control certain pollutants emitted from a facility are being properly operated and maintained. Observing and recording data on control devices and operating conditions. This enables inspectors to compare observed operating conditions (such as temperature or production levels) with those specified in a facility’s permit. Reviewing records and log books on the facility’s operations. These records provide information on a facility’s operating conditions during times when inspectors are not present at the facility. Nationally, EPA and state officials conducted routine and other inspections at 17,812 large sources in 1999, according to EPA’s data. Of these, 88 percent (15,618 facilities) were found to be in compliance with their permit, while 12 percent (2,194 facilities) did not fully comply. EPA’s data for 1998 show similar national results: 89 percent (15,805 facilities) were in compliance, and 11 percent (1,997 facilities) were not. EPA does not maintain data on the extent to which facilities found in noncompliance directly violated emissions-related requirements rather than record-keeping or monitoring requirements. An EPA Air Enforcement Division official told us that administrative and record-keeping violations sometimes conceal emissions-related violations. For example, the official said that if a facility has a limit on its emissions per unit of production but fails to maintain production records, an inspector might not be able to determine if excess emissions had occurred. In such a case, the inspector might cite the facility for noncompliance with a record-keeping provision. 
The six inspections we observed (at a chemical manufacturer, a diesel engine manufacturer, a fiberboard manufacturer, a municipal waste incinerator, an absorbent material manufacturer, and a steel mini mill) illustrate the limitations of routine inspections. The permits for the six facilities contained various limits on emissions and production levels. All six facilities had production-based and hourly emissions limits for at least some of their production lines. The permits for five of the six facilities imposed aggregate annual limits for some or all of the pollutants that the facilities emit. For example, the permit for the diesel engine manufacturer specified an annual limit on benzene (a toxic air pollutant) emissions from on-site generators and diesel engine test booths, while the permit for the absorbent material manufacturer had limits on hourly emissions, total annual emissions, and annual production levels. In contrast, the fiberboard manufacturer had no annual emissions limits. The inspectors whom we accompanied checked for visible emissions, reviewed facility records, and checked equipment at all six facilities. At the waste incinerator, the inspector observed a source test conducted in accordance with a state-approved sampling plan. The inspectors determined that four facilities were in compliance and the other two facilities were out of compliance. At the steel mini mill, the inspector noted that the facility lacked records on the readings of visible pollutants; lacked records on the functioning of various control devices for a 22-; lacked information on the sulfur content in certain raw materials used; and did not maintain a certain emissions capture system, which caused visible emissions to leak from gaps and holes in the building. The inspector also noted an apparent disparity in the amount of nitrogen oxides emitted annually from one furnace. For permit purposes in 1998, the facility estimated the amount was 31 tons. 
But on the basis of a 1999 source test, the amount was 94 tons. At the fiberboard manufacturing facility, the inspector noted a number of problems indicating the improper operation and maintenance of control equipment, including one improperly sealed vent and excess emissions from 7 of the facility’s 20 emissions control filters. In addition to observing six inspections, we reviewed state regulatory records for 23 large facilities to identify other methods used by regulators to identify excess emissions. For example, we noted that, at a metal-can-manufacturing facility, a state inspector was concerned about the amount of volatile organic compounds in the coating used to seal the cans. He arranged for samples of the coating to be analyzed at a state laboratory and allowed the facility to contract for its own analysis. Both the state lab and the company’s lab found that the facility was exceeding its permit limit for the volatile organic compound content of the coating by about 10 percent. Intensive Investigations Found Widespread Noncompliance In recent years, EPA has performed intensive investigations in four industries and identified widespread noncompliance. In three industries—electric utilities, pulp and paper mills, and wood products—these investigations focused primarily on compliance with New Source Review requirements. Under the Clean Air Act, facilities must obtain a New Source Review permit for new construction or major modifications that increase a facility’s emissions of certain regulated air pollutants. According to an EPA air enforcement official, routine inspections do not necessarily identify instances in which facilities have made physical or operating changes that could increase emissions and require them to revise their existing permits or obtain New Source Review permits. 
In the fourth industry—petroleum refining—EPA investigated compliance with both New Source Review requirements and regulations that require the monitoring of “fugitive emissions” leaking from valves, pumps, and other equipment. In the pulp and paper and wood products industries, EPA found widespread noncompliance. In the electric utility and petroleum refining industries, many of the companies investigated agreed to take remedial actions on the basis of EPA’s preliminary findings rather than actual findings of noncompliance. However, because EPA targeted facilities that were determined to be most likely to have violated their permits, the results of these intensive investigations may not represent conditions at other facilities. To identify industries on which EPA should focus its intensive investigations, agency staff analyzed industry-by-industry information on production levels, profits, and other factors that could help them identify industries with facilities that had increased production but may not have applied for new construction permits. Next, the EPA staff considered which facilities within those industries to focus on. They gathered and analyzed industry journals and other publicly available information about companies, as well as information in state agency files. The intensive investigations generally consisted of visiting the facility for 3 days to identify equipment, determine when it was installed, and evaluate the history of physical or other changes in the use of that equipment. EPA staff also obtained financial data for the facility to identify expenditures that may indicate an increase in production capacity. Afterwards, they spent from several months to a year analyzing this information to determine whether the facility violated Clean Air Act requirements. 
In the petroleum refinery industry, EPA also performed investigations to determine if facilities accurately reported the number of emissions leaks from valves, pumps, compressors, and other equipment. Federal regulations require refineries to monitor equipment for leaks on a routine basis and to fix leaking equipment. The failure to identify and fix these leaks can result in excess fugitive emissions of volatile organic compounds and other hazardous air pollutants. In the pulp and paper and wood products industries, EPA found widespread noncompliance. As shown in table 1, of the 96 facilities where EPA has completed investigations, 75 (about 78 percent) were not in compliance. Common types of violations included the failure to install pollution control devices (both industries), obtain New Source Review permits required by the Clean Air Act (both industries), meet emissions limits (pulp and paper), and perform required testing (pulp and paper). According to an EPA Air Enforcement Division official, EPA took a different approach in the electric utility and refining industries. The official told us that EPA initiated investigations of specific facilities and then met with company officials to present its preliminary findings. This official also told us that, in many cases, the companies agreed to take such actions as installing pollution control equipment at one or more of their facilities on the basis of EPA’s preliminary findings rather than risk an actual finding of noncompliance. As of February 2001, EPA had reached three agreements covering 20 facilities in the electric utility industry and three agreements covering 19 facilities in the petroleum refining industry. According to an EPA air enforcement official, all of these facilities agreed to pay fines and install the pollution control equipment they would have been required to install had EPA formally found them in noncompliance. 
In return, according to the official, EPA agreed to resolve possible past violations at the facilities. EPA estimates that a recent settlement with one electric utility company will require the company to install control equipment and take other steps to reduce its emissions of sulfur dioxide and nitrogen oxides by 400,000 and 100,000 tons per year, respectively. At 17 refineries investigated for leaks of volatile organic compounds, EPA found a larger proportion of leaking emissions points and a larger volume of leaks than the companies reported. Specifically, whereas the companies reported finding leaks in 1.3 percent of the potential emissions points, EPA’s investigators found leaks in 5 percent. EPA estimated that annual fugitive emissions from the 17 refineries investigated could be more than 6,000 tons per year greater than previously believed. By extrapolating these findings, EPA estimated that refineries may be emitting an additional 40,000 tons of volatile organic compounds each year because leaks are not properly identified and repaired promptly. Four States’ Reviews of Emissions Reports Varied According to EPA officials who oversee state permit programs, because most title V permit programs assess emissions fees, at least in part, on the basis of the total tonnage of pollutants emitted by major sources, most major sources are required to submit annual reports listing their total emissions. In the four states included in our review, all major sources are required to report annually on their total emissions in the previous calendar year; in addition, the states require synthetic minors to report periodically on their total emissions. While many of the largest emitters, such as coal-powered electric utilities, must continuously measure their emissions of certain pollutants, most facilities rely primarily on estimates or extrapolations from source tests to determine their emissions. 
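The extrapolation described above can be illustrated with the figures cited in this report. EPA's actual extrapolation method and the nationwide refinery count are not stated here, so the sketch below is a back-of-the-envelope reconstruction: it derives the average excess per investigated refinery and the refinery count that EPA's 40,000-ton nationwide estimate would imply under simple proportional scaling.

```python
# Back-of-the-envelope reconstruction of EPA's extrapolation, using only
# figures cited in this report. EPA's actual method and the nationwide
# refinery count are not given here; this shows the implied arithmetic.

investigated_refineries = 17
excess_tons_at_investigated = 6_000   # tons/year above reported levels
nationwide_excess_estimate = 40_000   # tons/year, EPA's extrapolated figure

# Average excess fugitive emissions per investigated refinery
excess_per_refinery = excess_tons_at_investigated / investigated_refineries

# Refinery count implied by scaling that average up to EPA's estimate
implied_refinery_count = nationwide_excess_estimate / excess_per_refinery

print(f"Excess per refinery: {excess_per_refinery:.0f} tons/year")
print(f"Implied refinery count: {implied_refinery_count:.0f}")
```

Under proportional scaling, the 40,000-ton figure implies roughly 113 refineries nationwide, each leaking about 350 tons per year more than reported.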
All four states in our study generally reviewed the facilities' emissions reports for arithmetic errors but varied in the extent to which they verified the accuracy of data on which the facilities based their calculations. While the states did not track the extent to which they discovered errors, officials in one state that performed detailed reviews estimated that between one-third and one-half of all reports had to be resubmitted. Large Facilities Rely Primarily on Indirect Methods to Determine Their Level of Emissions According to EPA officials, the method used by a facility in determining its emissions depends on a number of factors, including the type of facility and the raw materials used. Methods range from direct measures of emissions to estimates based on emissions factors, as outlined below:

- Under the 1990 Clean Air Act Amendments, certain types of facilities must directly measure their emissions using continuous-emissions-monitoring systems (hereafter called "monitors"). Monitors constantly measure pollutants released by a single point, such as a smokestack within a facility. For example, EPA requires most coal-burning electric utilities and certain other types of facilities to use monitors to measure their emissions of certain pollutants. State regulators also have discretion to require other air pollution sources to use monitors. For example, a Pennsylvania state agency official told us that Pennsylvania has required the use of 445 monitors in addition to 327 monitors required by federal regulations. EPA officials consider monitors to be the most reliable method for determining annual emissions.

- According to an EPA official, extrapolations from the short-term data derived from source tests can, in some cases, be used to estimate long-term emissions from the tested facility or from similar facilities. EPA officials told us that short-term source tests are considered less reliable than monitors for determining long-term emissions. The limitations of source tests include their short duration and facilities' common practice of performing the tests under optimal conditions, such as shortly after purchasing or servicing control equipment.

- Emissions factors are broad averages of the emissions of pollutants that can be expected, given the processes and/or pollution control equipment generally used in an industry. Facilities using emissions factors estimate their volume of emissions by multiplying their activity rate by the appropriate factor. For example, a facility that wants to estimate its carbon monoxide emissions from burning distillate oil in an industrial boiler would multiply the emission factor for that process (5 pounds of carbon monoxide for each thousand gallons of oil burned) by the quantity of fuel consumed. If the facility burned 3,000 gallons a day, its estimated carbon monoxide emissions would total 15 pounds a day. Because emissions factors represent average emissions, the level of emissions from some sources using them may be higher than the factor, while others may be lower. (App. I provides additional information on the development and reliability of emissions factors.)

According to EPA and state agency officials, facilities use emissions factors to make most emissions determinations for the purpose of emissions reports. EPA's nationwide data on emissions determinations made by both large and small facilities show that indirect methods that do not involve site-specific direct measurement were used in about 96 percent of all determinations, while direct measures, such as monitors and source tests, were used for about 4 percent of all determinations. Of the 96 percent involving indirect methods, emissions factors accounted for about 80 percent and other methods accounted for 16 percent. Similarly, most facilities in the states we visited relied on indirect methods.
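The boiler example above can be written out directly. The function below is a generic sketch of factor-based estimation; the function name and units are illustrative conveniences, not taken from EPA guidance, but the factor and throughput values are the ones cited in this report.

```python
def estimate_emissions(activity_rate, emission_factor):
    """Estimate emissions as activity rate x emission factor.

    activity_rate: units of activity (here, thousands of gallons burned/day)
    emission_factor: pounds of pollutant per unit of activity
    Returns estimated pounds of pollutant.
    """
    return activity_rate * emission_factor

# The report's example: burning distillate oil in an industrial boiler
# emits 5 lb of carbon monoxide per thousand gallons of oil burned.
CO_FACTOR = 5.0            # lb CO per 1,000 gallons
gallons_per_day = 3_000

daily_co = estimate_emissions(gallons_per_day / 1_000, CO_FACTOR)
print(f"Estimated CO emissions: {daily_co:.0f} lb/day")
```

This reproduces the 15 lb/day figure in the text; the same multiplication underlies most of the indirect determinations that EPA's data show make up about 80 percent of all emissions determinations.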
For example, about 63 percent of the emissions determinations from large industrial facilities located in North Carolina relied on EPA’s rated emissions factors. In Virginia, about 71 percent of the large facilities used emissions factors to determine emissions from at least one of their emission sources, while about 10 percent relied on monitors for at least one emission source. The percentage of emissions determinations made by a certain method may not equal the percentage of the total emissions that were quantified by that method. EPA does not track the quantities of emissions determined by each quantification method, but emissions data for electric utilities that must use monitors show that such facilities account for a large percentage of the emissions of certain pollutants. For example, while monitors are used for less than 5 percent of all emissions determinations nationwide, EPA’s data show that in 1998, electric utilities required to use monitors to measure their emissions of nitrogen oxides and sulfur dioxide accounted for about 24 percent of total national nitrogen oxide emissions and about 65 percent of total national sulfur dioxide emissions. States’ Methods of Verifying Emissions Reports Varied Each of the four states included in our study assesses major sources’ fees, at least in part, on the basis of the number of tons of pollution they emit. Each of the states requires similar information from facilities. The facilities typically provide detailed information on emissions from production lines or processes that are regulated in their permits. For example, one state requires facilities to provide, among other things, information on the raw materials they use, their operating schedule, the sulfur and energy content of fuels, the efficiency of pollution control devices, the method used to calculate emissions, and the tons of pollution emitted. 
In addition to providing the information described above, each state requires a company official to certify, under penalty of law, the report’s truth, accuracy, and completeness. Each of the four states uses the information contained in the reports to independently calculate each facility’s emissions and, in the three states where facilities provided estimates of total emissions, to compare the agency’s calculations of total emissions with those provided by the facility. (One state does not require facilities to estimate total emissions; instead, according to a state official, agency personnel use the information provided by the facilities to perform the calculations themselves.) All four states routinely compared the reports with those submitted in previous years to identify noteworthy changes that might indicate inaccurate reporting. While we found similarities in the states’ procedures for verifying the emissions reports in the four states, we also found variations, as shown below. In two states, the field inspector who performs the compliance inspection of a facility typically also reviews that facility’s emissions report. An official in one of these states explained that having the inspector with the greatest understanding of each facility review the report maximizes the agency’s ability to identify questionable data. In contrast, a third state assigns all reports for a certain facility type to one inspector; thus, the inspector reviewing the emissions report for a facility may not be the person who performed compliance inspections at that facility. In the fourth state, the personnel responsible for developing the state’s emissions inventory, rather than those who inspect facilities, review the reports. The state agencies also vary in the extent to which they seek to verify the data that facilities submit on the material they used or the removal efficiency of their control equipment. 
For example, officials in one state told us that they typically check for the use of appropriate emissions factors and pollution control efficiencies and review previous inspection reports and other relevant documents to ensure that facilities account for all emissions points. Alternatively, regulators in another state told us that they simply rely on facilities to provide accurate data. None of the four states maintain data on the type or number of inaccuracies found during their efforts to verify emissions reports. Regulators in all four states told us that those responsible for reviewing the reports contact the facility directly to resolve any problems or inaccuracies identified through the verification process. After resolving any questions about the report, the facility revises its statement as necessary. For example, officials in one state told us that they consider problems with the reports to be inadvertent, and that the inspector performing the review works with the facility to resolve the differences. Because state agency officials were unable to provide comprehensive data on the type or number of inaccuracies found, we asked them to estimate the proportion of all the reports submitted that had significant problems. One state provided a statewide estimate. Officials in this state, which performed detailed reviews of the reports, said that one-third to one-half of all its reports required corrections and resubmittal of the report by the facility. Officials in another state said that the agency’s regional offices verified the reports and that the thoroughness of the reviews varied across the regional offices. The regional office performing the most detailed reviews estimated that 80 percent or more of the reports had problems that required additional consultation with the facility, while the regional office performing the least thorough reviews found such problems with 10 percent of the reports. 
An official in the third state told us that there are few problems or missing data in reports from facilities that had reported previously, but that almost all reports from facilities reporting for the first or second time required follow-up because of incomplete data. Officials in the fourth state said that they relied on facilities to provide accurate data. EPA Plans to Improve Oversight of Compliance but Not Verification of Emissions Reports EPA has undertaken or is planning three initiatives to improve its oversight of compliance with the Clean Air Act but does not plan to enhance its oversight of state processes for verifying the accuracy of emissions reports. With respect to compliance, first, EPA developed and issued guidance to state regulators on the types of information that major sources must maintain to demonstrate their compliance with permits. Second, EPA is revising its compliance-monitoring strategy, which will grant states greater flexibility in their approaches to inspections and will encourage regulators to obtain more site-specific emissions data through the increased use of direct measurements via source tests. Third, EPA is training regional office staff and states to conduct intensive investigations. With respect to the emissions reports, EPA officials in headquarters and the two regions we visited all told us that EPA relies on the states to review these reports. At the same time, EPA has encouraged its regions to audit state programs for calculating emissions fees, which often depend in part on the amounts of emissions, but has not asked its regions specifically to evaluate states’ processes for verifying emissions reports. The two EPA regional offices we visited perform little oversight of their states’ verification processes. 
EPA Is Working to Improve Data Quality and Facility Monitoring EPA’s first initiative, in September 1998, was issuing guidance on the type of information that major sources must periodically gather and maintain to demonstrate their compliance with applicable air regulations. EPA sought to clarify its policies on self-monitoring by facilities and to encourage state agencies to consistently interpret these policies. According to EPA, the definition of “adequate monitoring” had been subject to interpretation, and the level and type of monitoring that state authorities required were not consistent. EPA’s guidance document states that facilities must maintain reliable, timely, and representative data on the status of their compliance. The document further states that the use of an emissions factor does not constitute adequate monitoring unless the factor was developed directly from the unit in question. In addition, the guidance encourages state authorities to require the use of monitors and indirect monitoring derived from periodic source tests. The implementation of EPA’s guidance has been suspended because of an April 14, 2000, ruling by the U.S. Court of Appeals. The court held that, in issuing the guidance, EPA, in effect, amended its monitoring regulation without complying with the necessary rule-making procedures. EPA did not appeal the decision and is currently evaluating other regulatory options to meet the same objectives. EPA’s second initiative, in March 2000, was issuing a draft national policy for state regulators to use in ensuring compliance with the act. This policy was developed in response to two reports that found problems with EPA’s air enforcement program. EPA’s Inspector General reported in 1998 that no one within the enforcement program was responsible for the oversight and implementation of the agency’s Clean Air Act compliance-monitoring program. 
The report described inconsistent implementation and disregard for agency directives as diminishing the effectiveness of the air enforcement program. The report also described cases where inspections conducted by state regulators did not meet EPA’s definition of a “routine inspection” or were documented poorly. In addition, a 1999 study commissioned by EPA found that most EPA regional offices did not adhere to the agency’s compliance-monitoring strategy. EPA’s draft policy states that it would, among other things, provide regulators with increased flexibility in the types of inspections they conduct and require sources with no better means of determining their emissions rates to conduct source tests. EPA’s compliance data administrator told us that the draft policy would also require states to provide EPA with information on annual compliance certifications and semiannual compliance-monitoring reports that are submitted by major sources. EPA’s Air Enforcement Division officials said that they were working with representatives of state agencies to revise the draft and that the agencies have expressed concerns over the document’s provisions for the increased use of source tests. EPA has revised the document to address these concerns, but the issue remains unresolved. In addition, EPA’s Air Enforcement Division officials said that they have not determined whether the final strategy will take the form of guidance (as originally proposed) or an administrative rule. They told us they plan to issue the document in April 2001. Finally, EPA’s Air Enforcement Division officials noted that EPA is encouraging personnel in its regional offices and the states to conduct intensive investigations to ensure compliance with New Source Review requirements. 
EPA Does Not Plan to Evaluate States’ Processes for Verifying Emissions Reports An EPA official who oversees state permit programs stated that the agency has not taken or proposed actions specifically intended to improve the accuracy of emissions reports from major sources, although one initiative has the potential to provide information on states’ review processes. In 1998, EPA encouraged its regional offices to review state permit authorities to determine whether, among other things, they were correctly implementing their fee programs and collecting sufficient fees to cover the costs of administering their title V permit programs. EPA developed and distributed to the regions an audit protocol for evaluating state programs. Although the audit protocol does not ask regions to determine whether permit programs have adequate controls in place to verify emissions reports, it does ask them to examine the documentation of how the annual fees are determined and to audit pollution sources’ bills, which most permit authorities—including those in all four of the states where we worked—based, at least in part, on each facility’s reported level of total emissions. An EPA official who oversees state permit programs told us that regions have full discretion in determining whether they use the audit protocol in evaluating the permit programs. An official in the Air Protection Division of EPA’s Philadelphia office stated that the regional office has used the audit protocol to review three permit programs in that region. One of these reviews found that the state audited did not verify emissions reports. Officials in the Environmental Accountability Division of EPA’s Atlanta office told us that they were not using the audit protocol in reviewing programs in their region or seeking to evaluate the processes in place for verifying emissions reports. 
While EPA does not plan to evaluate states’ processes for verifying emissions reports, it does check the quality of emissions data submitted by states for developing emissions inventories. This includes checking for data errors that could have affected emissions values, as well as, in some cases, comparing estimates with those submitted in previous years and with those from other facilities in the same industry. In addition, EPA posts facility-specific emissions data on the Internet for review by outside parties. Conclusion EPA performs limited oversight of states’ processes for verifying the accuracy of emissions reports submitted by major sources. EPA’s data show that most emissions determinations are based on generic emissions factors. While EPA allows facilities to estimate their emissions in this manner, EPA officials generally consider direct methods to be more reliable. The accuracy of these reports is important because they influence (1) the financing of states’ regulatory programs through fees and (2) the development of emissions inventories, which, in turn, assist regulators in developing control strategies and establishing permit limits. Furthermore, steps taken to assess the accuracy of these reports—such as more thoroughly reviewing the supporting information—could provide benefits in terms of compliance with Clean Air Act requirements. For example, a more thorough review of the information underlying a facility’s emissions reports or a more systematic comparison of these reports over a period of time could identify indications of increased emissions. Such indications could, in turn, trigger a review of compliance with New Source Review requirements, an area where EPA found widespread noncompliance in four industries. In the four states included in our review, the approaches taken to verify the accuracy of the reports varied significantly. The state that performed the most detailed reviews found widespread inaccuracies. 
However, EPA’s oversight of these processes is limited; the agency had audited only three permit authorities in the two EPA regions we visited and found that one of the three authorities had no process in place for verifying the accuracy of the emissions reports. While taking steps to improve its overall compliance-monitoring strategy, EPA does not plan to evaluate state processes for verifying emissions reports from large facilities. Recommendation for Executive Action To help ensure the accuracy of large facilities’ emissions reports, we recommend that the Administrator of EPA evaluate states’ programs to determine whether they have adequate mechanisms in place for verifying the accuracy of emissions reports. If the results of these reviews identify inadequacies, the Administrator should work with the states to improve their processes in order to provide reasonable assurance that facility reports are subject to thorough review. Agency Comments We provided EPA with a draft of this report for review and comment and received a letter from the Acting Assistant Administrator for Air and Radiation. (App. II contains the text of his letter, along with our detailed responses; in addition, EPA provided us with several clarifications, which we incorporated where appropriate.) 
The Acting Assistant Administrator questioned the intent of our recommendation, stating that:

- if the intent is to improve the accuracy of emissions reports to ensure the sufficiency of fees that states collect to support their title V permit programs, EPA disagrees and believes the recommendation is unnecessary because the states can simply raise the fee rate (the fee per ton of emissions) if fee revenues prove insufficient;

- if the intent is to improve the emissions inventories used in state planning and in developing national inventories, EPA concurs; and

- if the intent is to improve compliance with applicable permit requirements, EPA disagrees because emissions reports are not intended to determine compliance with permit requirements.

The intent of our recommendation, as stated in the draft report, is to help ensure the accuracy of emissions reports because of the role that the reports actually play or can play in all three areas: (1) setting fees to cover the costs of state programs; (2) developing state and national inventories and, concomitantly, strategies for further controlling emissions; and (3) potentially alerting state regulators to emissions levels that suggest noncompliance with operating permits or other air quality requirements. We agree that states facing a shortfall in fee revenues could simply increase the rate applied to all sources to raise aggregate fee revenue, but we do not agree that the accuracy of emissions reports used for fees is a secondary concern. Increasing the fees levied on facilities that accurately report their emissions as well as on those that underreport (who would continue to pay proportionately less than warranted on the basis of their relative contribution to total emissions) could lead to inequitable results. While states have latitude in their approach to collecting fees, most of them rely, at least in part, on each facility's level of reported emissions in calculating fees.
Thus, a facility that reports more emissions will generally pay more in fees. Especially in the absence of state oversight, some facilities could view this system as an incentive to underreport their emissions and thus pay lower fees. Inconsistent or limited review of emissions reports reduces regulators’ ability to identify underreporting and sends the signal that facilities face little chance of detection if they choose to underreport. To the extent that any facility underreports its emissions and thus pays less than its fair share of title V fees, other facilities will pay more than their fair share. In the short run, this raises questions about the equity of the fees being charged. In the long run, this possibility—unless counteracted—could lead to more widespread underreporting and undermine the system of emissions reporting. In addition to helping ensure that emissions fees are collected equitably, more thorough state reviews could also help improve emissions inventories at the state and national levels. Finally, more thorough reviews could help EPA and state compliance efforts. Specifically, through its lengthy and resource-intensive investigations, EPA identified widespread potential violations of New Source Review requirements in all four of the industries it reviewed. We believe that more thorough reviews of facilities' emissions reports might have provided indications of such problems much earlier and at much less cost. Furthermore, emissions reports often contain information not only on total emissions but also on levels of production and raw material use. Many of the title V permits we reviewed had provisions that limit production levels as a surrogate for total emissions. EPA and state enforcement officials told us that reviewing this information would help inspectors evaluate a facility's overall compliance status. 
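The equity point can be made concrete with a hypothetical two-facility example: if one facility underreports and the state raises the per-ton rate to recover the same total revenue, the honest facility absorbs part of the underreporter's share. All tonnages and the revenue target below are invented for illustration; states' actual fee formulas vary.

```python
# Hypothetical illustration of the fee-equity argument. All numbers are
# invented; the point is only that constant total revenue plus
# underreporting shifts the fee burden onto accurate reporters.

REQUIRED_REVENUE = 100_000.0    # state program cost to recover ($/year)

def fees(reported_tons, required_revenue):
    """Set the per-ton rate so fees on reported tons cover program costs."""
    rate = required_revenue / sum(reported_tons.values())
    return {f: tons * rate for f, tons in reported_tons.items()}

# Both facilities actually emit 500 tons/year.
# Case 1: both report accurately.
honest = fees({"Facility A": 500.0, "Facility B": 500.0}, REQUIRED_REVENUE)

# Case 2: Facility B underreports by 40 percent; the state raises the
# per-ton rate to keep total revenue constant.
skewed = fees({"Facility A": 500.0, "Facility B": 300.0}, REQUIRED_REVENUE)

print(honest)   # each pays an equal share
print(skewed)   # Facility A now pays more despite equal actual emissions
```

In the second case the rate rises from $100 to $125 per ton, so the accurately reporting facility pays $62,500 while the underreporter pays $37,500, even though both emit the same amount.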
For example, as noted in our report, two of the states assign the same field inspector responsible for inspecting the facility for compliance to review the facility’s emissions report. This practice enhances the potential that any discrepancies between emissions reports and the results of compliance inspections will be detected. Scope and Methodology To fulfill our objectives, we interviewed officials from, and reviewed studies and other documents prepared by, EPA’s headquarters and regional offices and four states. The EPA headquarters offices were the Office of Air Quality Planning and Standards and the Office of Enforcement and Compliance Assurance. The two EPA regional offices were Region III (headquartered in Philadelphia), which generally covers the mid-Atlantic region, and Region IV (headquartered in Atlanta), which generally covers the Southeast. The states were Pennsylvania and Virginia in EPA’s Region III and Kentucky and North Carolina in EPA’s Region IV. The conditions in these two regional offices and four states may not represent the conditions in other regional offices and states. In addition, we accompanied EPA or state officials on their routine inspections of six facilities representing different industries—a chemical manufacturer, a diesel engine manufacturer, a fiberboard-manufacturing plant, a municipal waste incinerator, an absorbent material maker, and a steel mini mill. The conditions at these six facilities may not represent the conditions at other regulated facilities. In addition, as agreed with your office, we do not name the facilities in our report. We did not independently validate the data provided by EPA or the states. We conducted our review from November 1999 through March 2001 in accordance with generally accepted government auditing standards. As arranged with your office, we plan no further distribution of this report for 30 days from the date of the report unless you publicly announce its contents earlier. 
At that time, we will send copies to Senator Robert C. Smith and Senator Harry Reid in their respective capacities as Chairman and Ranking Member, Senate Committee on Environment and Public Works; Representative W.J. Tauzin and Representative John D. Dingell in their respective capacities as Chairman and Ranking Minority Member, House Committee on Energy and Commerce; Representative Dan Burton, Chairman of the House Committee on Government Reform; other interested Members of Congress; the Honorable Christine Todd Whitman, Administrator of EPA; the Honorable Mitchell E. Daniels, Jr., Director of the Office of Management and Budget; the governors of the four states we visited; and other interested parties. We will make copies available to others upon request. If you have any questions about this report, please contact me or David Marwick at (202) 512-3841. Key contributors to this report were Philip L. Bartholomew, James R. Beusse, Michael Hix, Karen Keegan, and William F. McGee. Appendix I: Development and Reliability of Air Emissions Factors Regulators and regulated facilities use air emissions factors to estimate emissions from a variety of sources. Emissions factors are averages of the amount of emissions produced from a given process with given inputs, for example, the quantity of carbon monoxide generated per unit of oil burned in an industrial boiler. The Environmental Protection Agency (EPA) publishes information on air emissions factors. Regulators and industry use air emissions factors to assist in developing emissions inventories and control strategies, and for other purposes. For example, an EPA official told us that facilities use emissions factors to determine whether their estimated annual emissions place them in the major source category. The reliability of the emissions factors varies widely. EPA rates the reliability of emissions factors on a scale of A (excellent) to E (poor). 
These ratings, in turn, reflect four underlying criteria: the estimated reliability of the test data used, the randomness of the facilities from which the data were derived, the variability of emissions levels across the sources tested, and the number of facilities for which test data are available. Thus, the highest (A-rated) factors are those derived from high-quality data taken from many randomly chosen facilities with low variability among the sources. Conversely, the lowest (E-rated) factors are those derived from low-quality test data, when doubts exist regarding the randomness of the test facilities used, and when there is wide variability among the sources tested. As of October 1999, EPA had rated 12,390 factors in its compilation of emissions factors. As shown in table 2, 20 percent of the factors were rated “above average” or “excellent,” while 46 percent were rated “below average” or “poor.” Along with the rated factors, EPA maintains information on approximately 4,200 unrated factors (25 percent). In its compilation of emissions factors, EPA describes problems with the use of such factors to estimate emissions for individual facilities. Each factor is generally assumed to represent the long-term average for all facilities in a source category but may not reflect the variations within a category because of different processes and control systems used. The underlying data from which emissions factors are derived can vary by an order of magnitude or more. For example, the emission factor for petroleum conversion at oil refineries—45 pounds of particulate matter per thousand barrels of feedstock—is based on test results ranging from 7 to 150 pounds. EPA assigned this factor a B (above average) rating. Thus, facilities’ actual emissions can, and do, vary substantially from the published factors. Appendix II: Comments From the Environmental Protection Agency The following are GAO’s comments on the Environmental Protection Agency’s letter dated March 9, 2001. 
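The variability described in the petroleum-conversion example can be quantified by applying both the rated factor and its underlying test range to the same throughput. The factor and test range below are the ones cited in this appendix; the annual throughput is invented for illustration.

```python
# The petroleum-conversion factor cited in this appendix: 45 lb of
# particulate matter per 1,000 barrels of feedstock, B-rated, derived
# from test results ranging from 7 to 150 lb.
FACTOR = 45.0                 # lb PM per 1,000 barrels
TEST_LOW, TEST_HIGH = 7.0, 150.0

# Hypothetical refinery throughput, for illustration only
thousand_barrels_per_year = 10_000.0

estimate = FACTOR * thousand_barrels_per_year
low = TEST_LOW * thousand_barrels_per_year
high = TEST_HIGH * thousand_barrels_per_year

print(f"Factor-based estimate: {estimate:,.0f} lb/year")
print(f"Range implied by underlying tests: {low:,.0f} to {high:,.0f} lb/year")
```

Even for this above-average (B-rated) factor, the underlying test data span more than a twentyfold range, so a facility's actual emissions can depart substantially from its factor-based estimate.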
GAO’s Comments 1. EPA notes that permit authorities are required to charge fees that cover the costs of their regulatory programs and describes the accuracy of emissions reports used in that process as a secondary concern because state agencies could correct a shortfall in fees that results from the underreporting of emissions by raising the fee per ton of emissions. We believe that the accuracy of the emissions reports is integral, initially, to establishing an equitable fee structure and, later, to ensuring that each regulated entity is charged only its fair share of the overall fees. Emissions reports support both processes, and the steps we recommend are intended to help ensure the accuracy of these emissions reports. While EPA asserts that states should have discretion in how they assess emissions data to establish fees, it has already initiated oversight of these state efforts. In 1998, EPA distributed to its regional offices an “audit protocol” that they could use to monitor whether the permit agencies in their region had, among other things, established a proper fee structure and were submitting appropriate bills to regulated entities. As of fall 2000, only one of the two regions we visited had chosen to use the protocol. When EPA regional offices use the protocol to evaluate state programs, they could implement our recommendation by amending the protocol to include evaluating the states’ processes for verifying emissions reports. 2. We recognize that EPA performs quality assurance on data provided by states and have revised the report to acknowledge this. However, we continue to believe that a more thorough review of these reports at the state level could lead to more reliable local, state, and national emissions inventories. 3. We believe that thorough reviews of the reports could improve EPA’s and the states’ ability to identify noncompliance with New Source Review requirements and the terms of title V permits. 
While EPA states that these reports are seldom, if ever, used for determining compliance with title V permits, we believe that they contain information that could assist in doing so. Each state we visited said that the reports contain information on production levels and, in most cases, total emissions. Many of the title V permits we reviewed have provisions that limit production levels. Furthermore, the EPA and state enforcement officials we spoke with said that reviewing the emissions reports could help in evaluating a facility’s compliance status. Also, as our report notes, two of the states provide for a review of a facility’s emissions report by the field inspector responsible for that facility. In that way, the review can be performed by the individual most knowledgeable about the facility and therefore best positioned to identify any irregularity in the report.

4. EPA misstates our conclusion. We concluded that more thorough reviews of the supporting information contained in emissions reports “could provide benefits in terms of compliance with Clean Air Act requirements,” not that “improving the accuracy of emissions reports will improve compliance with Clean Air Act requirements” (emphasis added). We view this as an important distinction.

5. As stated in comment 3, we believe that reviewing the information contained in the emissions reports could assist in identifying noncompliance with New Source Review and title V permits.
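The kind of review described above, comparing a facility's reported production levels against the production limits in its title V permit, can be sketched in a few lines (the facilities, limits, and reported values below are invented examples, not data from the report):

```python
# Minimal sketch of screening emissions reports against permit provisions
# that limit production levels. All names and figures are hypothetical.

def exceedances(permit_limits, reported_production):
    """Return facilities whose reported production exceeds their permit limit."""
    return [f for f, prod in reported_production.items()
            if prod > permit_limits.get(f, float("inf"))]

permit_limits = {"Mill A": 500_000, "Refinery B": 900_000}  # units/year
reported = {"Mill A": 480_000, "Refinery B": 950_000}

print(exceedances(permit_limits, reported))  # ['Refinery B']
```

In practice a field inspector would follow such a flag with a review of the supporting information, but the screen itself requires only data already in the emissions report.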
The Environmental Protection Agency (EPA) performs limited oversight of states’ processes for verifying the accuracy of large industrial facilities’ emissions reports. EPA’s data show that most emissions determinations from large sources are based on generic emissions factors. Although EPA allows facilities to estimate their emissions in this manner, EPA officials generally consider direct methods to be more reliable. The accuracy of these reports is important because they influence (1) the financing of states’ regulatory programs through fees and (2) the development of emissions inventories, which, in turn, help regulators to develop control strategies and establish permit limits. Furthermore, steps taken to assess the accuracy of these reports, such as more thoroughly reviewing the supporting information, could improve compliance with Clean Air Act requirements. For example, a more thorough review of the information underlying a facility’s emissions reports, or a more systematic comparison of these reports over time, could identify increased emissions. Such indications could, in turn, trigger a review of compliance with New Source Review requirements, an area in which EPA found widespread noncompliance in four industries. Of the four states that GAO reviewed, those that had done the most detailed reviews found widespread inaccuracies. Although it is taking steps to improve its overall compliance monitoring strategy, EPA does not plan to evaluate states’ processes for verifying emissions reports from large facilities.
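The systematic comparison of reports over time suggested above could be as simple as flagging facilities whose reported emissions rose by more than a chosen percentage between reporting years. A minimal sketch (facility names, tonnages, and the 20 percent threshold are hypothetical; a real review would also examine the supporting data behind any flagged report):

```python
# Sketch of year-over-year screening of emissions reports: flag any
# facility whose reported emissions grew by more than `threshold`.
# All data and the threshold are invented example values.

def flag_increases(prior, current, threshold=0.20):
    """Return facilities whose reported emissions grew by more than threshold."""
    flagged = []
    for facility, tons in current.items():
        base = prior.get(facility)
        if base and (tons - base) / base > threshold:
            flagged.append(facility)
    return flagged

prior = {"Refinery A": 120.0, "Plant B": 80.0, "Mill C": 45.0}    # tons/year
current = {"Refinery A": 125.0, "Plant B": 110.0, "Mill C": 44.0}

print(flag_increases(prior, current))  # ['Plant B']  (80 -> 110, +37.5%)
```

A flag of this kind would not establish noncompliance by itself; it would indicate which facilities' New Source Review status merits a closer look.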
Background

PEPFAR Leadership and Implementation
The Department of State’s Office of the U.S. Global AIDS Coordinator (OGAC) establishes overall PEPFAR policy and program strategies, coordinates PEPFAR programs, and allocates resources to several U.S. agencies to implement PEPFAR activities. These agencies (referred to in this report as implementing agencies) include, among others, the U.S. Agency for International Development (USAID) and the U.S. Department of Health and Human Services’ (HHS) Centers for Disease Control and Prevention (CDC). OGAC executes its coordinating role in part by providing implementing agencies, both in the United States and in PEPFAR countries, annual guidance on reporting program results and on planning. In addition, OGAC collaborates with implementing agency officials through technical working groups on a range of issues. OGAC also disseminates weekly updates to implementing agency staff in PEPFAR countries regarding topics such as deadlines and changes to official guidance. USAID and CDC, which oversee most PEPFAR-funded programs, are among PEPFAR’s primary implementing agencies. Of almost $16.5 billion obligated for HIV/AIDS activities in fiscal years 2004 through 2009, $9.6 billion was obligated by USAID and $6.4 billion by HHS. In each partner country, teams of implementing agency officials (PEPFAR country teams) jointly develop country operational plans (COP) for use in coordinating, planning, reporting, and funding PEPFAR programs. The COP is the vehicle for documenting annual investments in HIV/AIDS and serves as the basis for approving, allocating, tracking, and notifying Congress of budgets and targets.

U.S. Policy Documents Endorsing PEPFAR Alignment or Country Ownership
2008 Leadership Act.
The 2008 Leadership Act, PEPFAR’s reauthorizing legislation, cites improving harmonization of U.S. efforts with the national strategies of partner governments and other public and private entities as an element in strengthening and enhancing U.S. leadership and the effectiveness of the U.S. response to HIV/AIDS. The act requires the President to report to Congress on OGAC’s strategy. The act specifies that the report must discuss many elements of the strategy, including a description of the strategy to promote harmonization of U.S. assistance with that of other international, national, and private actors and to address existing challenges in harmonization and alignment. The act also requires the President to report on efforts to improve harmonization in terms of relevant executive branch agencies, coordination with other public and private entities, and coordination with partner countries’ national strategic plans.

Paris Declaration. In 2005, 133 countries and territories, including the United States, and 28 participating international organizations endorsed the Paris Declaration on Aid Effectiveness, an international agreement committing countries to increase efforts in supporting country ownership, harmonization, alignment, results, and mutual accountability. Specifically, donors committed to taking a number of steps to implement the principles of the Paris Declaration: to respect partner country leadership and help strengthen partner countries’ capacity to exercise it; base support on national strategies; implement common arrangements for reporting to partner governments on donor activities and aid flows; harmonize monitoring and reporting requirements; and provide timely, transparent, and comprehensive information on aid flows to enable partner authorities to present comprehensive budget reports to their legislatures and citizens.

Three Ones.
In 2004, key donors, including the United States, reaffirmed their commitment to strengthening national HIV/AIDS responses led by the affected countries themselves and endorsed the “Three Ones” principles. These principles aim to achieve the most effective and efficient use of resources and greater collaboration among donors in order to avoid duplication and fragmentation. Specifically, the donors agreed to base support on one HIV/AIDS action framework that provides the basis for coordinating the work of all partners, one national AIDS coordinating authority with a broad multisectoral mandate, and one country-level monitoring and evaluation system in each country.

PEPFAR 5-year strategy. PEPFAR’s updated 5-year strategy, released in 2009 as mandated by the 2008 Leadership Act, highlights alignment with national strategies as a key component of promoting sustainability of U.S.-supported HIV/AIDS efforts through partner country ownership. In the first 5 years of the program, PEPFAR focused on establishing and scaling up prevention, care, and treatment programs. During the second 5-year phase, PEPFAR will focus on transitioning from an emergency response to promotion of sustainable country programs. PEPFAR’s emphasis on country ownership includes ensuring that the services PEPFAR supports are aligned with the national plans of partner governments and integrated with existing health care delivery systems. The new 5-year strategy acknowledges that during PEPFAR’s first phase, implementation did not always fully complement existing national structures and some PEPFAR programs and services were established apart from existing health care delivery systems. The new strategy affirms the principles of the Paris Declaration and states that PEPFAR is working with its multilateral and bilateral partners to align responses and support countries in achieving their nationally defined HIV/AIDS goals.

PEPFAR Partnership Frameworks
The Leadership Act authorized the U.S.
government to establish partnership frameworks with host countries to promote a more sustainable approach to combating HIV/AIDS, characterized by strengthened country capacity, ownership, and leadership. Partnership frameworks are 5-year joint strategic agreements for cooperation between the U.S. government and partner governments to combat HIV/AIDS in the partner country through technical assistance, support for service delivery, policy reform, and coordinated funding commitments. PEPFAR guidance states that the partnership framework process should involve significant collaboration with the partner government and may also include active participation from other key partners from civil society, community-based and faith-based organizations, the private sector, other bilateral and multilateral partners, and international organizations. PEPFAR guidance further states that a key objective of the partnership framework is to ensure that PEPFAR programs reflect country ownership, with partner governments at the center of decision making, leadership, and management of their HIV/AIDS programs and national health systems. The expectation is that at the end of the partnership framework, in addition to achieving results in HIV/AIDS prevention, treatment, and care, partner country governments will be better positioned to assume primary responsibility for the national responses to HIV/AIDS in terms of management, strategic direction, performance monitoring, decision making, coordination, and, where possible, funding support and service delivery. The partnership framework is meant to support government coordination of different funding streams under the framework of a national strategy. The partnership framework should be fully in line with the national HIV/AIDS plan of the country and emphasize sustainable programs with increased country decision-making authority and leadership. 
PEPFAR guidance defines the partnership framework as consisting of two interrelated documents: the partnership framework and the partnership framework implementation plan. The partnership framework is to focus on establishing a collaborative relationship, negotiating the overarching 5-year goals of the framework and the commitments of each party, and setting forth these agreements in a concise signed document. The partnership framework implementation plan is to include a more detailed description of the approach to supporting increased country ownership, baseline data, specific strategies for achieving the 5-year goals and objectives, and a monitoring and evaluation plan.

PEPFAR Country Operational Plans
The COP is used for planning annual U.S. investments in HIV/AIDS and approving annual U.S. bilateral HIV/AIDS funding, and it serves as the annual work plan for PEPFAR activities. The COP database, which houses all COP information submitted by PEPFAR country teams, provides information for funding review and approval and serves as the basis for congressional notification, allocation, and tracking of budget and targets. According to OGAC, PEPFAR country teams in 31 countries completed COPs for fiscal year 2010. In addition, three regions (the Caribbean, Central America, and Central Asia) developed and submitted regional operational plans for fiscal year 2010. The COP development process involves interagency coordination as well as consultation with other PEPFAR stakeholders. The U.S. Ambassador leads the development of COPs, which are created through a collaborative process involving PEPFAR country teams. The COP development process also involves collaboration with country and international partners in an annual review and planning process. According to PEPFAR COP guidance, developing an annual COP provides an opportunity to bring the U.S.
country team together with partner government authorities, multilateral development partners, and civil society as an essential aspect of effective planning, leveraging resources, and fostering sustainability of programs. The draft COPs are ultimately reviewed by interagency headquarters teams, which make recommendations to OGAC regarding final review and approval. PEPFAR 2010 COP guidance notes that PEPFAR programs should be fully in keeping with developing countries’ national strategies and that PEPFAR country teams should identify areas of partner countries’ national HIV/AIDS programs for U.S. government investment and support. The guidance also states that the U.S. government is firmly committed to the principles of alignment with national programs, including alignment with other international partners.

National HIV/AIDS Strategies
At the 2001 United Nations General Assembly Special Session on HIV/AIDS (UNGASS), member countries committed to developing multisectoral HIV/AIDS strategies and finance plans. In our four case study countries—Cambodia, Malawi, Uganda, and Vietnam—the multisectoral strategy serves as a broad multiyear outline of each country’s HIV/AIDS prevention, treatment, and care objectives. While a national commission may be the lead coordinating authority for HIV/AIDS policy and programs, the development and implementation of such a strategy can also involve many government ministries and offices. Additional strategy documents, such as sector-specific strategies and HIV program-specific strategies or action plans, can also provide further guidance for national programs to combat HIV/AIDS (see table 1 for information on national HIV/AIDS strategies in four countries). Other government ministries and agencies, such as the Ministry of Health, may also be charged with implementing sector- or program-specific strategies and programs.
PEPFAR Programs Generally Support Partner Countries’ National HIV/AIDS Strategies
PEPFAR activities generally support the goals laid out in partner countries’ national HIV/AIDS strategies. Our analysis of PEPFAR documents and national strategies and discussions with PEPFAR country teams in the four countries we visited showed overall alignment between PEPFAR activities and the national strategy goals. In addition, PEPFAR officials—including officials at OGAC, USAID, and CDC in headquarters and in four countries—as well as partner government ministry officials, other HIV/AIDS donors, and civil society representatives whom we interviewed also said that PEPFAR activities generally support the goals and objectives set forth in national strategies. According to PEPFAR officials, a number of factors may influence the degree to which PEPFAR activities align with national strategy goals. As a result, PEPFAR may support activities to achieve some, but not all, goals and objectives outlined in national strategies. Conversely, PEPFAR may support activities not mentioned in the national HIV/AIDS strategy but that are addressed in relevant sector- or program-specific strategies. PEPFAR country teams have engaged in various efforts to help ensure that PEPFAR activities support the achievement of national strategy goals, including assisting in developing national strategies, participating in formal and informal communication and coordination meetings, engaging regularly with partner country governments during the COP development process, and developing new partnership frameworks.

PEPFAR and Country Documents and Statements by PEPFAR and HIV/AIDS Stakeholders Indicate Alignment of Program Activities with National HIV/AIDS Goals
Our analysis shows that PEPFAR activities described in the 2010 COPs for Cambodia, Malawi, Uganda, and Vietnam directly or partially address most of the goals and objectives outlined in the countries’ national HIV/AIDS strategies. (See table 2.)
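The coverage analysis summarized in table 2 amounts to coding each national strategy goal by whether the COP activity descriptions address it directly, partially, or not at all, and then tallying the codings. A minimal sketch of that tally (the goals and codings below are invented examples, not the report's data):

```python
# Hypothetical sketch of the goal-coverage tally behind an analysis like
# table 2. Each national strategy goal is coded by how the COP activity
# descriptions address it; the goals and codings here are invented.

from collections import Counter

codings = {
    "Reduce new infections": "direct",
    "Expand treatment access": "direct",
    "Strengthen health systems": "partial",
    "Workplace HIV programs": "not addressed",  # e.g., covered by other donors
}

summary = Counter(codings.values())
print(dict(summary))  # {'direct': 2, 'partial': 1, 'not addressed': 1}
```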
Statements and analysis by a number of PEPFAR and HIV/AIDS stakeholders further indicate that PEPFAR program activities are aligned with partner countries’ HIV/AIDS strategies. PEPFAR officials—including officials at OGAC, USAID, CDC, and HHS—and other HIV/AIDS stakeholders and experts operating at a global level, as well as partner government ministry officials, other donors, civil society representatives, and PEPFAR officials in four countries, told us that PEPFAR activities are aligned with the goals and objectives outlined in partner countries’ national strategies and support the overall national program. Moreover, a 2007 Institute of Medicine (IOM) review of PEPFAR in the 15 focus countries also found that PEPFAR programs were generally congruent with these countries’ national strategies. IOM reported that partner government representatives in the 13 countries it visited generally expressed satisfaction with the level of alignment between PEPFAR and national strategies.

PEPFAR Officials Noted Several Factors Influencing Alignment of PEPFAR Activities with National Strategy Goals
Several factors may influence the degree to which PEPFAR activities align with national HIV/AIDS strategy goals, according to PEPFAR officials.

Other partner activities. PEPFAR country programs are planned with consideration of other donors’ and groups’ activities in the countries, and therefore PEPFAR activities may not address all national strategy goals. In many PEPFAR countries, a number of other bilateral and multilateral development partners also fund and implement programs to support the national program. Country team officials noted that in planning PEPFAR programs, they coordinate with other partners so that PEPFAR and partner activities will complement, rather than duplicate, one another and together support the national program.
For example, the PEPFAR Malawi team explained that although the Malawi national strategy contains a goal of expanding workplace programs on HIV and AIDS in the public and private sectors and civil society, the 2010 PEPFAR Malawi COP does not include activities that directly address this goal because other donors and groups are implementing programs that address it. Size of PEPFAR program. The portion of a national strategy supported by PEPFAR activities also depends in part on the size of the PEPFAR program in that country relative to other donors’ activities in the country. For example, OGAC and country team officials told us that PEPFAR is more likely to cover larger portions of the national strategy in former focus countries where PEPFAR is generally the largest donor of HIV/AIDS funds. This corresponds with our finding that in the 2010 COPs for former focus countries Uganda and Vietnam, where U.S. funding makes up a large share of the national HIV/AIDS response—75 percent in Uganda and 59 percent in Vietnam from 2004 to 2008—the activity descriptions directly address most national strategy goals and objectives. OGAC and PEPFAR country team officials also noted that in non-focus countries, PEPFAR programs may support the achievement of priority goals, rather than cover every national strategy goal. For instance, in the non-focus countries Cambodia and Malawi, where U.S. funding makes up a smaller share of the national HIV/AIDS response—47 percent in Cambodia and 22 percent in Malawi from 2004 to 2008—we found that PEPFAR activities generally supported national strategy goals by filling resource gaps and focusing on interventions in which country teams have technical expertise. Policy restrictions. PEPFAR may not support particular activities because of PEPFAR policy restrictions or other conflicts. 
For example, according to country team officials in Vietnam, until recently PEPFAR funds could not be used to support needle exchange programs for intravenous drug users. As a result, PEPFAR has not supported this component of Vietnam’s national strategy. PEPFAR programs also may involve activities that are not specifically addressed in the national strategy but that support national strategy goals. In the four countries we visited, PEPFAR officials, government officials, donors, and PEPFAR implementing partners generally agreed that national strategies outline broad principles, goals, and objectives rather than specific programs or activities. According to these officials, the general nature of the national strategies allows flexibility to support specific programs to achieve these goals and respond to countries’ evolving HIV/AIDS epidemics. For example, according to PEPFAR officials, the Malawi PEPFAR program has prioritized male circumcision for many years as an effective means of preventing the spread of HIV, although this activity was not mentioned in Malawi’s previous national strategy. However, PEPFAR officials told us that these programs support Malawi’s broad goal to reduce the number of new infections. Moreover, as a result of the country team’s working with the Malawi government and sharing information and data, male circumcision has since been incorporated into Malawi’s most recent strategy. Similarly, in Uganda, PEPFAR supports prevention and treatment activities for a potentially high-risk target group, men who have sex with men, although Uganda’s national strategy does not address prevention and treatment for this group. PEPFAR officials told us they consider these activities aligned with Uganda’s high-level goal to reduce the number of new infections and treat HIV-positive patients. 
PEPFAR team officials in the four countries we visited told us they take into account sector- or program-specific subcomponents of national strategies—such as a protocol for prevention of mother-to-child transmission of HIV—as well as relevant epidemiological and evaluation data, all of which may be more up to date or detailed than the broad national HIV/AIDS strategy.

PEPFAR Stakeholders Reported Various Efforts to Align PEPFAR Activities with National Strategy Goals
PEPFAR country teams and other stakeholders described several means by which the country teams work to achieve alignment of PEPFAR activities with partner country HIV/AIDS goals.

Participation in development of national strategies. PEPFAR country teams actively participate in the development and revision of partner countries’ national HIV/AIDS strategies, according to PEPFAR officials, partner government officials, and civil society groups. When host governments are developing or reformulating their strategies, they often invite HIV/AIDS stakeholders in the country, including bilateral and multilateral donors and civil society and private sector groups, to participate in the strategy’s development. As part of this process, according to PEPFAR officials in headquarters, the PEPFAR country team often participates heavily in the development of such strategies through direct advising as well as technical assistance through implementing partners. For example, CDC officials in country often help with surveillance activities and provide data to the host government so that the strategy can be based on the most up-to-date information on the epidemic. PEPFAR officials and other stakeholders in three of the four countries we visited also spoke about heavy PEPFAR involvement in the development of the strategies in those countries.
These officials told us that PEPFAR’s participation in these processes both improves the quality of the national strategy and creates buy-in among program stakeholders, ultimately enhancing PEPFAR alignment with national strategies. PEPFAR country team officials also told us that national strategy time frames may affect PEPFAR’s ability to align its programs. For example, in Malawi, PEPFAR country officials were able to generate the 2010 COP based on Malawi’s newly revised and updated multisectoral national strategy. Conversely, PEPFAR officials in Cambodia told us that because Cambodia’s strategy was outdated and undergoing revision at the time of COP development and submission, the country team found it difficult to base the current-year COP on it.

Meetings with partner governments and other stakeholders. PEPFAR country team participation in periodic meetings with partner country government officials, other donors, and civil society organizations helps to ensure that PEPFAR program activities support national strategies, according to PEPFAR officials and other HIV/AIDS stakeholders. Country team officials, partner government officials, and other donor representatives in the four countries we visited told us that PEPFAR country team officials participate in periodic advisory and technical area meetings with government officials and other donor representatives. For example, in the four countries we visited, we heard that PEPFAR officials participate in HIV/AIDS or health sector committees, which generally are led by the host government and include other relevant donors. In addition, PEPFAR officials participate in government-led technical working groups focused on specific HIV/AIDS-related areas, such as prevention of mother-to-child transmission or monitoring and evaluation.

Informal engagement with partner government officials.
Regular informal engagement with partner country government officials helps PEPFAR country teams to be aware of the needs and goals of the national HIV/AIDS program, according to PEPFAR country team officials. For example, the officials noted that in-country CDC staff are embedded in the Ministry of Health and thus have daily interaction with partner government officials. This daily communication helps the PEPFAR team focus on the needs of the partner government and align its activities with such needs. Country team officials also noted the importance of other regular interaction and communication between PEPFAR officials and partner government officials. For example, regular interaction with a number of ministry officials involved in the national HIV/AIDS program enables the PEPFAR team to better coordinate with the national program. COP development process. PEPFAR country teams engage with country officials and implementing partners throughout the annual COP development process, according to PEPFAR officials, partner government officials, and civil society groups. PEPFAR guidance states that developing the annual COP provides an opportunity to share information with partner government officials, which is an essential aspect of effective planning. In the four countries we visited, officials from ministries including the national AIDS authority and Ministry of Health told us that they had discussed the fiscal year 2010 COP with PEPFAR officials. PEPFAR country team officials and implementing partners in the four countries also told us that the country teams share information with their implementing partners in a collaborative process during the annual COP development process. For example, in the four countries we visited, PEPFAR officials told us they convened technical working group meetings of PEPFAR, partner government, and implementing partner officials throughout the COP process. 
Through these technical working groups and ongoing collaboration throughout the COP development process, implementing partners are able to provide input on the PEPFAR program and alignment with national strategies. Partnership framework development. Development of partnership frameworks has had a positive effect on PEPFAR alignment and coordination with other donors, according to OGAC, USAID, and CDC officials and other PEPFAR stakeholders. OGAC officials reported in June 2010 that 24 countries and two regions had been invited to develop partnership frameworks and that 7 of these countries, as well as both regions—Angola, Caribbean, Central America, Ghana, Kenya, Lesotho, Malawi, Swaziland, and Tanzania—had completed and signed a framework document. PEPFAR officials—including OGAC, USAID, and CDC officials—told us that partnership framework development in these countries created a vehicle for more open dialogue among PEPFAR, the country governments, and other donors. PEPFAR officials also stated that alignment of PEPFAR activities with these countries’ national HIV/AIDS strategies improved as a result of close interaction with a range of stakeholders. Likewise, during our visit to Malawi, PEPFAR and government officials, as well as other donors, noted improvement in PEPFAR alignment with national strategies as well as coordination with other donors’ HIV/AIDS programs as a result of the partnership framework development process. In addition, our review of the Malawi partnership framework showed that the goals and objectives are closely aligned with those laid out in the national strategy. However, OGAC officials noted that the impact of partnership frameworks on country ownership remained to be seen. As of August 2010, Malawi had completed and signed a partnership framework implementation plan. 
PEPFAR Stakeholders Noted Several Factors That Can Hinder PEPFAR Alignment with National Strategies
PEPFAR stakeholders highlighted several factors that can make it difficult to align PEPFAR activities with national HIV/AIDS strategies. First, PEPFAR indicators sometimes differ from indicators used by partner countries and other international donors. Second, gaps may exist in the sharing of PEPFAR information with partner country governments and other donors. Third, lack of country leadership and capacity to develop strategies and manage programs affects PEPFAR country teams’ ability to ensure that PEPFAR activities align with national strategy goals. Fourth, OGAC’s guidance to PEPFAR country teams on developing partnership frameworks and implementation plans does not include indicators for measuring progress toward country ownership.

Differences between PEPFAR Indicators and National and International Indicators
Many PEPFAR stakeholders noted differences between PEPFAR performance indicators and national and international performance indicators. Other PEPFAR stakeholders, including partner country officials, other donors, and PEPFAR implementing partners in the four countries we visited, highlighted difficulties in harmonizing PEPFAR indicators with national indicators, owing to differences in indicator definitions and in the time frames used to collect and report data. For example, according to Vietnamese government officials, PEPFAR defines orphans and vulnerable children using different age groupings than the government of Vietnam. In addition, other HIV/AIDS stakeholders and experts noted that PEPFAR often relies on indicators that can be compiled to report globally but may differ from those used by individual countries. A PEPFAR official also noted that national strategy indicators may not always align with international indicators.
Moreover, PEPFAR’s 5-year strategy states that PEPFAR’s extensive performance reporting requirements were not always harmonized with other international indicators. The PEPFAR strategy also states that PEPFAR will support transition to a single, streamlined national monitoring and evaluation system. To address this problem, OGAC published an updated guide for indicators in August 2009, intended to increase both the inclusion of quality PEPFAR indicators and the alignment of such indicators with those of other development partners. OGAC collaborated with international donors and organizations, including the Global Fund, UNAIDS, WHO, and UNICEF, to align most PEPFAR-essential indicators with international standards. Specifically, OGAC is working internationally with multilateral partners to achieve a minimum core set of global reporting indicators that provides standardized data for comparison across countries and allows for aggregation at the global level. According to PEPFAR guidance, through the UNAIDS Monitoring and Evaluation Reference Group, OGAC and 18 other international multilateral and bilateral agencies have agreed on a minimum set of standardized indicators. In addition, PEPFAR will continue to work with this group on global harmonization of indicators. OGAC’s updated indicator guidance also notes that a second wave of recommended indicators will be released in 2010, providing additional indicators that PEPFAR country teams may choose to monitor at a country level.

Gaps in Partner Countries’ Access to PEPFAR Information
Some partner government officials told us they lack information about PEPFAR programs and funding in their country and expressed concern over this lack of access to PEPFAR data. For example, government officials in Vietnam reported they do not have sufficient information on PEPFAR spending and are not able to fully account for PEPFAR funding to local civil society organizations.
In addition, in one country we visited, officials from some ministries told us they had not received copies of the COP. However, according to PEPFAR officials, this may be caused by lack of information sharing within or among the partner government ministries and agencies. UNGASS 2010 progress reports for the four countries we visited, which detail the progress in the national HIV/AIDS response, appear to include PEPFAR funding information, indicating that PEPFAR had shared such information with the partner governments. However, two of these countries’ 2008 UNGASS progress reports included estimated or partial information on PEPFAR activities and aid flows; all four countries’ reports noted difficulties in obtaining international donors’ HIV/AIDS spending data. In addition, IOM reported in 2007 that other donors had expressed concern about the degree of information on PEPFAR programs that could be shared due to procurement rules. PEPFAR’s 5-year strategy states that PEPFAR is committed to transparent reporting of investments and notes that opportunities exist to improve reporting mechanisms. The strategy also states that PEPFAR will work to expand publicly available data. According to COP guidance, the extent to which the information in the COP can be shared with stakeholders is limited because procurement-sensitive information must be protected to adhere to U.S. competitive acquisition and assistance practices. Capacity Limitations in Partner Country Governments Limited resources and partner country capacity to develop, lead, and implement the national HIV/AIDS program affect PEPFAR’s ability to effectively coordinate with the host country government, according to PEPFAR officials in headquarters and in the countries we visited. 
PEPFAR officials, as well as donors, PEPFAR implementing partners, and other HIV/AIDS stakeholders, mentioned one or more of the following challenges to engaging with partner governments: unwillingness or inability to commit resources, public corruption and financial mismanagement, and lack of technical expertise. PEPFAR’s 5-year strategy states that PEPFAR will work to assist partner governments, in part through technical assistance and mentoring, to support increases in government sustainability and partner country capacity. The strategy also notes that full transition to partner country ownership and increased financing will take longer than 5 years to achieve. Guidance for Measuring Progress of Partnership Frameworks Does Not Include Metrics of Country Ownership PEPFAR guidance on developing partnership frameworks and implementation plans includes detailed instructions for developing baseline assessments of partner countries’ HIV/AIDS epidemics and of efforts to respond to the epidemics. For example, the guidance directs PEPFAR country teams to measure these efforts’ outputs or outcomes, such as the number of newly trained healthcare workers. However, the guidance does not address the establishment of baselines, including indicators, for measuring progress toward country ownership—one of OGAC’s stated goals for the frameworks. In keeping with various Paris Declaration resolutions, the guidance that OGAC has provided to PEPFAR country teams for developing the frameworks describes promotion of country ownership as expanding partner government’s capacity to plan, oversee, manage, deliver, and eventually finance HIV/AIDS programs. The guidance requires country teams to link partnership framework goals with partner countries’ national HIV/AIDS and health strategies and states that partnership frameworks should emphasize sustainable programs with increased country decision-making authority and leadership. 
The guidance also specifies that the framework should outline plans to assess progress in achieving the goals agreed to in the partnership framework, including country ownership. However, the guidance does not provide instructions for developing indicators needed to establish baseline measures of country ownership and to assess progress toward this goal. According to an OGAC official, OGAC has not yet devised an approach for developing such indicators or for measuring progress toward country ownership. Moreover, developing indicators to measure aspects of country ownership, such as capacity to plan, oversee, manage, deliver, and eventually finance HIV/AIDS programs, can be—as has been recognized by development experts—a difficult and complex undertaking. An OGAC official acknowledged that generating such indicators would involve a process of working with development partners and PEPFAR country teams to develop a consensus on both definitions and measurements. Prior GAO work suggests that performance reports are likely to be more useful if they provide baseline and trend data. By providing baseline and trend data—which show an agency’s progress over time—the agency can give decision makers a more historical perspective within which to compare the year’s performance with performance in past years. PEPFAR country teams that begin implementing partnership frameworks without baseline assessments of country ownership will have limited ability to track progress and make necessary adjustments to the frameworks. Conclusions PEPFAR’s commitment to the principles of alignment with national HIV/AIDS strategies and country ownership of U.S.-supported programs is reflected in the new 5-year PEPFAR strategy and in OGAC guidance to PEPFAR country teams. 
According to our analysis of PEPFAR and national strategy documents as well as interviews with multiple PEPFAR stakeholders, PEPFAR efforts to align its activities have resulted in programs that are generally supportive of partner countries’ national strategy goals and objectives. In addition, the partnership frameworks that OGAC recently introduced are designed to, among other goals, enhance partner country ownership of PEPFAR programs. In particular, OGAC expects that at the conclusion of the 5-year partnership frameworks, country governments will be better positioned to assume primary responsibility for national responses to HIV/AIDS in terms of management, strategic direction, performance monitoring, decision making, coordination, and, where possible, funding support and service delivery. OGAC also expects the development of partnership frameworks to ultimately enhance alignment of PEPFAR programs with national HIV/AIDS strategies. In Malawi, PEPFAR stakeholders, including PEPFAR and partner government officials, as well as other donors, observed that the partnership framework development process improved alignment with national strategies as well as coordination with other donors. However, OGAC has not yet established an approach for PEPFAR country teams to use in developing indicators needed for baseline measurements of country ownership, although the development of such indicators and baselines is recognized as difficult and complex. Without these indicators and baselines, country teams that implement the frameworks may be constrained in their ability to measure progress in promoting country ownership and to make adjustments to the frameworks to enhance such progress. 
Recommendation for Executive Action To enhance PEPFAR country teams’ ability to achieve the goal of promoting partner country ownership of U.S.-supported HIV/AIDS activities, we recommend that the Secretary of State direct OGAC to develop and disseminate a methodology for establishing indicators needed for baseline measurements of country ownership prior to implementation of partnership frameworks. Agency Comments and Our Evaluation Responding jointly with HHS and USAID, State provided written comments on a draft of this report (see app. VI for a copy of these comments). In addition, State’s OGAC, in coordination with HHS and USAID as well as the PEPFAR country teams in Cambodia, Malawi, Uganda, and Vietnam, provided technical comments, which we incorporated as appropriate. In their joint written comments, State, HHS, and USAID concurred with our findings and recommendation to develop a methodology for establishing baseline measures of country ownership. The joint written comments also note that the departments plan to incorporate such a methodology into the broader Global Health Initiative, in consultation with their field offices. We are sending copies of this report to the Secretary of State, the Office of the Global AIDS Coordinator, USAID Office of HIV/AIDS, HHS Office of Global Health Affairs, and CDC Global AIDS Program. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. 
Appendix I: Scope and Methodology In response to a directive in the 2008 Leadership Act, this report (1) examines alignment of the President’s Emergency Plan for AIDS Relief (PEPFAR) programs with partner countries’ HIV/AIDS strategies and (2) describes several challenges related to alignment of PEPFAR programs with the national strategies or promotion of partner country ownership. To identify guidance for alignment of U.S. programs to national programs and country ownership, we reviewed the Tom Lantos and Henry J. Hyde United States Global Leadership Against HIV/AIDS, Tuberculosis, and Malaria Reauthorization Act of 2008 (2008 Leadership Act); the previous and current PEPFAR 5-year strategy; the Paris Declaration on Aid Effectiveness (Paris Declaration); the “Three Ones” principles; PEPFAR partnership framework guidance; and fiscal year 2010 country operational plan (COP) guidance. To examine the extent to which PEPFAR programs support the goals laid out in partner countries’ national strategies and to identify country teams’ challenges in aligning PEPFAR programs with national strategies and promoting country ownership, we performed the following: Interviewed PEPFAR officials, including the Office of the U.S. Global AIDS Coordinator (OGAC), Centers for Disease Control and Prevention (CDC), and U.S. Agency for International Development (USAID); and U.S. Department of Health and Human Services (HHS) officials in Washington, D.C., and Atlanta, Georgia, using a questionnaire regarding alignment of PEPFAR programs globally with national strategies at three levels: goals and objectives, program activities, and indicators. Interviewed representatives of other key PEPFAR stakeholders, including the Joint United Nations Programme on HIV/AIDS (UNAIDS); the Global Fund to Fight AIDS, Tuberculosis and Malaria; the Center for Global Development; and the Bill & Melinda Gates Foundation, regarding global PEPFAR alignment at these three levels. Analyzed U.S. 
agency documents, including guidance and strategy documents, and performed a literature review of other studies that examined PEPFAR alignment with national strategies. Among these studies was a 2007 Institute of Medicine (IOM) study that reviewed a number of aspects of PEPFAR implementation in all 15 focus countries, including alignment with national programs. The IOM review involved discussions with PEPFAR officials and other stakeholders and an analysis of PEPFAR documents as well as field visits to 13 of the 15 countries. Conducted case studies in Cambodia, Malawi, Uganda, and Vietnam. This work included assessing the level of correspondence between goals and objectives laid out in the national multisectoral HIV/AIDS strategy and the 2010 PEPFAR COP for each country. During our visits to these countries, we conducted semi-structured interviews with PEPFAR country team officials, including the PEPFAR coordinator in each country as well as USAID and CDC officials. We also met with partner government officials in various ministries involved in the national HIV/AIDS program in each country. In addition, we interviewed representatives of other international donors working in HIV/AIDS and of PEPFAR implementing partners in each country. With each of these groups, we conducted semi-structured interviews regarding PEPFAR support for the national strategy at three levels: goals and objectives, program activities, and indicators. To select the four countries for case studies, we considered a number of factors, including funding levels, geographic diversity, progress in developing partnership frameworks, and focus country status. Regarding funding levels, the four countries we selected represent both high and mid-range levels of PEPFAR funding. Regarding geographic diversity, the four countries represent variations in the epidemic and programs that exist across regions, including Africa and Asia. 
Regarding progress in developing partnership frameworks, the four countries were at different phases, enabling us to observe the impact of the partnership framework development process on alignment. Regarding focus country status, two of the four countries we selected were focus countries during the first phase of PEPFAR, while the other two were not. Although OGAC has noted that there will no longer be a distinction between PEPFAR focus countries and non-focus countries, we theorized that differences in programming and alignment might exist between the 15 former focus countries and non-focus countries. In evaluating alignment of PEPFAR activities with national HIV/AIDS strategies, we considered PEPFAR program activities that are supportive of the achievement of national strategy goals and objectives and generally complementary to the national HIV/AIDS program to be well aligned. Our analysis involved several steps. 1. For each of the four case study countries, we reviewed the national multisectoral HIV/AIDS strategy to identify goals and objectives. We then analyzed the technical assistance narratives, which describe the ongoing and planned activities for each PEPFAR technical area, in the fiscal year 2010 COP for each of the four countries. Our analysis of the COP narratives focused on whether each objective and goal in the national strategy was fully, partially, or not addressed by activities described in the technical assistance narratives of the 2010 COP. Two of our staff independently analyzed the COP narratives to identify areas of alignment between the PEPFAR activities and the national strategy goals and objectives. 2. During our visits to the four countries, we discussed our analysis of national HIV/AIDS strategies and PEPFAR COPs with PEPFAR officials to identify reasons for areas of divergence between the documents. 
In particular, we discussed every goal and objective in the national strategy that our analysis deemed only partially or not supported by activities described in the technical assistance narratives of the COP. These conversations enabled us to identify four general reasons why the technical assistance narratives did not describe activities that fully support the particular goal or objective: (a) The goal was being supported by activities of other donors, so PEPFAR had chosen not to focus in that area. (b) The goal was generally the responsibility of the national government, or the national government was not interested in receiving PEPFAR support in that area. (c) PEPFAR policy restrictions prevented PEPFAR from supporting certain areas of the national program. (d) PEPFAR activities fully supported the goal, but owing to space limitations for COP reporting, these activities were not described in the COP or were described in a different area of the document, such as the activity descriptions. One of these four explanations by the PEPFAR team applied in each instance where we found no or partial alignment between the COP and the national strategy. We did not find any national strategy goals and objectives that were accidentally or deliberately not considered or supported by PEPFAR for reasons other than the four listed above. 3. We used our interviews with PEPFAR officials in headquarters and with other HIV/AIDS stakeholders, as well as our literature and document review, to verify and complement the results of the case study work. We conducted this performance audit from July 2009 to September 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence we obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Cambodia Case Study HIV/AIDS adult prevalence rate: 0.8% (rank 56 out of 170). Number of people living with HIV/AIDS: 75,000 (rank 54 out of 165). HIV/AIDS epidemic: HIV prevalence in Cambodia is among the highest in Asia. Cambodia’s HIV/AIDS epidemic is spread primarily through heterosexual transmission and revolves largely around the sex trade. A low prevalence rate in the general population masks a higher prevalence rate in certain subpopulations, such as injecting drug users, people in prostitution, men who have sex with men, karaoke hostesses, and mobile and migrant populations. National HIV/AIDS Program Although Cambodia is one of the poorest countries in the world, HIV prevention and control efforts exerted by the Government of Cambodia and its partners have helped to reduce the spread of HIV. Cambodia is recognized as one of the few countries that has been successful in reversing the HIV epidemic, as the adult prevalence decreased from a high of 2 percent in 1998 to 0.8 percent in 2008. The Cambodia HIV/AIDS strategy—the National Strategic Plan for a Comprehensive and Multisectoral Response to HIV/AIDS 2006-2010, developed under the leadership of the National AIDS Authority—guides the national response to the epidemic. The national strategy outlines three main goals: to reduce new infections of HIV; to provide care and support to people living with and affected by HIV; and to alleviate the socioeconomic and human impact of AIDS on the individual, family, community, and society. 
In addition, the multisectoral strategy lays out seven complementary strategies to (1) increase coverage of effective prevention interventions; (2) increase coverage of effective interventions for comprehensive care; (3) increase coverage of effective interventions for impact mitigation; (4) develop effective leadership by government and nongovernment sectors for implementation of the response to AIDS at central and local levels; (5) create a supportive legal and public policy environment for the AIDS response; (6) increase the availability of information for policy makers and for program planners through monitoring, evaluation, and research; and (7) enhance sustainable and equitable resource allocation for the national response to AIDS. A large number of institutions are involved in Cambodia’s national multisectoral response to HIV and AIDS. These include ministries and other government departments, such as the Ministry of Health, Ministry of Women’s Affairs, Ministry of Rural Development, Ministry of Interior, and the National Center for HIV/AIDS, Dermatology, and STD. In addition, there are a number of other strategies and documents that support and elaborate on the national multisectoral strategy, including the Ministry of Interior HIV/AIDS strategy, Medical Laboratory Services National Strategic Plan, and the National Blood Transfusion Services of Cambodia Strategic Plan. Each of these successive plans and strategies has been supported by technical assistance and financial support from multilateral and bilateral donors, including the U.S. government. HIV/AIDS Partners and Donors In addition to the support of the U.S. government, the Cambodian HIV/AIDS program is supported by a number of other multilateral and bilateral donors. Funding from the Global Fund has comprised over 30 percent of all HIV/AIDS development assistance to Cambodia from 2004 to 2008 (see fig. 2). 
In addition, the Global Fund has continued to scale up its funding and programs in Cambodia in recent years, and in 2009 Global Fund contributions comprised 53 percent of HIV funding in Cambodia, according to PEPFAR officials. The United Kingdom has also provided significant financial support for Cambodia’s national HIV/AIDS program for many years, contributing 13 percent of all HIV/AIDS development assistance in Cambodia from 2004 to 2008. In addition, other donors in HIV/AIDS in Cambodia include Belgium, UNAIDS, UNICEF, the United Nations Development Programme (UNDP), Spain, Denmark, France, and Germany. PEPFAR Program PEPFAR Funding The U.S. government has been working in HIV/AIDS in Cambodia for many years, even prior to PEPFAR, making the U.S. government one of the largest funders of HIV/AIDS programs in Cambodia dating back to the mid-1990s. Thus, while Cambodia was not a PEPFAR focus country during the first phase of PEPFAR, funding in Cambodia went from $16.8 million in 2004 to $18.5 million in 2010. As noted above, in recent years, the Global Fund has emerged as the largest funder of HIV/AIDS in Cambodia. PEPFAR Program Information The PEPFAR program in Cambodia supports an array of activities for HIV/AIDS prevention, treatment, and care. For example, PEPFAR focuses on peer education activities for the most at-risk populations, including sex workers, men who have sex with men, drug users, and clients of sex workers. PEPFAR Cambodia also supports programs such as condom social marketing, HIV counseling and testing services, prevention of mother-to-child transmission, prevention of tuberculosis and HIV co-infection, surveillance for planning, laboratory support, and blood safety. In addition, PEPFAR funds community- and clinic-based care activities such as home care, care for orphans and vulnerable children, and pediatric AIDS. 
Partnership Framework Cambodia is one of several countries with smaller PEPFAR investments and programs focused largely on technical assistance that are pursuing a strategy document instead of a partnership framework. According to PEPFAR officials in Cambodia, there are currently no plans to initiate a partnership framework in Cambodia. Appendix III: Malawi Case Study Life expectancy at birth: 51 years (rank 211 out of 224). HIV/AIDS adult prevalence rate: 11.9% (rank 9 out of 170). HIV/AIDS epidemic: The highest HIV prevalence exists among vulnerable groups like sex workers and their clients. However, the majority of new infections occur in couples and among partners of people who have multiple concurrent partners. In addition, mother-to-child transmission is estimated to account for almost a quarter of new infections. Of the almost 1 million people estimated to be living with HIV in Malawi, 10 percent are children. National HIV/AIDS Program According to Malawi’s national strategy, the Malawi government program to address HIV/AIDS seeks to prevent the spread of HIV infections in Malawi, provide access to treatment for people living with HIV, and mitigate the health, socio-economic, and psychosocial impact of HIV and AIDS on individuals, families, communities, and the nation. Specifically, there are seven priority areas that drive the national response, which include prevention and behavior change; treatment, care, and support; impact mitigation; mainstreaming and decentralization; research, monitoring, and evaluation; resource mobilization and utilization; and policy and partnerships. The President leads the government HIV/AIDS efforts, and the Department of Nutrition, HIV, and AIDS in the Office of the President and Cabinet is the lead government agency responsible for policy, oversight, and advocacy. 
In 2001, the government established the National AIDS Commission as a national coordinating authority to provide leadership and coordinate the national program. This commission comprises members from the private and public sectors, civil society, and people living with HIV. A number of key ministries implement the national program, including the Ministry of Health, Ministry of Finance, and the Ministry of Economic Planning and Development. The current HIV/AIDS national strategy for Malawi covers 2010 through 2012. While the Malawi HIV/AIDS National Action Framework is the primary HIV/AIDS strategy, other Malawi government documents also comprise the complete HIV/AIDS strategy for the country. For example, other components of the national strategy include the National HIV Prevention Strategy for 2009 through 2013, integrated annual work plans, and a national monitoring and evaluation framework for 2006 to 2010, as well as other frameworks, technical strategies, and guidelines. HIV/AIDS Partners and Donors Bilateral and Multilateral Donors in HIV/AIDS Malawi’s national HIV/AIDS program receives support from a variety of bilateral and multilateral donors in addition to PEPFAR. The Global Fund is the largest donor for HIV/AIDS programs in Malawi, spending almost $190 million on HIV programs in Malawi from 2004 to 2008, which comprised almost 40 percent of all HIV development assistance over that period (see figure 5). Other major donors in the HIV/AIDS area in Malawi include the United Kingdom, Norway, and the World Bank. The Malawi government has a funding arrangement whereby each of these donors contributes to a pooled fund managed by the National AIDS Commission. Civil Society and Private Sector Civil society and private sector organizations also play a role in carrying out the national program. Civil society organizations implement activities, carry out advocacy, mobilize resources, document community practices, and support capacity-building programs. 
In addition, private sector organizations have the responsibility to mainstream HIV/AIDS through workplace policies and programs. PEPFAR Program PEPFAR Funding While Malawi was not one of the original 15 PEPFAR focus countries, PEPFAR maintained a presence in Malawi with funding increasing from $15 million in 2004 to $55.3 million in 2010 (see fig. 6). U.S. government development assistance for HIV/AIDS comprised 22 percent of total development assistance to Malawi for HIV/AIDS from 2004 to 2008. As noted above, the majority of the HIV/AIDS program in Malawi is funded by other donors such as the Global Fund. PEPFAR Program Information The PEPFAR program in Malawi supports interventions for HIV/AIDS prevention, treatment, and care. PEPFAR intervention strategies include strengthening care services provided by the public sector and indigenous organizations, expanding and strengthening services for orphans and vulnerable children in urban and rural areas, and building capacity to support strengthening of critical areas, including laboratory infrastructure and strategic information. According to PEPFAR officials, the Malawi PEPFAR program takes into consideration the programs and funding support provided by the other donors and focuses resources on filling gaps in the national program. Partnership Framework Malawi was the first country to complete a partnership framework, which was signed in May 2009. The framework lays out a 5-year strategic agreement between PEPFAR and the Malawi government, which focuses on reducing new HIV infections, improving the quality of treatment and care, mitigating the impacts of HIV/AIDS on individuals and households, and supporting systems needed to achieve these goals. Malawi signed a partnership framework implementation plan in July 2010 that provides additional detail including specific strategies for achieving the 5-year goals and objectives. 
According to PEPFAR officials in Malawi, additional funding was made available to Malawi for implementing this partnership framework. The development of the partnership framework in Malawi coincided with the update and revision of the National Action Framework. According to PEPFAR and Malawi government officials, the timing of the two processes resulted in close collaboration between government officials that increased alignment of the PEPFAR program with the national program. For example, as a result of the partnership framework development process, the PEPFAR country team was invited by the Malawi government to participate in the pooled donor meetings, even though PEPFAR does not participate in the pooled funding arrangement. Appendix IV: Uganda Case Study HIV/AIDS adult prevalence rate: 5.4% (rank 14 out of 170). Number of people living with HIV/AIDS: 940,000 (rank 14 out of 165). HIV/AIDS epidemic: Uganda faces a generalized HIV epidemic. There were rapid declines in HIV prevalence in the mid- and late-1990s, but in recent years, prevalence trends have stabilized. Nationwide, HIV prevalence is higher in urban areas than in rural areas. Major vulnerable population groups include young women, people in prostitution, and military personnel. National HIV/AIDS Program According to its national HIV/AIDS strategy, Uganda aims to reduce new HIV infections by 40 percent, expand social support, and provide care and treatment services to 80 percent of needy individuals by 2012. The strategy outlines four areas: prevention, care and treatment, social support, and systems strengthening. Each area sets out specific objectives and targets. For example, under the prevention area, the strategy states that Uganda will reduce mother-to-child transmission of HIV by 50 percent by 2012. 
Under the systems strengthening area, the strategy includes several objectives, such as effectively coordinating and managing the response at various levels. The Uganda AIDS Commission, established in 1992, coordinates the multisectoral response to the HIV/AIDS epidemic. The National AIDS Policy has yet to be approved by the Ugandan parliament. However, in addition to Uganda’s National HIV&AIDS Strategic Plan 2007/8-2011/12, Uganda has developed national policies related to HIV counseling and testing, antiretroviral therapy, and orphans and other vulnerable children. The Ministries of Health; Gender, Labour, and Social Development; and Finance, Planning, and Economic Development, among others, are involved in the national multisectoral HIV/AIDS strategy. Coordinated by the Uganda AIDS Commission, these ministries, along with UNAIDS and other stakeholders, make up the Partnership Committee, which is in turn made up of various technical working groups and subcommittees. HIV/AIDS Partners and Donors Bilateral and Multilateral Donors Although the United States is by far the largest bilateral HIV/AIDS program donor in Uganda, the United Kingdom, Ireland, and many other countries also contribute to Uganda’s national HIV/AIDS program. In addition, the Global Fund spent over $72 million in Uganda for HIV/AIDS programs from 2004 to 2008. In 2007, with financial support from various development partners, the government of Uganda established a Civil Society Fund (CSF) and has since issued a number of grants to civil society organizations, including community- and faith-based organizations, and district governments to support provision of specific services by civil society groups in these areas. PEPFAR Program PEPFAR Funding Uganda was selected in 2004 as one of the original PEPFAR focus countries. As such, U.S. support for HIV/AIDS programs in Uganda increased rapidly, from about $90.8 million in 2004 to $286.3 million in 2010. As noted above, the U.S. 
government is the largest HIV/AIDS development partner in Uganda. PEPFAR Program Information PEPFAR-supported programs span a number of HIV program areas, including prevention, treatment, care, laboratory services, health systems strengthening, and strategic information. In collaboration with the government of Uganda, as of March 2009, PEPFAR supported antiretroviral treatment for more than 150,000 HIV-positive Ugandans. Partnership Framework The government of Uganda plans to develop new national development, health, and HIV/AIDS strategies. PEPFAR officials in Uganda indicated that these revisions create opportunities for the government of Uganda to demonstrate renewed leadership and build relationships with its development partners. In this context, PEPFAR envisions that it could pursue a Partnership Framework with Uganda. Appendix V: Vietnam Case Study GDP per capita (PPP): $2,900 (rank 165 out of 227). Life expectancy at birth: 72 years. HIV/AIDS adult prevalence rate: 0.5%. Number of people living with HIV/AIDS: 290,000 (rank 24 out of 165). HIV/AIDS epidemic: Vietnam has a concentrated HIV epidemic, with the highest prevalence among key populations at higher risk. These include injecting drug users, with a prevalence rate of 2.6 percent; female sex workers, with a prevalence rate of 4.4 percent; and men who have sex with men, with a prevalence of 9 percent in Hanoi and 5 percent in Ho Chi Minh City. Injecting drug use is a major factor driving the spread of HIV in Vietnam, posing complex challenges. National HIV/AIDS Program The Vietnam national HIV strategy, the National Strategy on HIV/AIDS Prevention and Control in Vietnam until 2010 with a Vision to 2020, lays out objectives and priorities for the government response to the HIV/AIDS epidemic in Vietnam. 
The strategy’s goals are to control HIV prevalence among the general population to below 0.3 percent by 2010, with no further increase after 2010, and to reduce the adverse impacts of HIV on socio-economic development. The strategy also lays out a number of specific priorities in the areas of prevention, treatment and care, and HIV governance. In the HIV prevention area, the government program focuses on prevention and behavior change through information, education, and communication; harm reduction targeting high-risk populations; prevention of mother-to-child transmission; management and treatment of sexually transmitted infections; and safe blood transfusion. The treatment and care elements of the strategy focus on care and support for people living with HIV and access to HIV treatment, including antiretroviral drugs. The strategy highlights HIV governance issues including HIV surveillance, monitoring and evaluation, capacity building, and international cooperation enhancement. The government of Vietnam supports activities and services in each of these areas. The National Committee for AIDS, Drugs, and Prostitution Prevention and Control is the multisectoral body leading the government HIV program. This multisectoral body is headed by a Deputy Prime Minister, and members include vice-ministers from relevant line ministries. Technical coordination of activities is delegated to the Vietnam Administration for AIDS Control within the Ministry of Health. There are also a number of other ministries and entities involved in coordinating and implementing various aspects of the national program, including the Ministry of Public Security; the Ministry of Labor, War Invalids, and Social Affairs; the Ministry of Health; the Ministry of Education and Training; the Ministry of Finance; and the Ministry of Planning and Investment. 
While the current multisectoral national HIV strategy for Vietnam covers 2004 to 2010 with a vision to 2020, according to the Vietnam PEPFAR country team there are a number of other strategies, documents, and laws that guide the national program, including the Law on the Prevention and Control of HIV/AIDS and Vietnam’s Comprehensive Poverty Reduction and Growth Strategy. HIV/AIDS Partners and Donors While U.S. funding comprises the majority of HIV/AIDS development assistance funding in Vietnam, the national HIV/AIDS program receives support from a variety of other bilateral and multilateral donors as well. After PEPFAR, the United Kingdom is the largest HIV/AIDS donor in Vietnam, spending over $24 million from 2004 to 2008, which comprised 12 percent of all HIV development assistance over that period (see fig. 11). The United Kingdom's HIV development assistance is focused largely in the areas of HIV prevention and harm reduction. In addition, the Global Fund accounted for 9 percent of all HIV development assistance from 2004 to 2008, and this funding was focused in areas including prevention of mother-to-child transmission and HIV counseling and testing. Other major donors in Vietnam include the World Bank, which funds programs in HIV prevention, harm reduction, blood safety, and care and treatment; and Germany, which funds HIV prevention activities and procures test equipment for HIV counseling and testing services. However, according to PEPFAR officials, donor support in Vietnam is decreasing because of a number of factors, including Vietnam’s progress towards becoming a middle-income country. PEPFAR Program PEPFAR Funding During the first phase of PEPFAR, Vietnam was classified as one of the 15 PEPFAR focus countries. PEPFAR funding in Vietnam has grown from $17.7 million in 2004 to $97.8 million in 2010 (see fig. 12). In addition, U.S. 
funding in Vietnam comprised most HIV/AIDS development assistance to Vietnam from 2004 to 2008. PEPFAR Program Information Since 2004, the PEPFAR program has provided more than $320 million to support the delivery of comprehensive HIV/AIDS prevention, care, treatment, and support activities in Vietnam. PEPFAR activities in Vietnam have included assisting Vietnam to develop comprehensive prevention, treatment, care, and support networks; supporting the government of Vietnam’s efforts to reduce stigma and discrimination against people living with and affected by HIV/AIDS; training Vietnamese physicians in clinical HIV/AIDS treatment and care; assisting the Ministry of Health to develop peer outreach for at-risk populations; increasing the public health management capacity of Vietnamese government workers; assisting the Ministry of Health to develop a national HIV reference laboratory; and providing support in establishing one national surveillance and monitoring and evaluation system. According to the Vietnam PEPFAR country team, over the next 5 years, PEPFAR will place a renewed emphasis on partnering with Vietnam to build Vietnam’s national HIV/AIDS response, and continue to work together with all sectors of Vietnam as they craft strategies and programs to stop HIV/AIDS. In addition, as part of the new Global Health Initiative, PEPFAR will support Vietnam as it works to further integrate and expand access to other health care services, such as those that address tuberculosis, malaria, maternal and child health, and family planning, with HIV/AIDS programs. Partnership Framework The Vietnam country team recently negotiated and signed a partnership framework with the Vietnam Administration for AIDS Control within the Ministry of Health. Development of the partnership framework implementation plan is currently under way, with completion scheduled for October 2010. 
Appendix VI: Comments from the U.S. Department of State, Office of the U.S. Global AIDS Coordinator Appendix VII: GAO Contact and Staff Acknowledgments Staff Acknowledgments In addition to the contact named above, Audrey Solis (Assistant Director), Todd M. Anderson, Diana Blumenfeld, Giulia Cangiano, David Dornisch, Lorraine Ettaro, Etana Finkler, Reid Lowe, Grace Lui, and Mark Needham made key contributions to this report. Additional technical assistance was provided by Chad Davenport, Marissa Jones, Bruce Kutnick, Mae Liles, Ellery Scott, and Michael Simon. Related GAO Products President’s Emergency Plan for AIDS Relief: Partner Selection and Oversight Follow Accepted Practices but Would Benefit from Enhanced Planning and Accountability. GAO-09-666. Washington, D.C.: July 2009. Global HIV/AIDS: A More Country-Based Approach Could Improve Allocation of PEPFAR Funding. GAO-08-480. Washington, D.C.: April 2008. Global Health: Global Fund to Fight AIDS, TB and Malaria Has Improved Its Documentation of Funding Decisions but Needs Standardized Oversight Expectations and Assessments. GAO-07-627. Washington, D.C.: May 2007. Global Health: Spending Requirement Presents Challenges for Allocating Prevention Funding under the President’s Emergency Plan for AIDS Relief. GAO-06-395. Washington, D.C.: April 2006. Global Health: The Global Fund to Fight AIDS, TB and Malaria Is Responding to Challenges but Needs Better Information and Documentation for Performance-Based Funding. GAO-05-639. Washington, D.C.: June 2005. Global HIV/AIDS Epidemic: Selection of Antiretroviral Medications Provided under U.S. Emergency Plan Is Limited. GAO-05-133. Washington, D.C.: January 2005. Global Health: U.S. AIDS Coordinator Addressing Some Key Challenges to Expanding Treatment, but Others Remain. GAO-04-784. Washington, D.C.: June 2004. Global Health: Global Fund to Fight AIDS, TB, and Malaria Has Advanced in Key Areas, but Difficult Challenges Remain. GAO-03-601. Washington, D.C.: May 2003.
The President's Emergency Plan for AIDS Relief (PEPFAR), reauthorized at $48 billion for fiscal years 2009 through 2013, supports HIV/AIDS prevention, treatment, and care services overseas. The reauthorizing legislation, as well as other key documents and PEPFAR guidance, endorses the alignment of PEPFAR activities with partner country HIV/AIDS strategies and the promotion of partner country ownership of U.S.-supported HIV/AIDS programs. This report, responding to a legislative directive, (1) examines alignment of PEPFAR programs with partner countries' HIV/AIDS strategies and (2) describes several challenges related to alignment or promotion of country ownership. GAO analyzed PEPFAR planning documents and national strategies for four countries--Cambodia, Malawi, Uganda, and Vietnam--selected to represent factors such as diversity of funding levels and geographic location. GAO also reviewed documents and reports by the U.S. government, research institutions, and international organizations and interviewed PEPFAR officials and other stakeholders in headquarters and the four countries. PEPFAR activities are generally aligned with partner countries' national HIV/AIDS strategies. GAO's analysis of PEPFAR planning documents and national HIV/AIDS strategies, as well as discussions with PEPFAR officials in the four countries GAO visited, showed overall alignment between PEPFAR activities and the national strategy goals. In addition, statements by global and country-level PEPFAR stakeholders indicate that PEPFAR activities support the achievement of partner countries' national strategy goals. PEPFAR officials noted that a number of factors may influence the degree to which PEPFAR activities align with national strategy goals, including the activities of other donors, the size of the PEPFAR program, and policy restrictions. 
PEPFAR may also support activities not mentioned in the national HIV/AIDS strategies but that are addressed in relevant sector- or program-specific strategies. PEPFAR officials reported various efforts to help ensure that PEPFAR activities support the achievement of national strategy goals, including assisting in developing national strategies, participating in formal and informal communication and coordination meetings, engaging regularly with partner country governments during the annual planning process, and developing a new HIV/AIDS agreement, known as a partnership framework, between PEPFAR and partner country governments. PEPFAR stakeholders highlighted several challenges related to aligning PEPFAR programs with national HIV/AIDS strategies or promoting country ownership of U.S.-supported HIV/AIDS programs. First, PEPFAR indicators, including indicator definitions and timeframes, sometimes differ from those used by partner countries and other international donors. Second, gaps may exist in the sharing of PEPFAR information with partner country governments and other donors. Third, limitations in country leadership and capacity, such as lack of technical expertise to develop strategies and manage programs, affect country teams' ability to ensure that PEPFAR activities support achievement of national strategy goals. Fourth, Office of the U.S. Global AIDS Coordinator (OGAC) guidance to country teams regarding development of partnership frameworks does not include indicators for establishing baseline measures of country ownership prior to implementation of partnership frameworks. Without baseline measures, country teams may have limited ability to measure the frameworks' impact and make needed adjustments.
Background Medicare’s home health care benefit enables certain beneficiaries with post-acute-care needs (such as recovery from joint replacement) and chronic conditions (such as congestive heart failure) to receive care in their homes rather than in other settings. To qualify for home health care, a beneficiary must be confined to his or her residence (“homebound”); require intermittent skilled nursing, physical therapy, or speech therapy; be under the care of a physician; and have the services furnished under a plan of care prescribed and periodically reviewed by a physician. If these conditions are met, Medicare will pay for part-time or intermittent skilled nursing; physical, occupational, and speech therapy; medical social services; and home health aide visits. The benefit allows for an unlimited number of visits, provided the coverage criteria are met. Beneficiaries are not liable for any coinsurance or deductible. Changes in the Benefit Have Led to Growth in Home Health Utilization Between 1990 and 1997, Medicare home health payments grew annually at a rate of more than three times that of spending growth for the entire Medicare program. This increase was due primarily to a steady rise in the proportion of beneficiaries receiving home health care and in the number of visits per person served. The number of home health users per 1,000 beneficiaries increased from 57 to 109, and the average number of visits per user doubled from 36 to 73 during this period. An increase in payments per visit accounted for only a small share of the overall growth. Originally, Medicare imposed annual limits on the number of home health care visits covered for each beneficiary. The limitation on visits was removed by the Omnibus Reconciliation Act of 1980, but utilization did not increase appreciably because of HCFA’s stringent interpretation of the coverage and eligibility criteria. 
A court case challenged HCFA’s interpretation, and the decision resulted in broadened coverage guidelines for home health care, allowing more beneficiaries to qualify for more visits. The benefit then was transformed from one focused on patients needing short-term care after a hospitalization to one that also serves patients with chronic conditions needing longer-term care. At the same time that much of this growth occurred, program controls were essentially nonexistent. Few claims were subject to medical review, and virtually all were paid. In 1986 and 1987, over 60 percent of home health claims were reviewed, but by 1995, claims reviewed had declined to about 1 percent. As a result, utilization after 1987 is increasingly likely to reflect a degree of inappropriate service use. Our prior investigations found a pattern of payments for “questionable or improper” services. More recently, the Department of Health and Human Services (HHS) Inspector General also documented that some of the care provided lacked supporting documentation required to determine medical necessity. Growth Encompassed a Wide Range in Service Use Historically, most home health users received few visits, and a small proportion of longer-term users received the majority of Medicare-funded visits. According to the Medicare Payment Advisory Commission (MedPAC), 51 percent of home health care recipients received fewer than 30 visits and accounted for 9 percent of all home health visits in 1996. By contrast, 15 percent of users had 150 visits or more, accounting for 59 percent of all Medicare home health visits that year. Approximately one-third of the beneficiaries in this latter group received over 300 visits. In addition, short-term patients appeared to use a different mix of visits than did longer-term patients. 
MedPAC reported that in 1996 only 6 percent of all visits provided to short-term users—those who received nine or fewer visits—were for aide services; skilled nursing care comprised over 75 percent of their total visits. By contrast, about 56 percent of the visits for beneficiaries who had 100 visits or more were for home health aide services. There also was marked variation in home health use across geographic areas. For example, Medicare home health users in Maryland received an average of 37 visits in 1997, with an average payment per user of $3,088. In that same year, users in Louisiana received an average of 161 visits each, with an average Medicare payment per user of $9,278. This wide variation in use persisted even after controlling for patient diagnosis. Patterns of care also differed across agency ownership and type. For-profit HHAs tended to deliver more visits per beneficiary than other types of HHAs and to provide more aide visits. For example, in 1993, for-profit HHAs provided an average of 69 home health aide visits per beneficiary, compared with 43 and 48 visits from voluntary and government HHAs, respectively. Such variation could be due to a variety of factors, including provider responses to financial incentives, differences in patient needs, regional practice patterns, and states’ varying Medicaid coverage and eligibility policies. Assessing whether the variation in service provision has been appropriate is difficult. Because no agreed-upon standards exist for what constitutes necessary or appropriate home health care, it is not clear when home health care is warranted, how many services should be provided, or when services should be discontinued. Many home health users have chronic and multiple needs, so the care for a particular condition may overlap with care for another. Furthermore, even the most basic unit of service—the visit—is not specifically defined. 
Home Health Anti-Fraud Measures Implemented Beginning in 1995, several regulatory policies were initiated to reduce fraud and abuse within the home health industry, which could have affected home health use and spending. Operation Restore Trust, launched in 1995, employed a number of approaches to uncovering fraud, including the use of interdisciplinary teams to review individual HHAs that billed Medicare for unusually large numbers of services. The Health Insurance Portability and Accountability Act of 1996 (HIPAA) also contained measures to control fraud and abuse by HHAs. For example, it stipulated that any physician who falsely certifies a patient as eligible for home health services is liable for a civil monetary penalty. HIPAA also provided more funding for claims review and other safeguard activities by Medicare’s claims processing contractors. However, the proportion of claims reviewed did not increase substantially. In January 1998, HCFA announced plans to increase the number of claims reviewed to about 1.3 percent, far short of the peak levels in the mid-1980s. As part of the changes included in the BBA, coverage was eliminated for persons whose only skilled service need was venipuncture (the drawing of blood). The BBA Changed the Medicare Payment Method to Control Spending Before the BBA, HHAs were paid on the basis of their costs, up to preestablished per-visit limits. In 1996, these limits ranged from $46 for home health aide visits to $91 for skilled nursing visits, to a high of $130 for medical social services. While payments varied by the type of visit, there was no definition of what actually constituted a home health visit, such as the time spent with the patient or the services provided. There were no incentives to control the volume of services delivered, and as a result, HHAs could enhance their revenues by providing more beneficiaries with more visits. The BBA mandated substantial changes to Medicare’s method of paying for home health services. 
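The pre-BBA cost-based method reduces to paying the lesser of an agency's reported cost per visit and the applicable per-visit limit. A minimal sketch, using the 1996 limits cited above; the per-visit costs passed in are hypothetical illustrations, not actual agency data:

```python
# Pre-BBA cost-based payment: Medicare reimburses an HHA's cost per visit,
# capped at a preestablished per-visit limit. Limits are the 1996 figures
# from the text; reported costs below are hypothetical.
PER_VISIT_LIMITS = {
    "home_health_aide": 46,
    "skilled_nursing": 91,
    "medical_social_services": 130,
}

def visit_payment(visit_type: str, reported_cost: float) -> float:
    """Pay the lower of the agency's reported cost and the per-visit limit."""
    return min(reported_cost, PER_VISIT_LIMITS[visit_type])

# A skilled nursing visit costing $100 is paid only up to the $91 limit,
# while a $40 aide visit is reimbursed at its full reported cost.
print(visit_payment("skilled_nursing", 100))
print(visit_payment("home_health_aide", 40))
```

Because payment tracked cost up to the cap with no volume control, an agency operating under the limits could raise total revenue simply by delivering more visits, which is the incentive the BBA changes were meant to remove.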
Beginning October 1, 1997, HHAs were paid under an interim payment system (IPS), which incorporated tighter per-visit cost limits than previously in place and subjected each agency to an annual Medicare revenue cap, which is the product of a per-beneficiary amount and the number of patients it served. The per-beneficiary amount is a blend of each agency’s historical average payments for treating a Medicare beneficiary and a regional or national average amount. To ensure that Medicare payments under the IPS cover its costs, an HHA needs to keep the average cost of its visits below the per-visit limits and keep its average cost per Medicare beneficiary below its per-beneficiary amount. For agencies with previously higher per-visit costs or that provided more visits per user, adjustments to the IPS may involve delivering visits more efficiently, changing the mix or reducing the number of visits provided to each user, increasing the proportion of lower-cost patients it treats, or some combination of these strategies. Beginning in October 2000, HHAs will be paid under the PPS. An agency will receive a single payment for each 60-day episode of care for a Medicare beneficiary, regardless of the services actually delivered during the period. There is no limit on the number of episodes a beneficiary may receive. A base payment will be adjusted to reflect patient characteristics that have been shown to affect service use. Payments for patients expected to use the most services in an episode will be over 5 times the payment for patients expected to use the fewest services. Each episode payment also will be adjusted for differences in labor costs across geographic areas. HCFA will make outlier payments for certain extremely high cost episodes. The BBA required HCFA to set payment levels so that Medicare home health expenditures would be equivalent to what would have been spent under the IPS, with those limits reduced by 15 percent. 
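The IPS constraints amount to two simple tests: an agency's total Medicare revenue is capped at its per-beneficiary amount times the number of patients it served, and the agency covers its costs only if both its average cost per visit and its average cost per beneficiary stay within the respective limits. A minimal sketch with hypothetical dollar figures (actual per-beneficiary amounts blend agency history with regional or national averages, as described above):

```python
# Interim payment system (IPS) sketch. All dollar figures are hypothetical
# illustrations, not actual 1997 limits or per-beneficiary amounts.
def ips_revenue_cap(per_beneficiary_amount: float, patients_served: int) -> float:
    """Annual Medicare revenue cap: per-beneficiary amount x patients served."""
    return per_beneficiary_amount * patients_served

def covers_costs(avg_cost_per_visit: float, per_visit_limit: float,
                 avg_cost_per_beneficiary: float,
                 per_beneficiary_amount: float) -> bool:
    """An HHA breaks even under the IPS only if both averages stay within limits."""
    return (avg_cost_per_visit <= per_visit_limit
            and avg_cost_per_beneficiary <= per_beneficiary_amount)

# An agency with a $3,500 per-beneficiary amount serving 200 patients faces a
# $700,000 annual cap. Efficient visits alone do not help if the average cost
# per beneficiary still exceeds the per-beneficiary amount.
print(ips_revenue_cap(3500, 200))
print(covers_costs(85, 91, 3600, 3500))
```

The second test illustrates why agencies responded by trimming visits per user or avoiding costly patients: an agency can satisfy the per-visit limit yet still lose money if its caseload's average cost per beneficiary is too high.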
This 15-percent reduction has been delayed until October 1, 2001, and the Secretary of Health and Human Services must report to the Congress within 6 months of implementation of the PPS on the need for the 15-percent or other reduction. In previous work on the home health PPS, we noted several concerns about HCFA’s proposed, and now final, design. Given the wide variation in service use, the 60-day unit of payment may not be suitable for all patients. Furthermore, the adjustments to the episode payment may not adequately account for differences in patient needs and, because the adjustments rely heavily on what services are provided to patients, they may be open to manipulation by agencies. Because of uncertainties about the effects of the PPS on beneficiaries, agencies, and the program, we recommended that a risk-sharing arrangement, which limits the losses and gains a provider can experience over a period of time, be added to the PPS. HCFA did not agree with this recommendation, indicating that risk sharing was not needed, given the adjustments included in the PPS, and that risk sharing would make the PPS difficult to implement. While we are sympathetic to HCFA’s concerns and do not believe that the PPS should be delayed in order to implement risk sharing, we nevertheless remain convinced that the magnitude of potential excessive payments to some HHAs and large losses for others warrants this added complexity. We also recommended, and HCFA concurred, that the PPS be modified as appropriate as experience is gained under the PPS. To address concerns about the appropriateness of potential service reductions within episodes and whether each episode of care a beneficiary receives is medically necessary, we recommended that adequate resources be devoted to utilization monitoring and medical review. 
In agreeing with this recommendation, HCFA outlined the various activities it has planned to ensure that the data agencies submit are accurate, that its payments to agencies are appropriate, and that timely utilization data is readily available for possible PPS refinements. Declines in Users and Visits May Reflect Overreaction to IPS Since peaking in 1997, Medicare home health expenditures have declined rapidly so that by 1999 spending was about the same as it was in 1993. The drop in spending reflected a decrease in home health service use, both in the number of beneficiaries using home health care and the number of visits provided to each user. The patterns of decline have resulted in a benefit that involves a larger proportion of skilled services (skilled nursing and therapies) and considerably fewer home health aide services. The fall in visits per user is consistent with the objectives of the IPS but exceeds the reduction necessary for some agencies to stay within the limits of the IPS. The reduction in the number of home health users may be due in part to initiatives to combat fraud and abuse and to some agencies’ overreaction to the IPS, which may have led them to avoid certain types of high-cost patients. Fewer Home Health Users Received Fewer and a Different Mix of Services in 1999 After having been a major driver in home health spending growth from the early 1980s through 1997, the number of FFS beneficiaries receiving home health visits has decreased. The percentage of FFS beneficiaries getting home health care fell 22 percent between 1996 and 1999. In 1996, more than 100 of every 1,000 FFS beneficiaries received home health care, compared with 80 in 1999 (see fig. 1). This decline, which followed a 15-percent increase in home health users between 1994 and 1996, brought the number of users in 1999 to below 1994 levels. The number of visits per home health user also dropped substantially over this period. 
In 1999, the average home health user received 41 visits, compared with 73 visits in 1996 (see table 1). The average number of visits per user decreased for all visit types, although the amount of the decline varied significantly. The most notable drop was in home health aide use. In 1999, home health aide users received, on average, about half the number of home health aide visits that they received in 1996, 37 compared with 73 visits. Users of skilled nursing services in 1999 received almost one-third fewer skilled nursing visits than they did in 1996. Reductions in therapy visits were more modest than home health aide or overall average declines. Because of the disproportionate reduction in aide visits and overall drop in use, post-acute-care services are becoming a more important component of the Medicare home health benefit. Compared with previous years, the average user in 1999 is more likely to receive therapy services, and less likely to receive home health aide services. Nearly one-half of all home health users received physical therapy visits in 1999, up more than 20 percent over 1996. By contrast, 38 percent of users received home health aide services in 1999, which is 22 percent below 1996 levels. Furthermore, aide visits in 1999 comprised a smaller share of all visits (34 percent), which is similar to the share of aide visits in 1987 (see fig. 2). Skilled nursing visits have become a larger share of all visits, comprising nearly half of total visits in 1999, and therapy services have increased their proportion of total visits as well. Combined, skilled services made up two-thirds of all visits in 1999, compared with half of all visits in prior years. These shifts are consistent with care that reflects more short-term, post-acute use rather than care for longer-term chronic conditions. 
Decline in Utilization Reflects Payment Policy as Well as Other Changes The reduction in the visits per user is consistent with agency incentives under the IPS to keep average per-user costs below the per-beneficiary amount, yet it appears that some HHAs may have overreacted to the IPS. Some agencies reduced the number of visits provided to beneficiaries. In addition, some agencies modified their admitting practices to lower the number of beneficiaries likely to need longer-term and more costly services. Our previous work found that HHAs said they had increased their efforts to identify the anticipated service needs of prospective patients; were more reluctant to accept longer-term, expensive patients; and stepped up their monitoring of patients’ needs for timely discharge. These results are consistent with a MedPAC-sponsored survey in which some HHAs reported that because of the IPS, they were no longer taking Medicare patients they previously would have admitted. The types of patients HHAs were most likely to report they no longer admitted or discharged sooner included longer-term, chronic, and diabetic patients, all of whom are generally associated with longer-term utilization and heavy use of aide services. Some agencies responded to the IPS by reducing per-beneficiary costs more than would have been necessary to remain under the per-beneficiary amounts. The per-beneficiary amounts, which were based on 1994 cost data and updated annually, essentially used service levels in that year as the standard. However, home health service use in 1999 dipped below 1994 levels. The average home health user received 41 visits in 1999, compared with 65 visits in 1994. Moreover, the IPS had no limitations on the number of beneficiaries that an agency could serve and be paid by Medicare. Yet, the proportion of FFS beneficiaries receiving home health services in 1999 was 10 percent lower than in 1994. 
Other policy initiatives, such as Operation Restore Trust, which increased scrutiny of claims, and stronger physician certification requirements, may have prompted HHAs to be more vigilant in their admissions and discharge processes. In our previous work on agency closures, we found that the caseload of agencies that had stopped serving Medicare beneficiaries included patients who were ineligible for Medicare home health care. In a study of four states, HHS’ Office of Inspector General found that improper or highly questionable home health services dropped from 40 percent of the total in 1995 to 19 percent of services in 1998. In the MedPAC survey, 77 percent of agencies reported an increased reluctance on the part of physicians to refer Medicare patients for services. HHAs told us that a drop in physician referrals and the elimination of venipuncture as a qualifying service for home health care reduced agency caseloads. Beneficiaries, Providers, and Areas With Highest Service Use Experienced Largest Declines The historically wide variation in home health service use across beneficiaries, types of providers, and geographic areas has narrowed substantially because of disproportionate declines in utilization among the highest users in these categories. The number of longer-term beneficiaries receiving 150 or more home health visits per year dropped by two-thirds, compared with a 22-percent reduction in all users. High-visit HHAs accounted for a disproportionate share of the overall utilization decline after 1996, as well as a greater share of the increase before 1996. Among HHAs that historically delivered the most services, the average number of visits per user decreased more than for all agencies. And the states with the highest utilization experienced greater declines after 1996 compared with the rest of the country, although wide variation in use persists. 
While rural areas experienced greater reductions compared to urban areas in the proportion of beneficiaries using services, rural users continue to receive more visits. Larger Declines Among High-Use Patients Shift Benefit Toward Short-Term Use Long-term home health users, those receiving 150 or more home health visits per year, declined dramatically between 1996 and 1999, both in absolute numbers and as a proportion of all home health users. After substantial increases, the number of high-use beneficiaries per 1,000 FFS enrollees dropped 67 percent from 1996 to 1999, three times the decline among all users (see table 2). As a result, high-use beneficiaries as a proportion of total users fell by half over this period (see fig. 3). Conversely, the number of beneficiaries receiving fewer than 10 visits increased, and their share of all home health users rose from 22 to 31 percent. High-Visit HHAs Experienced Steeper Declines in Utilization The difference in utilization across HHAs has declined since 1996, but substantial variation continues. High-visit HHAs, the 20 percent of HHAs with the highest average number of visits per user in 1996, experienced greater early increases, followed by larger declines than other agencies. In 1996, these HHAs provided an average of 151 visits per user, a 30-percent increase over 1994 levels, but this fell by over half to 67 visits in 1999 (see table 3). By contrast, historically low-visit HHAs continued to reduce service provision between 1996 and 1999 by 15 percent. Given their steeper rate of decline, high-visit HHAs accounted for a disproportionate share of the total drop in visits, even after controlling for the mix of HHAs participating in the Medicare program. Among HHAs serving Medicare beneficiaries in 1994, 1996, and 1999, over one-third of the recent reduction in visits was attributable to high-visit HHAs. We found similar patterns of changes in utilization across agency ownership categories (see fig. 4). 
Between 1994 and 1996, for-profit HHAs increased their service provision more than other agencies, and then reduced visits by almost half between 1996 and 1999. Despite Larger Declines in High-Use States, Wide Variation Among States Persists The wide range of utilization among HHAs is likewise seen across states, as are the substantial changes in use over time. The difference in visits per user between the highest- and lowest-utilization states has increased since 1994 (see app. II). In 1999, there was over a fourfold difference in average visits per user between the lowest-utilization state (Oregon) and the highest (Louisiana) (see fig. 5), and over a threefold difference among states in the number of home health users per 1,000 FFS Medicare beneficiaries (see app. III). Although the range in utilization remains large across states, there were fewer states with extremely high use levels in 1999 than there were in 1996. From 1994 to 1996, utilization in the eight states with the highest usage rates in 1996 grew at double the rate of other states. By 1999, visits per user in these states had fallen by 47 percent from 1996 levels, compared with a 39-percent decrease for the rest of the country. These same eight states also had a greater reduction in the number of users per 1,000 FFS Medicare beneficiaries (33 percent, compared with a 19-percent reduction for the rest of the United States). Fewer Rural Home Health Users, but Visits per Patient Remained Higher Than for Urban Users In 1999, 75 out of 1,000 Medicare beneficiaries in rural areas received home health services, compared with 82 beneficiaries per 1,000 who lived in urban areas. The number of home health users in rural areas declined more than in urban areas between 1996 and 1999, although the number of visits rural users received remained higher (see table 4). Rural beneficiaries on average received 15 percent more visits than their urban counterparts, primarily because of more home health aide visits. 
HHA Response to PPS May Cause Some Providers to be Overpaid and Increase Program Spending In the past, home health service provision has fluctuated in response to changes in Medicare’s payment and coverage policies. The PPS incorporates new incentives, and agencies are likely to respond by modifying how they care for Medicare beneficiaries in both the services provided within an episode of care and the number of episodes provided to each patient. HHA behavior could result in substantial overpayments relative to the level of services actually delivered and huge increases in Medicare home health spending. Adequate controls are necessary to mitigate these risks. Previous Spending Patterns Suggest HHAs Are Likely to Respond to PPS Incentives HHAs appear to have responded to previous Medicare payment incentives by changing their patterns of service delivery (see table 5). In 1985, legislation more than doubled HCFA’s funding for home health claims review, after which Medicare outlays grew only 1 percent annually through 1988. Restrictions on coverage were relaxed as a result of the Duggan v. Bowen lawsuit decision in 1989, followed by spending growth at an annual rate of 30 percent. Utilization peaked in 1997 when BBA changes were implemented. Under the IPS, agencies have faced strong financial incentives to control the average number of visits and the average cost of care delivered to their patients. Once again, Medicare policies appear to have affected the delivery of services, as spending decreased 32 percent between 1998 and 1999. The PPS, to be implemented October 1, 2000, will incorporate further major policy changes for Medicare that could have a profound effect on home health service use. Instead of per-visit limits and controls on the average costs of treating Medicare patients, HHAs will receive one payment for each 60-day episode of care, regardless of the actual services provided. 
Agencies will be rewarded financially for keeping their per-episode costs below the payment rate and thus will have a strong incentive to reduce the number of visits provided during an episode and to shift to a less costly mix of visits. Historical responses to policy changes suggest that agencies are likely to respond to the incentive to reduce services provided within an episode and to increase the number of episodes they deliver. PPS Payment Rates May Allow Some Agencies to Increase Visits and May Be Excessive for Others The PPS will use payment rates based on 1998 home health spending and utilization data. Although by 1998 home health care utilization had already started falling from its peak in 1997, the PPS rates will still be based on an average experience that is higher than current usage. Thus, the episode payments could present an ample cushion for many agencies. Not only could the episode amounts allow for more visits during a 60-day period than the average agency is now providing, but because there is no limit on the number of episodes an HHA may provide to a patient, agencies may revert to treating beneficiaries for longer periods. The adjustments HHAs may make to adapt to the episode-based PPS will depend on their current service patterns. Agencies that have continued to incur expenses above the national average will be pressured to lower their episode costs, which is likely to require decreasing the number of visits provided or shortening their duration. Agencies with below-average costs, probably reflecting fewer average visits for a given episode, will be rewarded financially under the PPS. Some of these agencies may increase service provision. Others, however, may choose to maintain their relatively low expenses (and probably low visit levels) or reduce services even further, thereby increasing profits. In such cases, the PPS likely will pay too much relative to the services delivered in each episode. 
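The per-episode incentive described above can be illustrated with a simple calculation. The payment and cost figures below are hypothetical round numbers chosen for illustration, not actual PPS rates:

```python
# Illustration of the fixed episode-payment incentive: the 60-day payment
# does not change with the number of visits delivered, so an agency's
# margin grows as it reduces visits. All dollar figures are assumed.

EPISODE_PAYMENT = 2000.0   # assumed average 60-day episode payment ($)
COST_PER_VISIT = 70.0      # assumed average cost per visit ($)

def episode_margin(visits: int) -> float:
    """Agency gain (or loss) on one episode at a given visit count."""
    return EPISODE_PAYMENT - visits * COST_PER_VISIT

# An agency near the historical average of 27 visits per episode:
print(episode_margin(27))   # 110.0 -- a thin margin
# The same agency after cutting back to 15 visits per episode:
print(episode_margin(15))   # 950.0 -- same payment, much larger margin
```

Because the payment is identical in both cases, every visit removed flows directly to the agency's margin, which is the overpayment risk the report describes.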
We noted in our April 2000 report that the adjusters to the basic payment rate to reflect patients’ needs are more sensitive to differences in the amount of therapy services provided than to differences in patients’ clinical indicators. We remain concerned that the financial benefit of providing more therapy services to receive higher payments may interfere with the goal of the PPS to provide payments that support efficiently delivered care that meets patients’ needs. Incentive to Provide More Episodes May Result in Increased Program Spending Agencies can enhance their revenues by serving more longer-term users and extending the length of time they serve patients in order to be paid for additional episodes. For some patients, the scheduling of visits could determine whether an agency is paid for one episode or two. In addition, the design of the PPS allows agencies to receive a full episode payment for a small number of visits. While the episode payment is based on an average of 27 visits, agencies can receive an episode payment if they provide as few as 5 visits. As 16 percent of episodes in 1998 consisted of one to four visits, adding only a few visits would allow the agency to receive the full episode payment. The budgetary implications of growth in the number of episodes are considerable. HCFA has projected 5.3 million full episodes for 2001, almost 13 percent fewer than in 1998. Because the industry has historically responded to changes in payment policy in ways that enhanced agency revenues, this projection may not adequately anticipate potential service growth in response to the PPS’ strong incentives. If the number of episodes in 2001 exceeds HCFA’s projection by as little as 5 percent, program expenditures could be roughly half a billion dollars more than projected. Program Controls May Be Inadequate to Counter PPS Incentives HCFA has included three mechanisms under the PPS to counter the incentives to stint on services and generate additional episodes. 
First, HCFA will curtail gross overpayments for very low-service episodes by paying on a per-visit basis (through the low-utilization payment adjustment) when fewer than five services are provided in a 60-day period. Second, adjustments to the payment for an episode can be made if a significant change in patient condition occurs. The episode payment can be raised on a prorated basis if a patient’s condition deteriorates or if therapy service provision increases after the beginning of an episode; the payment can be decreased on a prorated basis if the home health agency reports significant improvement in a patient’s condition during the course of care that changed the required services. The third mechanism is a requirement for medical review of a portion of claims to detect underservice and unnecessary episodes. For fiscal year 2001, HCFA has targeted just over 2 percent of home health claims for review, even though provider incentives will be different than under previous payment methods. HCFA has characterized its planned utilization monitoring and medical review activities as similar to reviews conducted before the implementation of the PPS, when the payment incentives were different. It is unclear whether these controls, in combination with HCFA’s planned activities and continued anti-fraud-and-abuse activities, will be sufficient to counter incentives to provide fewer services within an episode and to generate additional episodes, especially given agencies’ historical ability to quickly respond to such incentives. Furthermore, the lack of standard definitions of appropriate home health care will confound efforts to identify instances of excessive use or inadequate care. Conclusions The fluctuations in Medicare home health use suggest that agencies will continue to respond to their payment and policy environments by changing the volume and mix of the services they provide to Medicare beneficiaries. 
Indeed, the PPS is based on the premise that appropriate financial incentives cause HHAs to deliver services more efficiently. Previously, we expressed concern that the wide, unexplained variation in service use and inadequate patient-level payment adjusters could result in substantial underpayments to some agencies and for some types of patients and overpayments for others under a PPS based on national average costs. After examining HHA responses to the IPS and the basis for Medicare’s PPS, we continue to believe that additional protections for beneficiaries, agencies, and the program need to be incorporated into the payment mechanism through a risk-sharing arrangement that limits the aggregate losses or gains for each agency. Risk sharing would insulate agencies from extreme financial losses, protect beneficiaries from impaired access or inadequate care, and shield Medicare from burgeoning expenditures. HCFA disagreed with our recommendation that it implement a risk-sharing mechanism in conjunction with the PPS. HCFA argued that doing so would complicate the administration of the payment system and that the mechanism was not needed because certain features of the PPS, such as the case mix adjustment mechanism and the potential for unlimited episodes, would adequately protect beneficiaries and the program. We acknowledge HCFA’s concerns that a risk-sharing arrangement adds administrative complexity to the PPS, but believe that the uncertainties about appropriate payment levels, as well as the lack of consensus regarding what constitutes adequate treatment, require this payment system modification. Further, we continue to have reservations about the adequacy of some of the features of the PPS that HCFA believes will offer protections from any unintended consequences of the new payment system. 
A risk-sharing arrangement would minimize excessive payments to some agencies and extreme losses for others, and it would moderate incentives to underserve beneficiaries and inappropriately change treatment patterns. Given the number of agencies and beneficiaries affected, and the potential effect on Medicare expenditures, we believe the added complexity engendered by risk sharing is warranted. As service use changes in response to the PPS, we and HCFA agree that it will be important to refine the payment system. The rates will need to be evaluated to ensure that HHAs are not overpaid relative to the services provided. The Secretary of Health and Human Services’ report, due by April 1, 2001, that will evaluate the need for a 15-percent payment reduction will be an important first step in assessing the adequacy of current payment rates. Ongoing refinements of the payment system, including reconsideration of the episode length, the average payment rate, and the patient-level payment adjusters, will continue to be needed to account for changes in HHA service delivery and beneficiary needs. Even as the system is improved, however, payment mechanisms alone may not be adequate to ensure appropriate service use. As we previously recommended and as was agreed to by HCFA, sufficient resources must be devoted to ensuring that any service reduction within episodes is appropriate and that each episode of care a beneficiary receives is medically necessary. Matter for Congressional Consideration Given the uncertainties for beneficiaries, HHAs, and the Medicare program associated with the home health agency PPS, we believe that the Congress should consider requiring HCFA to implement a risk-sharing arrangement under the PPS to moderate excessive HHA gains or losses as soon as practicable. We believe that a risk-sharing arrangement would offer protection to Medicare beneficiaries, home health agencies, and the Medicare program from any unintended consequences of the home health PPS. 
Agency Comments In commenting on a draft of this report, HCFA found our analysis useful in understanding home health utilization and payment trends under the IPS. HCFA concurred that many home health agencies may have overreacted to the IPS by curtailing service provision after 1997 more than was necessary. HCFA also agreed that refinements to the PPS will be an ongoing activity based on HHA behavior and reiterated its commitment to monitor provider responses under the new system to ensure beneficiary access to needed services. While HCFA agrees with us that risk sharing in conjunction with the PPS is one option to moderate inappropriate behavior, it continues to have reservations about implementing such a provision. HCFA also provided technical comments, which we incorporated in the final report as appropriate. HCFA raised concerns about a risk-sharing provision. First, it believes that a risk-sharing arrangement that limits HHA profits or losses through a comparison of Medicare payments with Medicare costs undermines the incentives of the PPS. HCFA said that this would encourage HHAs to increase their costs—potentially in ways unrelated to patient care—thus rewarding provider inefficiency. HCFA also said that costs are not the best measure of whether patients’ service needs are being met. Further, it is concerned that relying on costs in the payment system perpetuates the need for an elaborate cost settlement reconciliation system. Because of these concerns, HCFA prefers a visit-based measure of utilization to correct inappropriate behavior. HCFA is also concerned that HHAs need time to adapt to the new payment system and therefore that it would be premature to immediately implement risk sharing before HHA responses to the PPS can be evaluated and before PPS adjustments, if any, are made on the basis of observed behavior. 
Further, HCFA believes that HHAs compete for patients on the basis of service delivery and that competition among HHAs will be a primary driver of agency behavior and performance under the PPS. We agree with HCFA that a visit-based approach to moderating inappropriate behavior would improve the current PPS, but we continue to believe that a risk-sharing arrangement based on a comparison of Medicare payments and costs is preferable. First, it offers HHAs more flexibility than a visit-based approach with respect to the services they provide under the PPS, because HHAs could balance visit costs, mix, and volume in meeting beneficiary care needs and keeping their costs in line with Medicare payments. Second, because cost-based risk sharing depends on HHA cost data, using this information in conjunction with the PPS could improve cost reporting data, which will be critical to evaluating the PPS. We acknowledge that a risk-sharing arrangement based on agency costs lessens the incentive for an HHA to cut its costs, but we believe that it could be designed in a way that would offset any incentive to maintain high costs. For example, if risk sharing always required HHAs to incur some portion of their losses, agencies would continue to have an incentive to lower their costs. Further, HCFA does not acknowledge the protection afforded by a risk-sharing approach against Medicare overpayments for episodes or Medicare expenditure growth due to increased numbers of episodes, which we believe are important justifications for this payment modification. Finally, we believe that risk sharing is an important tool in moderating the incentive HHAs can have under the PPS to stint on services and to protect Medicare patients from underservice. 
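One way to make the loss-retention idea concrete is a symmetric risk corridor: gains or losses within a band are kept in full, and the agency retains only a share of any amount beyond the band. The sketch below is a hypothetical design for illustration; the corridor width and sharing percentage are assumed, not a GAO or HCFA specification:

```python
# Hypothetical symmetric risk-sharing corridor. Because the agency always
# retains `agency_share` of gains and losses beyond the corridor, it keeps
# an incentive to lower costs while extreme outcomes are moderated.

def shared_outcome(payments: float, costs: float,
                   corridor: float = 0.05, agency_share: float = 0.5) -> float:
    """Return the agency's net gain (+) or loss (-) after risk sharing.

    Outcomes within +/- `corridor` of total payments are kept in full;
    beyond that, the agency keeps only `agency_share` of the excess.
    """
    raw = payments - costs
    limit = corridor * payments
    if abs(raw) <= limit:
        return raw
    shared = limit + agency_share * (abs(raw) - limit)
    return shared if raw > 0 else -shared

# Within the corridor: a $40,000 gain on $1 million in payments is kept whole.
print(shared_outcome(1_000_000, 960_000))    # 40000
# Beyond it: of a $200,000 gain, the agency keeps 50,000 + 0.5 * 150,000.
print(shared_outcome(1_000_000, 800_000))    # 125000.0
# Losses are moderated symmetrically.
print(shared_outcome(1_000_000, 1_200_000))  # -125000.0
```

Note that the retained amount still rises as costs fall, so the design does not reward maintaining high costs, which is the offsetting-incentive point made above.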
We believe that risk sharing should be implemented as soon as practicable because our analyses of recent and historical utilization and spending data indicate that agencies respond dramatically and quickly, but not necessarily appropriately, to changes in Medicare payment policies. In its comments, HCFA noted the rapid growth in utilization between 1990 and 1997, and agencies’ overreaction to the IPS between 1997 and 1999. Similarly, we believe that agencies may immediately respond to the incentives of the PPS in ways that may jeopardize beneficiary access to services or quality of care and increase program expenditures. We agree with HCFA that agency competition for patients and medical review and monitoring efforts may deter HHAs from underserving beneficiaries. However, we remain concerned that these features may be insufficient to counter the financial incentives to stint on services within an episode and to provide unnecessary episodes. Further, relying on competition to enforce appropriate agency behavior may place unrealistic expectations on a vulnerable population to have information about agencies’ provision of services and assumes that beneficiaries have choices in selecting a provider, which is not necessarily true for all beneficiaries, particularly those located in rural areas. Given the potential limitations of competition and medical review in guarding against potential underservice, risk sharing could provide HCFA with an additional tool to protect Medicare’s beneficiaries. HCFA’s comments are included as appendix IV. We are sending copies of this report to the Honorable Nancy-Ann Min DeParle, Administrator of HCFA, and interested congressional committees. We will also make copies available to others upon request. If you have any questions about this report, please call me or Laura Dummit, Associate Director, at (202) 512-7119. Major contributors included Carol Carter, Jean Chung, James E. Mathews, Kara Sokol, and Wayne Turowski. 
Scope and Methodology We conducted our analyses using Medicare provider, claims, and beneficiary files for calendar years 1994, 1996, and 1999. We chose 1994 as our starting point because its patterns of utilization and spending were used to set the interim payment system (IPS) payment limits. We analyzed 1996 data because the 1997 home health claims data include both pre-IPS and IPS claims. We did not analyze 1998 data, since HCFA had well-documented problems constructing this claims file. We thus selected 1999 claims data to reflect utilization patterns under the IPS. Agency ownership and location were extracted from HCFA’s end-of-year Provider of Services files for 1994, 1996, and 1999. We included in our analysis only those providers that were listed as active in each year. We used 100 percent of Medicare claims from HCFA’s home health Standard Analytical Files (SAF), final action claims, for 1994, 1996, and 1999 to analyze patterns and trends in home health utilization. These files were edited in three ways. First, the claims file for each year was compared with the corresponding Medicare Denominator File to exclude claims for beneficiaries who had enrolled in a Medicare managed care plan at any point in the year. Second, the HHAs included in the claims data were compared with the Provider of Service files for each year, and only claims from agencies participating in Medicare were included in our analyses. Last, we excluded aberrant values for service counts. We used a 1999 claims file that was generated in May 2000, although HCFA’s SAFs are usually not complete until June of the year following the claims year. After analyzing the distribution of claims by month, we concluded that the file was roughly 95 percent complete. Subsequent comparisons with HCFA’s projections indicated that our estimate of the number of beneficiaries receiving home health services in 1999 was 4 percent lower than HCFA’s final total. 
As a result, numbers presented in this report are likely to slightly understate actual utilization in 1999 and may slightly overstate the declines reported between 1996 and 1999. In 1999, HCFA implemented a policy change that affected how home health agencies reported the units of service when submitting claims for payment to Medicare. Until July 1, 1999, units represented the number of visits; starting October 1, units represented the number of 15-minute increments making up the visit; and between July 1 and September 30, both counting methods were used on the claims. We incorporated these policy changes in our calculation of units from the claims files and verified our calculations by analyzing the monthly distribution of visits during 1999. For our analysis of changes in the number of Medicare beneficiaries using home health services, we controlled for changes in Medicare enrollment by using home health users per 1,000 Medicare fee-for-service (FFS) beneficiaries. Our analysis only reflects Medicare FFS enrollees because HCFA data on service use exclude those enrolled in managed care plans and because the payment methods of interest in our analysis only apply to those receiving home health care under FFS. In analyzing geographic characteristics, we used the beneficiary’s residence, reflecting HCFA’s decision to pay agencies on the basis of where the patient resides, not where the agency is located. Because beneficiaries may receive care from multiple agencies, which could be of different types, we counted each unique beneficiary/agency combination as a separate home health user when analyzing service use by agency characteristics. As a result, the user counts included in our analyses of HHA characteristics are roughly 10 percent higher than those included in the beneficiary-level data. 
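The date-dependent units calculation described above can be sketched as follows. The record fields, the transition-quarter handling, and the 45-minute average visit length are illustrative assumptions for this sketch, not GAO's actual figures or file layout:

```python
from datetime import date

# Sketch of interpreting the claims "units" field according to the
# reporting rules in effect on the claim's service date, per the 1999
# HCFA policy change described in the text.

VISITS_ERA_END = date(1999, 6, 30)        # through this date, units = visits
INCREMENTS_ERA_START = date(1999, 10, 1)  # from this date, units = 15-min blocks

def visits_from_claim(service_date, units, reported_visits=None):
    if service_date <= VISITS_ERA_END:
        return units                       # units are visit counts
    if service_date < INCREMENTS_ERA_START:
        # Transition quarter: both counting methods appeared on claims;
        # prefer an explicit visit count when present (hypothetical field).
        return reported_visits if reported_visits is not None else units
    # Increment era: units are 15-minute blocks; assume an average
    # 45-minute visit (an illustrative assumption only).
    return round(units / 3)

print(visits_from_claim(date(1999, 5, 1), 10))   # 10
print(visits_from_claim(date(1999, 11, 1), 30))  # 10
```

Verifying the monthly distribution of derived visit counts, as GAO describes, is the natural check that such date-based branching was applied correctly.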
To examine the response of users, agencies, and areas of high utilization to policy changes, we categorized beneficiaries, HHAs, and states as low-use, medium-use, and high-use according to the average number of visits per user in 1996. The low- and high-use cutoff points for beneficiaries and agencies were set such that roughly 20 percent of the observations in 1996 fell into each category, with the remaining group defined as medium-use. High-use states were defined as those with utilization 20 percent or more above the national mean. To control for agencies opening and closing between 1994 and 1999, we created a cohort of agencies open in all 3 years and examined them separately. Their utilization trends were similar to those included in this report. Our analysis of HCFA’s proposed PPS was based on the Federal Register final rule and briefings with HCFA officials. Medicare Home Health Users, by State of Residence, Calendar Years 1994, 1996, and 1999 Average Visits per Medicare Home Health User, by State of Residence, Calendar Years 1994, 1996, and 1999 Comments From the Health Care Financing Administration Ordering Information The first copy of each GAO report is free. Additional copies of reports are $2 each. A check or money order should be made out to the Superintendent of Documents. VISA and MasterCard are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. Orders by mail: U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Orders by visiting: Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders by phone: (202) 512-6000 fax: (202) 512-6061 TDD (202) 512-2537 Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. 
A recorded menu will provide information on how to obtain these lists. To Report Fraud, Waste, or Abuse in Federal Programs Web site: http://www.gao.gov/fraudnet/fraudnet.htm e-mail: fraudnet@gao.gov 1-800-424-5454 (automated answering system)
Pursuant to a congressional request, GAO provided information on Medicare home health care's recent declines in spending, focusing on: (1) the declines in service use underlying the changes in spending; (2) the extent of the changes in use across beneficiaries, home health agencies (HHA), and locations; and (3) any implications these new patterns of home health use have for the impact of the prospective payment system (PPS). GAO noted that: (1) the 48-percent reduction in Medicare home health care spending following the Balanced Budget Act (BBA) of 1997 was due to sharp declines in both the numbers of users and services used; (2) the number of Medicare beneficiaries receiving home health services fell by 22 percent; (3) during the same period, the average number of home health visits received by each user went down 44 percent; (4) changes in home health care varied across agencies and types of users as well; (5) in nearly all instances, declines were greatest for the types of agencies that had provided and the patients who had used the most services in 1996; (6) there was a similar pattern in the drop in usage across states; (7) states that had the highest levels of service use in 1996 had larger declines than states where beneficiaries received fewer services; (8) declines in rural areas were larger than in urban areas; (9) the recent changes in home health utilization occurred at least in part in response to changes in Medicare's payment policies mandated by the BBA; (10) because the new PPS payment rates are based on the historically high utilization in 1998, even after adjusting for projected declines in utilization, they likely will be generous compared with current use patterns; (11) for this reason, home health agency responses to the PPS could result in overpayments relative to services provided while simultaneously raising Medicare spending; (12) under the PPS, Medicare will make a single payment for each 60-day episode of home health care; 
(13) the PPS will give agencies an incentive to increase the episodes of care they provide; and (14) this, in turn, could cause total Medicare home health spending to rise.
DHS Continually Reviews Potential Overstay Records, but Unmatched Arrival Records Remain DHS Reviewed a Backlog of 1.6 Million Potential Overstay Records DHS has taken action to address a backlog of potential overstay records we previously identified in April 2011. Specifically, in April 2011, we reported that, as of January 2011, ADIS contained a backlog of 1.6 million potential overstay records, which included prior nonpriority overstay leads that had not been reviewed, nonpriority leads that continued to accrue on a daily basis, and leads generated in error as a result of CBP system changes. DHS uses ADIS to match departure records to arrival records and subsequently close records for individuals with matching arrival and departure records because either (1) the individual departed prior to the end of his or her authorized period of admission and is therefore not an overstay or (2) the individual departed after the end of his or her authorized period of admission and is therefore an out-of-country overstay. Unmatched arrival records—those records in ADIS that do not have corresponding departure records—remain open and indicate that those individuals are potential in-country overstays. To determine whether an unmatched arrival record is likely to be an in-country overstay, DHS agencies review multiple databases to determine if any information is available to document a departure or a change in immigration status. For example, the review process includes both automated searches, such as searching for immigration benefit application information through a U.S. Citizenship and Immigration Services database, and manual searches, such as determining whether the individual applied for refugee or asylum status. Leads from this backlog review that were deemed enforcement priorities were provided to CTCEU for further review and consideration for enforcement action. Table 1 describes how CTCEU resolved these leads. 
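The matching and classification logic described above can be sketched in simplified form. The field names and in-memory representation here are hypothetical illustrations; ADIS itself applies this matching across millions of records in a large DHS system:

```python
from datetime import date

# Simplified sketch of the ADIS arrival/departure matching outcomes:
# a matched departure closes the record (on time, or out-of-country
# overstay), and an unmatched arrival stays open as a potential
# in-country overstay.

def classify(arrival, departure_date):
    """Classify one arrival record given a matched departure (or None)."""
    if departure_date is None:
        return "open: potential in-country overstay"
    if departure_date <= arrival["admit_until"]:
        return "closed: departed on time, not an overstay"
    return "closed: out-of-country overstay"

rec = {"admit_until": date(2011, 1, 15)}
print(classify(rec, date(2011, 1, 10)))  # closed: departed on time, not an overstay
print(classify(rec, date(2011, 3, 1)))   # closed: out-of-country overstay
print(classify(rec, None))               # open: potential in-country overstay
```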
Since completing this review of the backlog of potential overstay records in the summer of 2011, DHS has continued to review all potential overstay records through national security and law enforcement databases to identify potential threats, regardless of whether the subjects of the records meet ICE’s priorities for enforcement action. This occurs on an ongoing basis such that DHS may identify threats among individuals who were not previously identified as such when new information becomes available in various national security and law enforcement databases. DHS Has More than 1 Million Unmatched Arrival Records As of April 2013, DHS continues to maintain more than 1 million unmatched arrival records in ADIS (that is, arrival records for which ADIS does not have a record of departure or status change). Some of these individuals are overstays, while others have either departed or changed immigration status without an ADIS record of their departure or status change. For example, the individual may have departed via a land port of entry without providing a record of departure or the individual may have applied for immigration benefits using a different name. In addition, these records include those from the previous backlog of unmatched arrival records that were not prioritized for enforcement in the summer of 2011 and have not subsequently been matched against a departure or change of status record. As part of our ongoing work, we are analyzing these data to identify various trends among these unmatched arrival records. For example, our preliminary analysis shows that 44 percent of the unmatched arrival records are nonimmigrants traveling to the United States on a tourist visa, while 43 percent are also tourists but were admitted under the Visa Waiver Program. Figure 1 presents our preliminary analysis of the breakdown of unmatched arrival records by admission class. 
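A breakdown of unmatched records by admission class, like the preliminary analysis described above, amounts to a simple tally. The toy dataset below mirrors the reported shares (44 percent tourist visa, 43 percent Visa Waiver Program); the class labels are illustrative, not actual CBP admission codes:

```python
from collections import Counter

# Toy stand-in for the ADIS extract of unmatched arrival records,
# tallied by admission class.
records = ["B2_tourist_visa"] * 44 + ["VWP_tourist"] * 43 + ["other"] * 13

counts = Counter(records)
total = sum(counts.values())
for admission_class, n in counts.most_common():
    print(f"{admission_class}: {100 * n / total:.0f}%")
```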
We also analyzed the records to assess the amount of time that has elapsed since travelers were expected to depart the country, based on travelers’ “admit until” date. CBP assigns certain nonimmigrants an “admit until” date, by which they must leave the country to avoid overstaying. Figure 2 presents our preliminary analysis of the breakdown of the amount of time elapsed, as of November 2012, since the “admit until” date. The average amount of time elapsed for all unmatched arrival records was 2.7 years. As of April 2013, DHS has not analyzed its unmatched arrival records to identify whether there are any trends in these data that could inform the department’s overstay enforcement efforts. We will continue to evaluate these data as part of our ongoing work. DHS Has Actions Completed and Under Way to Improve Data, but the Effect of These Changes Is Not Yet Known DHS Has Begun Collecting Additional Data and Improved Sharing of Data among Its Databases to Help Identify Potential Overstays Since April 2011, DHS has taken various actions to improve its data on potential overstays. In April 2011, we reported that DHS’s efforts to identify and report on overstays were hindered by unreliable data, and we identified various challenges to DHS’s efforts to identify potential overstays, including the incomplete collection of departure data from nonimmigrants at ports of entry, particularly land ports of entry, and the lack of mechanisms for assessing the quality of leads sent to ICE field offices for investigations. Since that time, DHS has taken action to strengthen its processes for reviewing records to identify potential overstays, including (1) streamlining connections among DHS databases used to identify potential overstays and (2) collecting information from the Canadian government about those exiting the United States and entering Canada through northern land ports of entry. 
First, DHS has taken steps to enhance connections among its component agencies’ databases used to identify potential overstays and reduce the need for manual exchanges of data. For example: In August 2012, DHS enhanced data sharing between ADIS and IDENT. This improved connection provides additional data to ADIS to improve the matching process based on fingerprint identification. For example, when an individual provides fingerprints as part of an application for immigration benefits from U.S. Citizenship and Immigration Services or a visa from the State Department, or when apprehended by law enforcement, IDENT now sends identity information, including a fingerprint identification number, for that individual to ADIS. This additional source of data is intended to help ADIS more effectively match the individual’s entry record with a change of status, thereby closing out more unmatched arrival records. Beginning in April 2013, ICE’s Student and Exchange Visitor Information System (SEVIS) began automatically sending data to ADIS on a daily basis, allowing ADIS to review SEVIS records against departure records and determine whether student visa holders who have ended their course of study departed in accordance with the terms of their stay. Prior to this date, DHS manually transferred data from SEVIS to ADIS on a weekly basis. According to DHS officials, these exchanges were unreliable because they did not consistently include all SEVIS data—particularly data on “no show” students who failed to begin their approved course of study within 30 days of being admitted into the United States. 
Also in April 2013, DHS automated the exchange of potential overstay records between ADIS and CBP’s Automated Targeting System (ATS), which is intended to allow DHS to more efficiently (1) transfer data between the systems for the purpose of identifying national security and public safety concerns, and (2) use matching algorithms in ATS that differ from those in ADIS to close additional records for individuals who departed. These changes have resulted in efficiencies in reviewing records for determining possible overstay leads; however, they do not address some of the underlying data quality issues we previously identified, such as incomplete data on departures through land ports of entry. Furthermore, because many of these changes were implemented in April 2013, it is too early to assess their effect on the quality of DHS’s overstay data. Second, DHS is implementing the Beyond the Border initiative to collect additional data to strengthen the identification of potential overstays. In October 2012, DHS and the Canada Border Services Agency began exchanging entry data on travelers crossing the border at selected land ports of entry. Because an entry into Canada constitutes a departure from the United States, DHS will be able to use Canadian entry data as proxies for U.S. departure records. We have previously reported that DHS faces challenges in its ability to identify overstays because of unreliable collection of departure data at land ports of entry. This effort would help address that challenge by providing a new source of data on travelers departing the United States at land ports on the northern border. In the pilot phase, DHS exchanged data with the Canada Border Services Agency on third-country nationals at four of the five largest ports of entry on the northern border. These data covered entries from September 30, 2012, through January 15, 2013. 
DHS plans to expand this effort to collect data from additional ports of entry and to share data on additional types of travelers. According to DHS officials, after June 30, 2013, DHS plans to exchange data for third-country nationals at all automated ports of entry along the northern border. At that time, DHS also plans to begin using these data for operational purposes (e.g., taking enforcement action against overstays, such as revoking visas or imposing bars on readmission to the country based on the length of time they remained in the country unlawfully). After June 30, 2014, DHS plans to exchange data on all travelers, including U.S. and Canadian citizens, at all automated ports of entry along the northern border. DHS Continues to Face Challenges in Reporting Reliable Overstay Rates, and Recent Changes Have Not Yet Been Fully Implemented DHS has not reported overstay rates because of concerns about the reliability of its data on overstays. According to federal law, DHS is to submit an annual report to Congress providing numerical estimates of the number of aliens from each country in each nonimmigrant classification who overstayed an authorized period of admission that expired during the fiscal year prior to the year for which the report is made. Since 1994, DHS or its predecessors have not reported annual overstay rates regularly because of its concerns about the reliability of the department’s overstay data. In September 2008, we reported on limitations in overstay data, such as missing data for land departures, that affect the reliability of overstay rates. In April 2011, we reported that DHS officials stated that the department had not reported overstay rates because it had not had sufficient confidence in the quality of its overstay data. DHS officials stated at the time that, as a result, the department could not reliably report overstay estimates in accordance with the statute. 
Although the new departure data DHS is collecting as part of the Beyond the Border initiative may allow DHS to close out more potential overstay records in the future, these data are limited to land departures at northern border ports of entry, and as the initiative has not yet been fully implemented, it is too early to assess its effect on strengthening the reliability of DHS’s overstay data for reporting purposes. In February 2013, the Secretary of Homeland Security testified that DHS plans to report overstay rates by December 2013. As of April 2013, DHS was working to determine how it plans to calculate and report these overstay rates. As part of our ongoing review, we are assessing how the changes DHS has made to its processes for matching records to identify potential overstays may affect the reliability of overstay data and DHS’s ability to report reliable overstay rates. DHS Faces Challenges Planning for a Biometric Exit System at Air and Sea Ports of Entry Developing a biometric exit capability has been a long-standing challenge for DHS. Beginning in 1996, federal law has required the implementation of an integrated entry and exit data system for foreign nationals. The Intelligence Reform and Terrorism Prevention Act of 2004 required the Secretary of Homeland Security to develop a plan to accelerate full implementation of an automated biometric entry and exit data system that matches available information provided by foreign nationals upon their arrival in and departure from the United States. Since 2004, we have issued a number of reports on DHS’s efforts to implement a biometric entry and exit system. For example, in November 2009, we reported that DHS had not adopted an integrated approach to scheduling, executing, and tracking the work that needed to be accomplished to deliver a comprehensive exit solution. 
We concluded that without a master schedule that was integrated and derived in accordance with relevant guidance, DHS could not reliably commit to when and how it would deliver a comprehensive exit solution or adequately monitor and manage its progress toward this end. We have made recommendations to address these issues, including that DHS ensure that an integrated master schedule be developed and maintained. DHS has generally concurred with our recommendations and has reported taking action to address them. For example, in March 2012, DHS reported that the US-VISIT office was adopting procedures to comply with the nine scheduling practices we recommended in our November 2009 report and has conducted training on our scheduling methodology. DHS has not yet implemented a biometric exit capability, but has planning efforts under way to assess options for such a capability at airports and seaports. In 2009, DHS conducted pilots for biometric exit capabilities in airport scenarios, as called for in the Consolidated Security, Disaster Assistance, and Continuing Appropriations Act, 2009. In August 2010, we reported on the results of our review of DHS’s evaluation of these pilot programs. Specifically, we reported that there were limitations with the pilot programs—for example, the pilot programs did not operationally test about 30 percent of the air exit requirements identified in the evaluation plan for the pilot programs—which hindered DHS’s ability to inform decision making for a long-term air exit solution and pointed to the need for additional sources of information on air exit’s operational impacts. According to DHS officials, the department’s approach to planning for biometric air exit has been partly in response to our recommendation that DHS identify additional sources for the operational impacts of air exit not addressed in the pilot programs’ evaluation and to incorporate these sources into its air exit decision making and planning. 
As of April 2013, the department’s planning efforts are focused on developing a biometric exit system for airports, with the potential for a similar solution to be rolled out at seaports, according to DHS officials. However, in October 2010, DHS identified three primary reasons why it has been unable to determine how and when to implement a biometric air exit solution: (1) the methods of collecting biometric data could disrupt the flow of travelers through air terminals; (2) air carriers and airport authorities had not allowed DHS to examine mechanisms through which DHS could incorporate biometric data collection into passenger processing at the departure gate; and (3) challenges existed in capturing biometric data at the point of departure, including determining what personnel should be responsible for the capture of biometric information at airports. According to DHS officials, these challenges have affected the department’s planning efforts. In 2011, DHS directed its Science and Technology Directorate (S&T), in coordination with other DHS component agencies, to research “long-term options” for biometric exit. In May 2012, DHS reported internally on the results of S&T’s analysis of previous air exit pilot programs and assessment of available technologies, and the report made recommendations to support the planning and development of a biometric air exit capability. In that report, DHS concluded that the building blocks to implement an effective biometric air exit system were available. 
However, DHS reported that significant questions remained regarding (1) the effectiveness of current biographic air exit processes and the error rates in collecting or matching data, (2) methods of cost-effectively integrating biometrics into the air departure processes (e.g., matching arrival and departure records based on biometric information like fingerprints rather than based on biographic information, such as names and dates of birth), (3) the additional value biometric air exit would provide compared with the current biographic air exit process, and (4) the overall value and cost of a biometric air exit capability. The report included nine recommendations to help inform DHS’s planning for biometric air exit, such as directing DHS to develop explicit goals and objectives for biometric air exit and an evaluation framework that would, among other things, assess the value of collecting biometric data in addition to biographic data and determine whether biometric air exit is economically justified. DHS reported that, by May 2014, it planned to take steps to address the recommendations in its report; however, according to DHS Office of Policy and S&T officials, the department has not yet completed actions in response to these recommendations, although DHS officials reported that DHS has plans to do so to help support development of a biometric air exit concept of operations. For example, DHS’s report recommended that DHS develop explicit goals and objectives for biometric air exit and use scenario-based testing rather than operational pilot programs to inform the concept of operations for biometric air exit. As of April 2013, DHS officials stated that they expect to finalize goals and objectives in the near future and are making plans for future scenario-based testing. 
In addition, DHS’s report stated that new traveler facilitation tools and technologies—for example, online check-in, self-service, and paperless technology—could support more cost-effective ways to screen travelers, and that these improvements should be leveraged when developing plans for biometric air exit. However, DHS officials stated that there may be challenges to leveraging new technologies to the extent that U.S. airports and airlines rely on older, proprietary systems that may be difficult to update to incorporate new technologies. Furthermore, DHS officials stated they face challenges in coordinating with airlines and airports, which have expressed significant reluctance about biometric exit because of concerns over its effect on operations and potential costs. To address these concerns, DHS is conducting outreach and soliciting information from airlines and airports regarding their operations. DHS officials stated that the goal of its current efforts is to develop information about options for biometric exit and to report to Congress in time for the fiscal year 2016 budget cycle regarding (1) the additional benefits that biometric exit provides beyond enhanced biographic exit and (2) costs associated with biometric exit. As part of our ongoing work, we are assessing DHS’s progress in meeting its goals for addressing the recommendations in its biometric exit report by May 2014. We plan to report on the results of our analysis in July 2013. Chairman Miller, Ranking Member Jackson Lee, and members of the subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. GAO Contact and Staff Acknowledgments For information about this statement, please contact Rebecca Gambler at (202) 512-8777 or gamblerr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
Other individuals making key contributions included Kathryn Bernet, Assistant Director; Susan Baker; Frances A. Cook; Alana Finley; Lara Miklozek; Amanda Miller; and Ashley D. Vaughan.
Each year, millions of visitors come to the United States legally on a temporary basis either with or without a visa. Overstays are individuals who were admitted into the country legally on a temporary basis but then overstayed their authorized periods of admission. DHS has primary responsibility for identifying and taking enforcement action to address overstays. Within DHS, U.S. Customs and Border Protection is tasked with inspecting all people applying for entry to the United States. U.S. Immigration and Customs Enforcement is responsible for enforcing immigration law in the interior of the United States. In April 2011, GAO reported on DHS's actions to identify and address overstays and made recommendations to strengthen these processes. DHS concurred and has taken or is taking steps to address them. Since April 2011, DHS has reported taking further actions to strengthen its processes for addressing overstays. This testimony discusses GAO's preliminary observations on DHS's efforts since April 2011 to (1) review potential overstay records for national security and public safety concerns, (2) improve data on potential overstays and report overstay rates, and (3) plan for a biometric exit system. This statement is based on preliminary analyses from GAO's ongoing review of overstay enforcement for this subcommittee and other congressional requesters. GAO analyzed DHS documents and data related to overstays and interviewed relevant DHS officials. GAO expects to issue a final report on this work in July 2013. DHS provided technical comments, which were incorporated as appropriate. Since GAO reported on overstays in April 2011, the Department of Homeland Security (DHS) has taken action to address a backlog of potential overstay records by reviewing such records to identify national security and public safety threats, but unmatched arrival records remain in DHS's system. 
In April 2011, GAO reported that, as of January 2011, DHS's Arrival and Departure Information System (ADIS) contained a backlog of 1.6 million potential overstay records. DHS uses ADIS to match departure records to arrival records and subsequently close records for individuals with matching arrival and departure records. Unmatched arrival records--those that do not have corresponding departure records--remain open and indicate that the individual is a potential overstay. In the summer of 2011, DHS reviewed the 1.6 million potential overstay records. As a result, DHS closed about 863,000 records and removed them from the backlog. Since that time, DHS has continued to review all potential overstay records for national security and public safety concerns. However, as of April 2013, DHS continues to maintain more than 1 million unmatched arrival records in ADIS. GAO's preliminary analysis found that nonimmigrants traveling to the United States on a tourist visa constitute 44 percent of unmatched arrival records, while tourists admitted under a visa waiver constitute 43 percent. The remaining records include various types of other nonimmigrants, such as those traveling on temporary worker visas. DHS has actions completed and under way to improve data on potential overstays and report overstay rates, but the impact of these changes is not yet known. DHS has streamlined connections among databases used to identify potential overstays, among other things. Although these actions have resulted in efficiencies in processing data, they do not address underlying data quality issues, such as missing land departure data. Further, because many of these changes were implemented in April 2013, it is too early to assess their effect on the quality of DHS's overstay data. DHS continues to face challenges in reporting reliable overstay rates. Federal law requires DHS to report overstay estimates, but DHS or its predecessors have not regularly done so since 1994. 
In September 2008, GAO reported on limitations in overstay data that affect the reliability of overstay rates. In April 2011, GAO reported that DHS officials said that they have not reported overstay rates because DHS has not had sufficient confidence in the quality of its overstay data and that, as a result, DHS could not reliably report overstay rates. In February 2013, the Secretary of Homeland Security testified that DHS plans to report overstay rates by December 2013. DHS faces challenges planning for a biometric exit system at air and sea ports of entry. Beginning in 1996, federal law has required the implementation of an integrated entry and exit data system for foreign nationals. As of April 2013, DHS's planning efforts are focused on developing a biometric exit system for airports, with the potential for a similar solution at sea ports. However, in October 2010, DHS identified key challenges as to why it has been unable to determine how and when to implement a biometric air exit capability, including challenges in determining what personnel should be responsible for the capture of biometric information. GAO is assessing DHS's plans and efforts in these areas and plans to report on its results in July 2013.
Background The special operations forces’ ASDS is a battery-powered, dry interior submersible that is carried to a deployment area by specially configured 688-class submarines. ASDS is intended to provide increased range, payload, on-station loiter time, endurance, and communication/sensor capacity over current submersibles. The 65-foot-long, 8-foot-diameter ASDS is operated by a two-person crew and includes a lock out/lock in diving chamber. SOCOM is the resource sponsor and provides the requirements and funding, and the Naval Sea Systems Command—the Navy’s technical expert for major undersea systems—is the program manager responsible for overseeing the prime contractor, Northrop Grumman Corporation. Over the years, the ASDS acquisition milestone decision authority has resided at various levels within DOD. In 1994, the Navy awarded a $70 million cost-plus incentive fee contract to Westinghouse Electric Corporation’s Oceanic Division in Annapolis, Maryland, for detailed design, construction, testing, documentation, refurbishment, and delivery of the first ASDS with the option to build one or two more systems. In 1996, Northrop Grumman bought this division and assumed responsibility for the Annapolis division’s performance on the ASDS contract. In December 2005, ASDS program management lead was reassigned to Northrop Grumman in Newport News, Virginia, which has greater technical experience in submarines, and Northrop Grumman Electronic Systems in Annapolis is assisting. The original program’s schedule called for delivery of the first boat in July 1997. However, numerous technical problems with key subsystems contributed to performance shortfalls, schedule delays, and cost increases. 
In August 2001, the Navy program office took what it called “conditional” preliminary acceptance of the first boat from Northrop Grumman under an agreement that all requirements needed for final acceptance would be completed within 1 year, requirements that the contractor was unable to accomplish. On June 26, 2003, the Navy elected to accept the ASDS boat in an “as is” condition, and incorporated additional waivers, deviations, and engineering change proposals into the contract. As a result, acceptance of the ASDS boat did not require any additional actions on the part of the contractor. Further, the Navy did not seek any consideration from the contractor because Navy officials believed at the time that the ASDS met virtually all of its requirements. By that time, the total costs for the ASDS development contract had already increased from $70 million to more than $340 million. In October 2003, following the Navy’s acceptance of ASDS, the Navy negotiated and signed a basic ordering agreement (BOA) with Northrop Grumman to provide a range of goods and services to support the ASDS program. For example, the BOA enabled the Navy to order engineering and design services; overhaul, repair, and inspection services; logistical support; and spare parts and materials for a 3-year period. The BOA was extended an additional year in 2006. To expedite the contracting process, the BOA established specific labor rates for different types of service, such as program office, technical, engineering, operations, and quality support. Through March 2007, the Navy issued 26 delivery orders with an estimated value of over $84 million. The duration of the current BOA extends through September 2007, and the Navy anticipates awarding a new BOA for another 2 years while overall ASDS performance is reevaluated. Under another BOA, Northrop Grumman is also providing ASDS engineering services, such as engineering changes and drawing updates, for Portsmouth Naval Shipyard. 
In assessing the ASDS program, we drew heavily from our previous work on best practices in defense acquisitions. This work has shown that both a sound business case and effective contracting strategy are essential for success. A sound business case involves firm requirements and mature technologies, a knowledge-based acquisition strategy, realistic cost and schedule estimates, and sufficient funding. An effective contracting strategy involves selecting a contractor with proper expertise, choosing contracting approaches that effectively balance risk, and effectively managing and assessing contractor performance, all of which are intended to promote accountability for outcomes and protect the taxpayers’ interests. Critical flaws in the Navy’s initial business case contributed to ASDS’s acquisition challenges and increased the government’s risk. We have previously reported that the capabilities required of the boat outstripped the contractor’s resources in terms of technical knowledge, time, and money. The Navy’s overly optimistic assumptions about the contractor’s ability to readily incorporate existing submersible and commercial technology into the ASDS resulted in a mismatch between technologies and needed capabilities and an ill-advised decision to combine developmental and operational testing. Further information on the technical, cost, and management issues that undermined the ASDS’s initial business case may be found in appendix I. Navy Assumed Responsibility for ASDS Problems through Its Decisions and Contracting Approach As existing problems mounted during development and new ones arose after acceptance, the Navy increasingly assumed responsibility for resolving them. This responsibility required additional time and money over the targets that had been established by the ASDS development contract. Since accepting the ASDS in June 2003, SOCOM has continued to invest millions of dollars to fix both old and new problems. 
The prime contractor has had little incentive to control costs given the Navy’s choice of certain cost-reimbursable contract types. Navy officials say they accept more performance risk because ASDS relies on new, highly technical subsystems that are inherently risky. The Navy’s risk also increased because it authorized work before reaching agreement on key contract terms and conditions and failed to finalize them in a timely manner, indicating a lack of discipline in the contracting process. Over Time, the Navy Assumed the Cost and Responsibility for Correcting ASDS Problems Resolving the flawed initial business case required additional time and money, far exceeding the target cost and delivery time frames established under the ASDS September 1994 development contract. For example, the development contract was awarded for about $70 million with an expected delivery date of the first ASDS boat in July 1997. When the contractor proved unable to meet these time frames, the Navy found itself having to rebaseline the program in 1998 and 1999, more than doubling the estimated development cost and extending the delivery schedule by more than 2 years. Ultimately, the development cost almost quintupled. During the course of ASDS’s development, the Navy gradually assumed responsibility for addressing ASDS’s technical problems by awarding separate contracts to other organizations to develop key components. The contractor’s lack of expertise in key technologies, such as the propeller and battery, contributed to the Navy’s decision to seek outside expertise to develop alternative solutions. More information on these actions is provided in appendix I. The Navy finally accepted the first ASDS boat in June 2003 in an “as is” condition. Since the June 2003 acceptance, however, SOCOM has continued to invest millions of dollars to address old and new technical and reliability issues. 
Through March 2007, the Navy has issued delivery orders with an estimated value of about $84 million under the BOA with Northrop Grumman. Much of the funding has been for efforts to correct design deficiencies and to improve ASDS’s reliability. Navy’s Cost-Reimbursable Contracts Provided Little Incentive to the Contractor to Control Costs Arrangements that appropriately share risk, incentivize performance, and provide for accountability promote successful acquisition outcomes. The government can choose from a range of contract types, which gives it flexibility to acquire goods and services. The selection of contract type is generally a matter of risk allocation: fixed-price contracts place the risks associated with performing the contract on the contractor; cost-type contracts share the risk between the contractor and the government. The risk associated with performance shifts between the parties depending on the type of cost contract selected. In selecting the contract type, the government must consider the difficulty of providing the goods and services in the time allocated for contract performance. For example, when the risks are minimal or can be predicted with an acceptable degree of certainty, such as when the government and the contractor have sufficient knowledge of the effort required, then the government uses a fixed-price contract, and the contractor has full responsibility for the performance costs and the resulting profit or loss. In contrast, when the extent of product knowledge is more limited, the government uses a cost-reimbursable contract; the government assumes more risk and may try to motivate the contractor’s performance by using various incentive or award fee provisions. Our review found that nearly all of the $84 million in design, integration, and reliability improvement work authorized under the Navy’s October 2003 BOA with Northrop Grumman used some form of a cost-reimbursable contract. 
About 6 percent were conducted under a fixed-price type arrangement. Of the first 18 delivery orders issued through early May 2005, 14 were either cost-plus fixed fee or labor-hour orders. Under cost-plus fixed fee arrangements, the fee is negotiated at the inception of the contract and does not vary with the actual costs incurred by the contractor. Labor-hour contracts provide for direct labor hours at specified fixed rates that include wages, overhead, and general and administrative expenses. As profit and other expenses are already included in the rates charged to the government, the orders provided no profit incentive for the contractor to control costs or work efficiently. Correspondingly, our analysis found that the ASDS contractor often exceeded the initial estimates of the time and cost required to complete the work: 12 of the 26 delivery orders issued under the BOA exceeded the initial cost estimates, while the delivery schedule was extended on 20 of the 26 orders. Figure 1 shows the value of all delivery orders and subsequent modifications by contract type through March 2007, based on the year the order was initially issued. Navy officials told us that they chose cost-plus fixed fee or labor-hour orders, in part, because ASDS relied on many new and highly technical subsystems that were inherently risky. The ASDS contracting officer told us that the choice of cost-plus fixed fee or labor-hour orders reflected the perceived risk in the efforts, that is, the technical requirements and the work that needed to be done were not always well-defined or known in advance. Navy officials reported, however, that to get the contractor to more actively manage and be accountable for success, the Navy has increased the use of award and incentive fee provisions on its cost-type orders, placing at least some of the contractor’s potential fee at risk. 
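The differing cost incentives among the contract types discussed above can be illustrated with a toy payment calculation. The dollar figures, fee, and loaded rate below are invented for illustration; actual FAR pricing involves many more terms (allowability, ceilings, fee pools, and so on).

```python
def government_payment(contract_type, price_or_target, actual_cost=0.0,
                       fixed_fee=0.0, labor_hours=0, loaded_rate=0.0):
    """Illustrative government payment under three simplified contract types."""
    if contract_type == "fixed_price":
        # Contractor absorbs any overrun: the agreed price does not change.
        return price_or_target
    if contract_type == "cost_plus_fixed_fee":
        # Government reimburses actual allowable costs; the fee is set at
        # contract inception and does not vary with costs incurred.
        return actual_cost + fixed_fee
    if contract_type == "labor_hour":
        # Fixed loaded rates (wages, overhead, G&A, profit) times hours billed.
        return labor_hours * loaded_rate
    raise ValueError(f"unknown contract type: {contract_type}")

# A hypothetical $1.0 million job that overruns to $1.3 million in actual cost:
fp = government_payment("fixed_price", 1_000_000)
cpff = government_payment("cost_plus_fixed_fee", 1_000_000,
                          actual_cost=1_300_000, fixed_fee=70_000)
```

Here the fixed-price payment stays at $1,000,000 while the cost-plus fixed fee payment rises to $1,370,000: the government bears the full overrun, and the contractor's fee is unchanged, which is why such orders carry no built-in profit incentive to control costs.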
For example, Navy officials noted that two of the three delivery orders issued in 2006—representing about 80 percent of the value of ASDS work ordered under new delivery orders during the year—contained award or incentive fee provisions. While the Navy officials acknowledged that it was too early to quantify the results of these approaches, preliminary indications are that the contractor’s performance has improved and that the arrangements are providing sufficient risk sharing and monetary incentives to motivate contractor performance. Further, the contracting officer anticipated that the Navy would use more fixed-price arrangements as more experience is developed with ASDS repair and maintenance requirements. Authorizing Work before Reaching Agreement on Key Terms and Conditions Increased the Navy’s Risk Our analysis also found that the Navy often initiated work using undefinitized contract actions; that is, before the Navy and contractor had reached agreement on key terms and conditions of the delivery order, such as the scope of the work to be performed and the price of that work. While this approach allows agencies to begin needed work quickly, it also exposes the government to potentially significant additional costs and risk. For example, in September 2006 we reported on how DOD addressed issues raised by the Defense Contract Audit Agency in audits of Iraq-related contract costs. We found that DOD contracting officials were less likely to remove costs questioned by auditors if the contractor had already incurred those costs while the contract action was undefinitized. Our analysis found that 10 of the 26 ASDS delivery orders—accounting for about 14 percent of the work—were initiated as undefinitized contract actions. In most cases, the Navy justified the use of this approach by stating that the work needed to begin immediately to meet urgent operational requirements. 
For 7 of these 10 orders, the Navy failed to definitize the orders within the 180-day time frame required under defense acquisition regulations, taking instead from 228 to 509 days. In three cases, the Navy definitized the orders after the work had been completed. The delivery order to replace the ASDS’s hydraulic reservoir illustrates the need to clearly define the scope of the work, provide effective management and oversight, and hold the contractor accountable for outcomes. The delivery order issued to the contractor on June 10, 2005, was a $1.0 million cost-plus fixed fee undefinitized contract to replace the ASDS’s hydraulic reservoir. In October 2005, the contractor reported it would need about $444,000 extra to complete the project. Rather than provide additional funds, the Navy elected to reduce the scope of the work, and the order was definitized on March 1, 2006—nearly 9 months after the work was initially authorized—at a cost of about $937,000. Two days later, the contractor reported that the projected cost of the work had almost doubled to more than $1.85 million. In a letter to the contractor, the Commander, Naval Sea Systems Command, noted that at no time during negotiations had the contractor identified the potential cost growth. Nevertheless, as of December 20, 2006, a further modification to the delivery order increased the estimated cost to $2.8 million and extended the delivery date by 60 days. Navy officials acknowledged that the use of undefinitized contract actions and the failure to definitize them in a timely fashion indicated a lack of discipline in the contracting process, but noted that officials had taken a number of actions to address the issues, including taking more time to define requirements and requiring the contractor to submit more realistic cost and schedule estimates. Furthermore, the Navy has not issued an undefinitized contract action since July 2005. 
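The 180-day definitization check is simple date arithmetic. Using the hydraulic-reservoir order's dates from the report (authorized June 10, 2005; definitized March 1, 2006), a quick sketch:

```python
# Check an undefinitized contract action against the 180-day definitization
# time frame. Dates are the hydraulic-reservoir order's dates from the report.
from datetime import date

def days_to_definitize(authorized: date, definitized: date) -> int:
    # Elapsed calendar days between work authorization and definitization.
    return (definitized - authorized).days

elapsed = days_to_definitize(date(2005, 6, 10), date(2006, 3, 1))
print(elapsed)           # 264 days -- "nearly 9 months"
print(elapsed <= 180)    # False: the 180-day requirement was missed
```

The result, 264 days, matches the report's "nearly 9 months" and falls within the 228-to-509-day range cited for the seven late orders.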
DOD Evaluating Future Options for Meeting Required Capabilities

Continuing reliability problems led to a DOD decision to cancel purchases of additional ASDS boats, following on an earlier decision to decertify ASDS for operational test readiness because of considerable performance and reliability issues that required significant additional resources for new development, investigations, rework, and design changes. Instead, DOD directed the establishment of an ASDS improvement program and an assessment of alternate material solutions to fulfill remaining operational requirements. The results of both should allow DOD to make an informed decision as to its future needs by mid-2008.

Additional Procurements Canceled Because of Continuing Reliability Problems

The Navy decertified ASDS from operational test readiness in October 2005, following a propulsion-related failure during an attempt at follow-on operational test and evaluation. This failure, however, was among a series of performance and reliability issues identified over the course of ASDS development. These performance and reliability problems have required significant additional resources to support new development, investigations, rework, and design changes. Some changes have not been fully corrected or verified in operational testing. For example, in December 2003, while ASDS was being transported mated to the host submarine, severe damage occurred to the ASDS tail section—the propeller assembly, the stator, and the stern planes. The Navy's investigation attributed the cause to improper maintenance procedures—inadequate assembly by Portsmouth Naval Shipyard personnel. The propeller assembly and stern plane designs were improved and maintenance procedures were changed. In June 2004 testing of repairs, however, the ASDS propeller stator broke off and damaged the propeller. The investigation found that the stator had been improperly manufactured by a subcontractor.
The tail damage was repaired by Northrop Grumman at the Navy’s expense. During follow-on test and evaluation in October 2005, ASDS experienced a propulsion system failure that was attributed to improper assembly/installation of the new titanium tail. Because of the investigations of the December 2003 and June 2004 ASDS tail casualties, the Navy re-evaluated the effects of unsteady hydrodynamic loads on the boat. Although neither casualty was attributed to this type of load, the Navy determined that, due to fatigue stresses, the aluminum tail was not structurally adequate to last the life of the ASDS. The tail was replaced with a titanium and composite-based tail, but the replacement has not resolved all the tail assembly design deficiencies. To minimize the potential for damage to the tail, the Navy has imposed operating restrictions that limit the speed of the host submarine while transporting ASDS, which will remain in effect until this issue has been resolved. In September 2005, the Navy and SOCOM chartered the ASDS Reliability Action Panel (ARAP)—consisting of technical experts from government and industry—to conduct an independent assessment of reliability. After the 2005 propulsion system failure, the ARAP was asked to assess ASDS’s readiness to resume testing. ARAP’s report indicated that there were numerous examples of unpredicted component reliability problems and failures resulting from design issues, and recommended not resuming testing until detailed reviews of mission critical systems were completed. In November 2005, SOCOM restructured the ASDS program to focus on improving reliability of the existing boat before investing in additional boats. The existing boat is currently available only for limited operational use. 
In April 2006, DOD canceled plans to procure follow-on ASDS boats and directed the Navy and SOCOM to (1) establish an ASDS-1 improvement program to increase the performance of the existing boat to the required level, to insert technologies to avoid obsolescence, and to complete operational testing, and (2) assess alternate material solutions to fulfill remaining operational requirements. In May 2006, DOD reported to the congressional defense committees that the first ASDS would be maintained as an operational asset, and that an ASDS improvement program was planned through fiscal year 2008. As currently structured, the ASDS reliability improvement program includes four elements: ASDS Phase 1 and Phase 2 critical systems reviews, reliability builds or upgrades, and verification testing. The results of the Phase 1 critical systems review are due in June 2007 and are expected to include prioritized corrective actions and associated cost and schedule estimates. According to Navy officials, the Phase 1 results are expected to identify critical upgrades to improve reliability and make ASDS-1 a viable operational asset. At-sea tests to verify that corrections result in improved performance and reliability are being conducted. In October 2006 ASDS completed a successful 2-week underway period operating from a host submarine to verify and test repairs that were made to the propulsion system. In February and March 2007, following installation of 15 reliability improvements, including a newly designed hydraulic reservoir and environmental control unit, ASDS verification testing was conducted. This testing consisted of nine underways for a total of 113 operating hours. According to SOCOM, there were no failures. Follow-on operational test and evaluation is scheduled for the second half of fiscal year 2008. It is not certain, however, to what extent the upgrades identified by the Phase 1 critical systems review will be incorporated into the ASDS for this operational test.
DOD Expects to Make a Program Decision in mid-2008

DOD also directed the Navy and SOCOM to conduct an assessment of alternate material solutions to fulfill remaining operational requirements. An independent cost and capability trade study is under way for the purpose of developing models for both the ASDS and a hybrid combatant submersible to support concept design-level trade studies. A final report is expected by the end of June 2007. SOCOM has completed a requirements analysis that identified undersea clandestine maritime mobility gaps for special operations forces insertion and extraction as well as the conduct of undersea tasks. According to SOCOM, in February 2007, it submitted a memorandum on these issues to DOD's Joint Staff for submission to the Joint Requirements Oversight Council (JROC). Upon JROC approval, the memorandum is expected to serve in lieu of an Initial Capabilities Document for use in the alternate material solutions analysis. This process is similar to an analysis of alternatives and is expected to assess a broad range of potential material solutions. The joint Navy-SOCOM alternate material solutions analysis is expected to be completed by February 2008. A program decision is planned in mid-2008, after the ASDS improvement program and alternate material solutions analysis are completed. According to SOCOM and Navy officials, the results of the alternate material solutions analysis, in conjunction with the operational testing of the changes made in response to the reliability improvement program, should provide DOD by mid-2008 with sufficient information to make an informed decision on the direction DOD should take to meet its operational needs.

Conclusions

Had the original business case for ASDS been properly assessed as an under-resourced, concurrent technology, design, and construction effort led by an inexperienced contractor, DOD may have adopted an alternative solution or strategy.
Ironically, after having invested about $885 million in nearly 13 years, DOD may still face this choice. As to lessons learned, DOD's actions to make the boat operational came at great expense to the government. Further, DOD's inadequate program and contract management in essence made the prime contractor's poor performance acceptable. These actions underscore the need to have a sound business case at the start of a program, coupled with an acquisition strategy that enables the government to alter course as early as possible. Instilling more discipline into the contracting process is a step in the right direction, but its success hinges on DOD's willingness to hold the contractor accountable. From this point forward, DOD will be conducting reviews and testing to guide its decisions on how to proceed with the first ASDS boat. It is important that DOD be guided by sound criteria and a sound contracting strategy as it makes these decisions.

Recommendations

We are making three recommendations. In order to prevent the government from accepting additional undue risks and expense on ASDS, the Secretary of Defense should:

Establish acceptable cost, schedule, and performance criteria, based on fully defined scopes of work, and assess the boat's ability to meet these criteria at the Phase 1 and Phase 2 critical systems reviews and at the management reviews. If, by the time of the program decision in mid-2008, ASDS does not meet acceptable cost, schedule, or performance criteria, we recommend that the Secretary of Defense discontinue the effort and not proceed with further tests.
Ensure that, if the review results meet acceptable cost, schedule, and performance criteria, the design changes resulting from the Phase 1 critical systems review essential for demonstrating ASDS reliability and maintainability be incorporated in sufficient time to be tested under operational conditions prior to the planned mid-2008 decision on how to best meet special operations forces' requirements.

Require the Navy to include provisions in the ASDS contracting strategy chosen when the existing BOA expires that (1) appropriately balance risk between the government and the contractor through the contract types selected, (2) incentivize the contractor's performance and promote accountability for achieving desired outcomes by properly structuring the award and incentive fees, and (3) provide the kind of management and oversight of the program necessary to hold the contractor accountable for performance.

Agency Comments and Our Evaluation

DOD partially concurred with our first two recommendations that it establish acceptable cost, schedule, and performance criteria for ASDS-1; assess the boat's ability to meet these criteria; and test design changes. DOD concurred with our third recommendation on the Navy's contracting strategy to balance risk between the government and contractor; properly structure award and incentive fees to incentivize contractor performance and promote accountability; and provide necessary management and oversight to hold the contractor accountable. DOD's written comments are reprinted in appendix II. In partially concurring with our first recommendation, DOD commented that under its new ASDS management plan, program decisions will be made through management reviews using specified evaluation criteria and not solely at the completion of the critical systems reviews. The Navy provided a copy of its March 6, 2007 management plan for ASDS-1 improvement.
This plan represents a positive step in establishing a structured strategy for the ASDS-1 improvement program, including defining management oversight—roles, responsibilities, and authorities—and providing specific criteria to guide the program's continuation or termination decisions. However, the criteria may not go far enough. Specifically, the criteria may not be sufficient for making an informed program decision—the scope of the proposed ASDS's critical systems upgrades may not be fully defined and realistic cost and schedule estimates may not be developed before the ASDS improvement effort is approved to proceed. Further, the management plan does not address the Phase 2 critical systems review decision. We have clarified this recommendation to incorporate the management program reviews and decisions and added language to focus more directly on the need for fully defined scopes of work. Fully defining the scopes of work is key to developing realistic cost and schedule estimates. DOD partially concurred with our second recommendation, but took issue with operationally testing all Phase 1 critical systems review design changes before the planned mid-2008 decision. DOD stated that there are identified changes that will take more time and that a decision on what changes to implement will depend on various factors such as time, funding, and scope. However, it remains unclear to what extent upgrades that affect performance will be incorporated and tested prior to the mid-2008 program decision. We modified the wording to require testing essential design changes prior to a 2008 decision. DOD also provided technical comments, which we have incorporated as appropriate.
Scope and Methodology

To assess the ASDS contracting strategy, we reviewed the ASDS acquisition strategy, program documents, contract documentation, and numerous historical documents, including the Navy's 1997 Independent Review Team assessment, the joint Navy/SOCOM 1999 Independent Review Team assessment, and the ASDS Reliability Action Panel's 2006 report. In our assessment of ASDS, we drew upon our large body of previous work on best practices for developing products and developing sound business cases. To determine the status of major ASDS technical issues and program restructuring, we examined program status documents and briefings, test results, technical reports, and various memos and guidance. We did not assess the appropriateness of accepting the first ASDS boat in an "as is" condition. In performing our work, we obtained information and interviewed officials from the U.S. Special Operations Command; the Naval Sea Systems Command's ASDS program and contracting offices; and the Navy's Operational Test and Evaluation Force. We conducted our review from July 2006 to April 2007 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretary of Defense; the Secretary of the Navy; the Commander, U.S. Special Operations Command; the Director of the Office of Management and Budget; and interested congressional committees. We will make copies available to others upon request. In addition, the report will be made available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or by email at francisp@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Contributors to this report include Catherine Baltzell, David Best, Timothy DiNapoli, David Hand, John Krump, Mary Quinlan, and Robert Swierczek.
Appendix I: Mismatches in Technology, Resources, and Managerial Capacity Undermined Key Business Case Assumptions

Putting a development program on sound footing from the beginning requires that the selected technology be capable of meeting the government's requirements and able to be developed within needed time frames and available resources. Further, the contractor must have the technical and managerial capacity to effectively execute the contract, while the government must be able to provide effective program and management oversight. On the ASDS program, however, these conditions were not present at the start of or effectively applied during the development effort, undermining the ability to successfully design and deliver an operational ASDS boat.

Technology Assumptions Optimistic

A key to promoting successful acquisition outcomes is matching available resources with the requirements for the proposed system. Specifically, the government must match its needs with technology that has been proven to work in a realistic environment before committing to production. In this case, the Navy assumed that the conceptual design was technically sound and that the design would incorporate a large amount of fully developed submersible or commercially available technology. The Navy's September 1993 acquisition strategy concluded that the low risk of integrating technologies already in use on existing submarines and submersible vehicles eliminated the need for an advanced development model or a demonstration/validation phase with developmental and operational testing. Further, the Navy determined that by concurrently addressing manufacturing and test issues during the design process, lengthy redesign periods would be avoided.
Consequently, in September 1994, the Assistant Secretary of the Navy for Research, Development, and Acquisition (the designated program decision authority) approved Milestone II (development) and replaced a sequential test program (development tests, operational tests, technical evaluations, and operational test and evaluation) with a consolidated and integrated test program. At the same time, the ASDS program's Milestone III (production decision) was waived because of the limited number of procurement quantities. The Navy's confidence in the maturity of technology also played a large role in its assessment of proposed designs for the ASDS, and in turn, in its selection of the contractor. The Navy concluded that the contractor's conceptual design exceeded various requirements, and, based on its maturity, the proposed design approach was low risk. From the outset, the Navy's assessments of the contractor's design solution, experience, and management capabilities proved incorrect. Incorporating commercial off-the-shelf components into the ASDS was more challenging than expected. For example, the contractor had difficulty understanding underwater shock performance requirements and eventually subcontracted the shock design efforts to a specialty firm. During the course of ASDS's development, the Navy gradually assumed responsibility for addressing ASDS's technical problems by awarding separate contracts to other organizations to develop key components. The contractor's lack of expertise in key technologies, such as the propeller and battery, contributed to the Navy's decision to seek outside expertise to develop alternative solutions. In turn, the Navy provided these components to Northrop Grumman as government-furnished equipment, accepting both the cost and the risk for their performance, and paid Northrop Grumman millions of dollars to integrate the components onto the ASDS boat.
These actions include the following examples:

The ASDS program has invested over $26 million since 2000 to design, develop, and integrate a new lithium-ion battery to replace the inadequate silver-zinc battery provided by the prime contractor. In October 2000, the Navy awarded Northrop Grumman a $2.1 million contract modification to design, develop, test, and incorporate a lithium-ion polymer battery. By September 2003, a series of contract modifications had increased the cost of the prototype battery effort to $5.9 million and had extended delivery until February 28, 2004. The Navy sought other experts to identify and test an alternative lithium-ion battery that could be housed in the existing ASDS titanium battery bottles. In May 2004, after evaluating three proposals, the Navy awarded Yardney Technical Products a $9.3 million contract for a complete ASDS shipset battery that was delivered in 2005. To date, the Navy has provided Northrop Grumman more than $6 million to integrate the lithium-ion battery.

The Navy invested over $1.5 million to redesign the first ASDS propeller, which was a major source of noise during testing. Rather than task Northrop Grumman to redesign the propeller, the Navy awarded a $1.5 million contract in 2002 to Pennsylvania State University's Applied Research Laboratory to design and build a new composite propeller. Northrop Grumman installed this propeller in April 2003 at a cost of about $140,000. Pennsylvania State University has since provided two additional propellers at a cost of about $576,000.

Initial Cost Projections Harbingers of Difficulties to Come

Another key to successful acquisition outcomes is to accurately estimate the resources needed to develop and produce a system. The Navy had information before awarding the ASDS contract indicating, however, that the contractor's proposed price might not be realistic.
Specifically, the contract's negotiated price was about 60 percent less than the estimate in the Navy's November 1993 cost and operational effectiveness analyses. The Navy's price evaluation team concluded that the contractor's proposed amounts for ASDS development and production were underestimated and that overruns were likely. Among the lessons learned cited by two independent review teams in 1997 and 1999 were that the program was underfunded, in part because the Navy did not give sufficient weight to concerns raised by cost analysts, and that the contractor "bid to the budget."

Government and Contractor Management Was Ineffective

The government's and contractor's capacity to effectively manage a program is another key determinant in promoting successful outcomes. The Navy concluded in 1994 that, overall, the contractor's design, management capabilities, and cost control capabilities were equal to or better than those of the two other competitors for the ASDS program and that the contractor had adequate experience in submersible design, construction, and certification. This assessment, as well as the government's capacity to provide effective management and oversight of the ASDS program, soon proved incorrect. The Navy's 1997 and the joint Navy/SOCOM 1999 independent review teams identified weaknesses in the contractor's capacity to effectively address technical issues and manage the ASDS program. One team noted that the contractor had considerable difficulty in interpreting the underwater shock portion of the ASDS performance requirements. The teams attributed these difficulties, in part, to the contractor's lack of experience in submarine design, in contrast to the initial business case assumption. Further, the reviews noted that the Navy's review of the contractor's design products revealed that substandard design methodology was used, resulting in unacceptable system design.
The review teams also found that this lack of experience had a detrimental effect on the contractor’s overall ability to understand technical nuances and may have prevented the contractor from applying appropriate management attention when needed. For example, the contractor used two different systems for reporting and managing the program; the contractor’s cost reports contained errors; and its estimates to complete the effort were updated only every 6 months, resulting in unanticipated and sudden cost increases being reported to the Navy. Additionally, the contractor constrained its estimates by imposing “management challenges,” which the team concluded were in reality artificial reductions imposed by the contractor to obscure the contractor’s problems and mislead attempts to analyze its projected costs. Further, the review teams concluded that lapses in effective management by both the government and the contractor contributed to the program’s challenges. The teams identified several causes for these lapses, including a lack of contractor experience in submarine design and construction; the government’s lack of influence or visibility into problems between the contractor and the subcontractors; a focus on technical rather than management aspects of the program by both the program office and the contractor; ineffective oversight by the program office and little attention to the financial performance of the contractor; and frequent changes in the contractor’s project management team. The Navy program office and Northrop Grumman have taken steps to improve the program’s management. In 2005, the Naval Sea Systems Command reorganized the program office for a greater emphasis on special operations programs. In December 2005, Northrop Grumman reassigned the ASDS program’s management lead to its Newport News division, which has greater management and technical experience in submarines. 
Northrop Grumman Newport News is leading the Phase 1 ASDS critical systems review, and Northrop Grumman Electronic Systems is assisting.

Appendix II: Comments from the Department of Defense
The Advanced SEAL Delivery System (ASDS) is a hybrid combatant submersible providing clandestine delivery and extraction of Navy SEALs and equipment in high-threat environments. The first ASDS has had significant performance issues and has cost, to date, over $885 million. In May 2006, Congress requested that GAO review ASDS. This report examines (1) how the Navy managed ASDS risks through its contracts and (2) the status of major technical issues and program restructuring. The Navy did not effectively oversee the contracts to maintain, repair, and upgrade the ASDS and failed to hold the prime contractor accountable for results. The Navy took responsibility for correcting the boat's deficiencies while continuing to pay the costs and fees of the prime contractor under cost reimbursable contracts to execute the corrections. Before accepting the boat, the Navy went to sources other than the prime contractor to obtain better designs for the propeller and battery and then paid the prime contractor to install them. When the Navy accepted the ASDS in 2003 in an "as is" condition, it relieved the contractor from having to take any additional actions to correct known problems. Since then, the U.S. Special Operations Command has continued to invest millions of dollars to fix existing problems and address new ones in an attempt to make the boat operational. In making this additional investment, the Navy entered into contracts with the prime contractor that provided little incentive to control costs, authorized work before reaching agreement on the scope and price of the work to be performed, and failed to finalize the terms of the work within required time frames. Meanwhile, the contractor's performance continued to be poor, often exceeding initial estimates for the time and cost required to perform the work. 
ASDS officials took actions over the past 2 years to address these issues, but acknowledge that it is too early to determine the effectiveness of more recent actions to incentivize the contractor's performance. Continuing problems with the existing ASDS led to the Department of Defense's (DOD) April 2006 decision to cancel plans to buy additional ASDS boats, establish an improvement program for the in-service ASDS, and conduct an assessment of alternative material solutions to fulfill remaining operational requirements. The problems have seriously degraded the boat's reliability and performance, and the boat is only available for limited operational use. The results of these improvement and assessment efforts are expected to provide DOD the knowledge needed to determine whether ASDS's reliability can be improved cost-effectively to make ASDS an operational asset and whether an alternative development program is needed to meet the remaining operational requirements. A program decision is planned in mid-2008, after the ASDS improvement program and assessment of alternate material solutions are completed.
Background

Wetlands are diverse, but they can generally be defined as transitional areas between open waters and dry land, such as swamps, marshes, bogs, and similar areas. Wetlands are typically characterized by the frequent or prolonged presence of water at or near the soil surface, by soils that form under flooded or saturated conditions, and by plants that are adapted to life in these types of soils. Figure 1 is an example of a type of wetland. Federal authority over wetland development is exercised through section 404 of the Clean Water Act. The section 404 program requires that, unless exempted, anyone wanting to discharge dredged or fill material in navigable waters of the United States, which include most wetlands, obtain a permit from one of the 38 Corps district regulatory offices. This permitting process provides the Corps with a mechanism for enforcing mitigation efforts. Also, section 404(b)(1) authorizes the Administrator of EPA, in conjunction with the Secretary of the Army, to develop guidelines the Corps uses in the permit process. Under these guidelines, developers must first avoid and then minimize adverse impacts to wetlands to the extent practicable and then compensate for any unavoidable impacts. The objective of mitigation is to compensate for adversely affected wetlands. In 1989, the Bush administration established the national goal of "no net loss" of wetlands. Subsequently, the Clinton administration expanded the goal to achieve a net increase of 100,000 acres per year by 2005. In addition, a 1990 memorandum of agreement between the Department of the Army and EPA, addressing mitigation under the Clean Water Act, states that the Corps will strive to achieve a goal of no overall net loss of wetland functions and values. Wetland functions include controlling floods and erosion, purifying water, and providing habitat for numerous bird and fish species.
Wetland values are the economic and social benefits derived from wetland functions, including food, timber, improved water quality, and recreation. Agencies agree that it is difficult to measure the success of efforts to mitigate adverse impacts to functions and values, at least in part, because wetlands vary across the country. Under mitigation banking guidance issued in 1995 and in-lieu-fee guidance issued in 2000, mitigation banks and in-lieu-fee organizations should have formal, written agreements with the Corps, developed in consultation with the other agencies, to provide frameworks for the mitigation banking and in-lieu-fee options. These agreements are to include financial assurances and provisions for long-term management and maintenance of mitigation projects. In addition, the 1995 and 2000 guidance clarifies that responsibility for ecological success of mitigation efforts should rest with the mitigation bank or in-lieu-fee organization. However, neither guidance provides ecological success measurements. The agencies plan to review the use of the in-lieu-fee guidance by the end of October 2001.

In-Lieu-Fee Mitigation Option Available in 17 of 38 Corps Districts

As of September 30, 2000, 17 of the 38 Corps districts had established in-lieu-fee arrangements, and officials from 8 additional districts informed us that they were planning to establish such arrangements. The 17 districts with an in-lieu-fee option had developed a total of 63 arrangements. Seven districts had established 1 arrangement, 9 districts had established between 2 and 5 arrangements, and 1 had established 27 arrangements. While the first arrangement was established in Vicksburg, Mississippi, in 1987, most were developed since January 1997. The arrangements have been designed predominantly to restore, enhance, and/or preserve wetlands, with some arrangements also allowing for the creation of wetlands.
Developers have used in-lieu-fee arrangements to mitigate adverse impacts for over 1,440 acres of wetlands involving hundreds of projects, often smaller than an acre. Fees collected by in-lieu-fee organizations total over $64.2 million, with the individual arrangements involving total fees ranging from about $1,200 to $24.7 million. Acres adversely impacted and dollars collected are understated because some districts did not report data, sometimes because they did not track the data or it was not readily available. Figure 2 shows the number of in-lieu-fee arrangements in each Corps district. Appendix I provides information on the individual in-lieu-fee arrangements by district. Corps and EPA officials told us that they typically approve the use of in-lieu-fee arrangements as mitigation measures for minor impacts and when adversely affected acreage is relatively small. Officials explained that when developers make such arrangements, the Corps is able to operate more efficiently than when developers perform their own mitigation because the in-lieu-fee option results in fewer, more consolidated mitigation sites, thus reducing the Corps’ oversight burden. In addition, Corps officials said that using the in-lieu-fee option often gives permittees a less cumbersome alternative to performing their own mitigation, which requires submittal of a plan and schedule detailing their mitigation effort. Further, other federal agencies and other parties agree that the in-lieu-fee option serves as a useful mitigation tool. EPA, FWS, and NOAA officials, as well as mitigation bank officials, however, expressed concern about the in-lieu-fee option. They questioned whether in-lieu-fees are being used in a timely manner and appropriately and whether adequate monitoring of mitigation efforts is taking place. These concerns are not unfounded. 
Corps officials from 11 of the 17 districts reported that they did not require in-lieu-fee organizations to spend or obligate fees received from developers within a specific time frame. In addition, three districts have some in-lieu-fee arrangements that have been in existence since 1997 or earlier and, as of September 2000, had not spent or obligated any funds that directly mitigated adverse impacts. In districts where in-lieu-fees have been spent, arrangements in three districts have used funds for activities, such as research and/or education, that do not directly mitigate adverse impacts. Appendix II, which summarizes Corps district responses to our survey questions, provides detailed information about in-lieu-fee arrangements. In order to address concerns about the in-lieu-fee mitigation option, the Corps, EPA, FWS, and NOAA developed guidance that became effective in October 2000. Under the new guidance, in-lieu-fee organizations have a time frame for initiating physical and biological improvements of mitigation sites and are precluded from using fees for activities such as research and education. In addition, the Corps is required to track all uses of in-lieu-fee arrangements and to report those figures by public notice on an annual basis. Effectiveness of In-Lieu-Fee Mitigation Is Uncertain The effectiveness of in-lieu-fee mitigation is unclear. Information provided by Corps district officials during our telephone survey was not always consistent with written data provided by those districts. Corps officials in 11 of the 17 districts with the in-lieu-fee option stated that the number of wetland acres restored, enhanced, created, or preserved by the in-lieu-fee organizations equaled or exceeded the number of wetland acres adversely affected. However, our analysis of written data submitted by those 11 districts showed that the data from only 5 supported the statements. 
In-lieu-fee organizations in 3 of the 11 districts had not performed mitigation at a level that equaled or exceeded the adversely affected acreage. Another 3 of the 11 districts were not able to provide data on the number of wetland acres that had been restored, enhanced, created, or preserved by in-lieu-fee organizations in their areas. For the remaining 6 of the 17 districts, officials from 1 district stated that the restored, enhanced, created, or preserved acres had not equaled or exceeded the adversely impacted acres, and officials from 5 districts stated that the question was not applicable to their respective districts because the in-lieu-fee organizations had not started any projects. These five districts reported having 12 in-lieu-fee arrangements, including 3 that had been operating for less than a year and, consequently, had had little time to collect fees or initiate projects. When asked about functions and values of wetlands, officials in 9 of the 17 districts reported that the functions and values lost from the adversely affected wetlands were replaced at the same level or better through the in-lieu-fee organizations’ mitigation efforts. However, officials in five of those nine districts also stated that they had never taken steps to determine whether in-lieu-fee organizations’ mitigation efforts have been ecologically successful, which calls into question their determinations about functions and values. Of the eight districts that did not report functions and values at the same level or better, one reported that the functions and values had not met or exceeded the functions and values of the adversely affected wetlands; two did not know whether the functions and values of the mitigated wetlands met or exceeded the functions and values of the impacted wetlands; and five reported that the in-lieu-fee organizations had not started any projects that could be assessed. 
Seven districts have taken steps to determine whether in-lieu-fee organizations’ mitigation efforts have been ecologically successful, and officials in these districts reported that they use a variety of techniques. While some districts use performance standards such as the percentage of vegetative survival, other districts review reports, visit mitigation sites, and/or use specific techniques such as the Wetland Rapid Assessment Procedure. Officials in some of the districts that have not taken steps to determine the success of in-lieu-fee organizations’ mitigation efforts assume that if the acreage requirement is met, then functions and values will follow suit. In addition, although most districts reported that they monitor the use of in-lieu-fees, several district officials also explained that their monitoring is limited by resource constraints. For example, officials in one district we visited explained that they receive monitoring reports from the in-lieu-fee organization but do not have time to review the reports. Furthermore, officials from several other districts told us that no significant monitoring efforts are needed because they consider in-lieu-fee mitigation to be a success as soon as the developer pays a fee to the in-lieu-fee organization, even if no mitigation has been performed, or because they use organizations that they trust will do adequate mitigation. NOAA officials said that using such organizations is not an adequate substitute for monitoring. Recent Guidance May Affect Competition Between In-Lieu-Fee Organizations and Mitigation Banks in Districts With the In-Lieu-Fee Option In-lieu-fee organizations have the potential to compete with mitigation banks for developers’ mitigation business to the extent that organizations and banks provide similar mitigation services and serve the same geographic areas. 
In those areas with the potential for competition, some mitigation bank officials raised concerns about their ability to compete with in-lieu-fee organizations because, for example, in-lieu-fee organizations generally did not have requirements, such as securing financial assurances, that increased mitigation banks’ operating costs. The October 2000 guidance not only established such requirements for in-lieu-fee organizations, but also generally gives preference to mitigation banks. At the time of our survey, in-lieu-fee organizations and mitigation banks potentially could compete in 12 of the 17 Corps districts where in-lieu-fee mitigation was an option, according to information provided by Corps district officials. In-lieu-fee organizations had funded or planned to fund wetland restoration, enhancement, and preservation and/or other mitigation services in all 17 districts with the in-lieu-fee option. Mitigation banks provided restoration and other mitigation services similar to those provided by in-lieu-fee organizations in only 13 of the 17 districts, however, and mitigation banks served the same geographic area as in-lieu-fee organizations in only 12 of those 13 districts. In 3 of the 12 districts where in-lieu-fee organizations potentially could compete with mitigation banks, the in-lieu-fee organizations’ service areas covered an entire state, and in 9 of the districts, the area of potential competition was limited to a watershed, county, or other geographic area. No potential for competition existed in the remaining five districts because no mitigation banks existed in one, banks and in-lieu-fee organizations did not provide the same services in one, banks had not begun marketing mitigation services in one, and banks and in-lieu-fee organizations served different geographic areas in two. 
Of the 12 districts with the potential for competition, Corps district officials in 9 told us that in-lieu-fee organizations and mitigation banks had competed with each other, and officials in 3 said they had not competed. Corps officials’ views on competition were corroborated by in-lieu-fee organization and mitigation bank officials in the districts we visited. For example, in the Chicago district, where 1 in-lieu-fee organization and 10 mitigation banks operate in the same six-county area and provide similar services, mitigation bank and in-lieu-fee organization officials agreed that they competed with each other. In the three districts where Corps officials said mitigation banks and in-lieu-fee organizations had not competed, two districts have policies that encourage the use of mitigation banks and, in the other district, competition is expected after the in-lieu-fee arrangements that were established in 2000 take hold. Some mitigation bank officials have raised concerns about competing with in-lieu-fee organizations for several reasons. First, some bank officials have claimed that they are at greater economic risk than in-lieu-fee organizations because banks generally must invest money to establish wetlands before they can sell credits. In contrast, in-lieu-fee organizations often do not establish wetlands until they have collected sufficient fees from developers to cover their expenses. Second, some bank officials said that they have difficulty competing with in-lieu-fee organizations’ prices. For example, top management representing two mitigation banks in Chicago said that, in the Chicago district, the in-lieu-fee organization’s price was lower than the banks’ prices. The fund administrator of the in-lieu-fee organization acknowledged competing with mitigation banks and said that the organization’s price was lower than the prices of some banks and higher than the prices of others. 
Third, prior to the October 2000 in-lieu-fee guidance, some bank officials questioned whether in-lieu-fee organizations were being held to the same standards as banks. The recent in-lieu-fee guidance gives preference to mitigation banks under certain circumstances. For example, when impacts that require mitigation are outside the service areas of mitigation banks and in-lieu-fee organizations, use of a mitigation bank is preferable to in-lieu-fee mitigation, unless using the bank is not practicable or environmentally desirable. While there is a preference for mitigation banks, the new guidance also allows for flexibility, according to officials in the Corps, EPA, FWS, and NOAA. Effectiveness of Ad Hoc Mitigation Unknown Our survey showed that since January 1, 1996, 24 of the 38 Corps districts allowed developers to mitigate adverse impacts to wetlands through ad hoc arrangements. We found no definition of ad hoc arrangements in existing mitigation guidance. As explained earlier, ad hoc arrangements, for purposes of this report, involve mitigation payments from developers to third parties that are neither mitigation banks nor considered by the Corps to be in-lieu-fee organizations. For example, a district office that we visited allowed a developer to partly compensate for the adverse impact at a development site by paying a fee to a nearby landowner to preserve a wooded wetland by placing a restriction on the property to prevent future development. We were unable to determine the number of times ad hoc arrangements were used because not all districts routinely track the data. However, officials we surveyed in several districts told us that they used such arrangements infrequently. Corps district officials did not consider ad hoc arrangements to be in-lieu-fee arrangements for several reasons. Most often they distinguished ad hoc from in-lieu-fee arrangements by the lack of a formal agreement between the Corps district and the ad hoc fund recipient. 
Officials in some districts also said that ad hoc fund recipients usually perform mitigation at a single site with funds received from one developer for one development project. In contrast, in-lieu-fee organizations usually perform mitigation at one or more sites with funds consolidated from multiple developers for multiple development projects. EPA and Corps headquarters officials, as well as Corps district officials, disagree as to whether ad hoc mitigation is covered by the October 2000 in-lieu-fee guidance. Corps headquarters officials said that ad hoc mitigation is not covered under the guidance. EPA headquarters officials disagreed and said that mitigation is covered by the guidance when a third party other than a mitigation bank performs the mitigation and responsibility for the ecological success is transferred to the fund recipient as a condition of the section 404 permit. However, Corps headquarters officials reported that most permits authorizing ad hoc mitigation arrangements do not transfer responsibility, and Corps officials in 17 of the 24 districts using ad hoc arrangements told us that they never required transfer of responsibility as a condition of section 404 permits. Further, Corps district officials disagree on whether ad hoc mitigation is covered by the 2000 guidance. Officials in 6 districts that use ad hoc arrangements said that ad hoc mitigation is covered by the guidance, while officials in 7 said it is not covered, and officials in 11 said that they did not know whether it is covered by the guidance. Oversight of mitigation efforts performed under ad hoc arrangements was lacking in almost half of the 24 districts using such arrangements. Officials in seven districts said that they had not monitored either the mitigation efforts or use of funds made under ad hoc arrangements, and officials in three others did not know whether such monitoring had occurred. 
In addition, officials in eight districts said that they had never taken steps to determine whether mitigation efforts performed under ad hoc arrangements had been ecologically successful, and officials in two others did not know whether such steps had been taken. Officials in some districts gave reasons for the limited oversight. For example, officials in four districts said monitoring was unnecessary because developers make payments to organizations that the Corps was confident would use the payments to do adequate mitigation, such as The Nature Conservancy. Further, officials in some districts said that they had limited resources for oversight. Responsibility for the ecological success of mitigation performed by ad hoc organizations is unclear. Of the 24 districts that used ad hoc arrangements, officials in 13 said ad hoc fund recipients were not liable for the failure of their mitigation efforts, while officials in 2 said they were, and officials in 9 said they did not know whether ad hoc recipients were liable. Officials in many of the districts who said that ad hoc fund recipients were not liable also said that developers were responsible for the mitigation efforts. However, federal regulations setting forth requirements for section 404 permits do not require permits to include performance standards for ecological success of mitigation efforts. Consequently, no procedures exist for ensuring the ecological success of ad hoc mitigation efforts. Conclusions In-lieu-fee arrangements have the potential to be an effective compensatory mitigation tool that benefits the environment and provides developers flexibility in meeting their mitigation requirements. It is not clear, however, whether such arrangements have, in practice, been an adequate method for mitigating adverse impacts to wetlands. 
Corps districts supplied us with contradictory information or were not able to provide us with data to support claims that acreage and/or functions and values of wetlands that had been restored, enhanced, created, or preserved equaled or exceeded those that had been lost through development. In addition, several districts have never taken steps to assess whether in-lieu-fee mitigation has adequately mitigated adverse impacts, and those that did make assessments used varying criteria. Similarly, oversight of ad hoc mitigation has been lacking. Projects, whether performed under in-lieu-fee arrangements or under ad hoc arrangements, must be assessed to determine whether they have been ecologically successful so that corrective action can be taken if necessary. The Corps lacks assurances that mitigation efforts under in-lieu-fee or ad hoc arrangements have been effective, sometimes relying instead on “good faith” on the part of the organizations performing the mitigation. We commend the Corps, EPA, and other agencies for developing and implementing the October 2000 guidance, which provides a framework for in-lieu-fee mitigation. Without such a framework, congressional and agency decisionmakers would be hampered in their ability to make sound management decisions in providing continued stewardship of our nation’s resources. At the same time, the recent guidance does not go far enough either to bring consistency to how determinations of ecological success should be made or to establish appropriate monitoring and oversight activities. Agencies need adequate success criteria in order to measure whether progress is being made toward achieving the national goal of no net loss of the nation’s remaining wetlands. Where states play a significant role in decisions on compensatory mitigation, the agencies could coordinate with them in developing the criteria. 
Once the agencies establish success criteria for in-lieu-fee arrangements, extending those criteria to all compensatory mitigation options would provide the agencies the opportunity to assess mitigation success more broadly. Further, for purposes of accountability, responsibility for the success of mitigation efforts must be clearly assigned to either the developer or the party performing the mitigation. Recommendations To ensure that in-lieu-fee organizations adequately compensate for adverse impacts to wetlands, we recommend that the Administrator of EPA, in conjunction with the Secretaries of the Army, Commerce, and the Interior, establish criteria to determine the ecological success of mitigation efforts and develop and implement procedures for assessing success. To better ensure the ecological success of mitigation efforts under ad hoc arrangements, we recommend that the Secretary of the Army instruct the Corps to establish procedures to clearly identify whether developers or recipients of funds are responsible for the ecological success of mitigation efforts and, using the same success criteria applicable to in-lieu-fee arrangements, to develop and implement procedures for assessing success. Agency Comments We provided the Department of Defense, EPA, and the departments of Commerce and the Interior with a draft of this report for review and comment. While all four agencies agreed with our recommendation concerning ad hoc arrangements, only EPA and Commerce agreed with our recommendation that EPA, in conjunction with the Secretaries of the Army, Commerce, and the Interior, establish ecological success criteria. In disagreeing with the recommendation, Defense suggested two changes. We disagree with both. First, Defense believes that the Corps, rather than EPA, should have the lead in implementing the recommendation. 
We continue to believe that EPA should have the lead in implementing our recommendation because section 404(b) of the Clean Water Act authorizes EPA to issue section 404 guidance. Second, Defense stated that, because the term “success” is too imprecise and subjective to be consistently and effectively applied, criteria should be established to determine “that wetlands functions have been adequately compensated” rather than to determine the ecological success of mitigation efforts. We believe that Defense’s suggestion of adequate compensation is too narrow and does not address the overall national goal of no net loss of wetlands. We continue to believe that establishment of ecological success criteria is not only possible, but essential to determine whether progress is being made toward achieving that national goal through section 404 mitigation efforts. Regarding our recommendation calling for ecological success criteria, Interior stated that it did not agree that the federal agencies should establish national criteria. However, our recommendation does not call for national criteria. We agree with Interior’s comment that criteria are most appropriately developed at the local level, where experienced personnel can work together to develop criteria keyed to local ecosystems or watersheds. The comments of Defense, EPA, Commerce, and the Interior, and our responses to those comments, are included in appendixes III, IV, V, and VI, respectively. Scope and Methodology To obtain information on the extent to which the in-lieu-fee option has been used and been effective in mitigating adverse impacts to wetlands, and on the extent to which in-lieu-fee organizations compete with mitigation banks for developers’ mitigation business, we conducted a two-phase telephone survey of Corps officials from the 38 district regulatory offices. The first phase of the telephone survey was conducted for all 38 Corps districts. 
We asked Corps officials to provide basic information such as whether their district provides the in-lieu-fee and mitigation banking options for developers, and if so, how many in-lieu-fee arrangements and mitigation banks exist in the district. We also asked Corps officials to provide us with copies of written in-lieu-fee agreements and any guidance concerning in-lieu-fee arrangements. We used the responses and documentation from the first phase to obtain an understanding of the extent to which in-lieu-fee arrangements were being used nationwide and to devise questions for the second phase of the survey. The second phase of the telephone survey consisted of two versions: one for districts that have in-lieu-fee arrangements, and the other for districts that do not have such arrangements. During this second phase of the survey, we asked Corps officials to verify the number of in-lieu-fee arrangements and mitigation banks in the districts and to respond to questions concerning such topics as in-lieu-fee guidance, monitoring and enforcement, ecological success, competition between in-lieu-fee organizations and mitigation banks, and ad hoc arrangements. In developing questions for our telephone survey, we conducted pretests of each version with two Corps district offices. During the pretest, we first asked Corps officials to answer the survey questions. After completing the survey questions, we interviewed the Corps officials to ensure that (1) the questions were clear, (2) the terms were precise, and (3) the survey appeared to be independent and unbiased. 
We also asked Corps districts to supply, in writing, information not suited for collection by telephone, such as the number of permits issued per district that used in-lieu-fees to satisfy mitigation requirements during fiscal years 1998–2000; the total dollar amounts received by in-lieu-fee organizations; and the number of acres that in-lieu-fee organizations restored, enhanced, created, and/or preserved from the time that the in-lieu-fee arrangements were established through fiscal year 2000. Some districts did not provide all the information requested because, for example, the data were not tracked or not readily available. Further, we did not verify the information provided by Corps district officials; however, we corroborated the data with other sources to the extent possible. To better understand in-lieu-fee activities, the effectiveness of in-lieu-fee mitigation efforts, and competition between in-lieu-fee organizations and mitigation banks, we interviewed Corps officials and staff of in-lieu-fee organizations and mitigation banks, and we visited wetland sites in the Corps’ districts of Chicago, Illinois; Savannah, Georgia; and Vicksburg, Mississippi. We judgmentally selected these districts because they had in-lieu-fee arrangements that were established in 1997 or earlier and had mitigation banks available as a mitigation option. For additional perspective, we also interviewed officials from The Nature Conservancy in Arlington, Virginia; the National Mitigation Banking Association; the Corps of Engineers’ Engineer Research and Development Center in Vicksburg, Mississippi; a NOAA office in Charleston, South Carolina; and consulting firms in Savannah, Georgia, and Jackson, Mississippi. In addition, we met with a developer in Jackson, Mississippi, and with officials at a local FWS office in Athens, Georgia. Furthermore, we met with officials from Corps, EPA, FWS, and NOAA headquarters. 
We conducted our work from May 2000 through April 2001 in accordance with generally accepted government auditing standards. As we agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies to other interested parties and make copies available to others who request them. Appendix I: Select Characteristics of In-Lieu-Fee Arrangements, by Corps District Office We identified 17 Corps district offices that have established in-lieu-fee arrangements as a mitigation option for developers whose activities will adversely affect wetlands. The districts have established a total of 63 in-lieu-fee arrangements, ranging from 1 to 27 arrangements per district. Table 1 lists the in-lieu-fee arrangements by district, as well as selected characteristics of each arrangement. Some districts did not provide the information we requested because, for example, the data were not tracked or were not readily available. The October 2000 in-lieu-fee guidance, if properly implemented, requires the Corps to track all uses of in-lieu-fee arrangements and report those figures by public notice on an annual basis. Notes to table 1: The effective date is the date of the formal arrangement as reported by the Corps districts and verified, to the extent possible, with written arrangements provided by the districts. It is possible that some arrangements were in effect under general permits prior to the effective date. Arrangements with the effective date listed as “not final” have collected money although no written arrangement has been finalized. Where information is not shown, it was not reported by the district because, for example, the data were not tracked or were not readily available. 
The in-lieu-fee arrangement is solely for stream mitigation: a total of 7,975 linear feet of streams was adversely impacted, 12,000 linear feet are required to have mitigation performed, and a mitigation effort is in progress. Appendix II: Results of Telephone Survey of the Army Corps of Engineers Officials in District Offices With the In-Lieu-Fee Option Appendix III: Comments From the Department of Defense GAO Comments 1. Defense stated that developers’ payments to a consultant to construct off-site mitigation projects may be interpreted as ad hoc arrangements. We agree that confusion exists about ad hoc mitigation, as illustrated in our discussions on the disagreement about whether such arrangements are covered by the October 2000 in-lieu-fee guidance and whether ad hoc fund recipients are responsible for the ecological success of mitigation efforts. We revised our report to reiterate that ad hoc arrangements, for purposes of this report, involve mitigation payments from developers to third parties that are neither mitigation banks nor considered by Corps districts to be in-lieu-fee arrangements. We also added that we found no definition of ad hoc arrangements in mitigation guidance. Clarification of in-lieu-fee and ad hoc arrangements is an issue that could be discussed when the agencies hold a stakeholder forum in the summer of 2001 and when the agencies review the use of the guidance by the end of October 2001. 2. We agree that establishing criteria and developing and implementing procedures to determine the ecological success of in-lieu-fee mitigation is a big step that entails more than assuring mitigation compliance. Our report notes that it is difficult to measure the success of efforts to mitigate losses to functions and values, at least in part, because of variations in wetlands across the country. 
However, like the Environmental Protection Agency, Fish and Wildlife Service, and National Oceanic and Atmospheric Administration, we believe that scientifically sound criteria are essential to determine if the objectives of compensatory mitigation are being fulfilled. We also believe that such criteria are essential to measure whether progress is being made toward achieving the national goal of no net loss of the nation’s wetlands through section 404 mitigation efforts, and that the federal agencies need to work together to establish the criteria. 3. Defense stated that lack of context renders some of our report potentially meaningless or distorted. We disagree. In particular, we do not believe that the example cited by Defense illustrates the lack of context. Our intent in reporting that arrangements in three districts used in-lieu-fees for research and/or education was to illustrate situations where funds were used for purposes that do not directly mitigate adverse impacts. Other agencies have raised concerns about such uses. We did not imply that the districts did not achieve the goal of no net loss of wetlands because they allowed funds to be used for such purposes. In fact, data collected during our survey show that arrangements in two of the districts that reported using funds for research and/or education also reported that the number of wetland acres for which mitigation had been performed had equaled or exceeded the number of acres lost through development. Moreover, we disagree with Defense’s position that as long as the mitigation provided meets the compensatory need, excess funds belong to the in-lieu-fee administrators to expend as they see fit. This position contradicts the October 2000 in-lieu-fee guidance, which states that in-lieu-fee funds should not be used to finance nonmitigation programs and priorities, such as research and education. 4. 
Defense stated that our report does not describe how different requirements for institutional arrangements can affect the Corps’ roles and responsibilities for in-lieu-fee arrangements. During our audit work we identified instances where in-lieu-fee arrangements were based on Programmatic General Permits developed by the Corps. However, our report does not focus on specific details of individual in-lieu-fee agreements or the relationships between Corps districts and individual arrangements. Rather, our report provides a broad overview of the in-lieu-fee program across the country. In addition, the October 2000 in-lieu-fee guidance should cause agreements between Corps districts and in-lieu-fee organizations to be more consistent. 5. We revised the report to show that the information regarding adversely affected acres and dollars collected covered the periods from the time each in-lieu-fee arrangement was established through fiscal year 2000. 6. While we did not define the term “ecological success,” our recommendation calls for the agencies to establish criteria to determine ecological success. Defense also stated that the scientific and professional disciplines associated with wetlands management do not possess sufficient information to make a definitive statement about ecological success. While recognizing that assessing wetland functions and values is difficult, we believe that it is possible to establish scientific criteria to determine ecological success. EPA, FWS, and NOAA officials told us during our audit work that assessing ecological success is possible. In fact, NOAA officials said that there are many existing methodologies from which agencies can draw to establish criteria for ecological success. Further, the Corps’ Engineer Research and Development Center has developed information, such as Hydrogeomorphic Approach data, that can be used to assess functions of some regional wetlands and may be useful for establishing criteria. 7. 
We revised the report to show that some of the arrangements in the five districts that have not started any projects have been operating only a short period of time and, consequently, have had little time to collect fees and initiate projects. 8. Defense stated that local agency rules and/or in-lieu-fee structure and protocol may constrain the use of compensatory mitigation options in ways that override other decisions and could affect competition between in-lieu-fee organizations and mitigation banks. As explained in our response to Defense comment 4, our report provides a broad overview of the in-lieu-fee program rather than specific details about individual arrangements. Moreover, during our audit work, none of the districts that reported no competition between in-lieu-fee organizations and mitigation banks said that local rules and/or in-lieu-fee structures or protocols had constrained competition. 9. We revised the footnote to show that state trust funds, such as those in Pennsylvania and Maryland, are included as ad hoc arrangements in our report. Corps districts reported to us those instances where developers were allowed to compensate for adverse impacts to existing wetlands by paying third parties that are neither mitigation banks nor considered by the districts to be in-lieu-fee arrangements. As explained in our report, ad hoc fund recipients usually do not have a formal agreement with the Corps and typically perform mitigation at a single site with funds received from one developer for one development project. However, ad hoc arrangements identified by Corps districts also include state trust funds, which receive payments from multiple developers that can be used for more than one project. 10.
Defense suggested that we change our first recommendation to give the lead to the Corps instead of EPA and to have the federal agencies develop criteria to determine “that wetlands functions have been adequately compensated” instead of criteria to determine the ecological success of mitigation efforts. We disagree and did not revise our recommendation. While we recognize that the Secretary of the Army clearly has authority and responsibility under section 404(a) of the Clean Water Act regarding compensatory mitigation, section 404(b) authorizes the Administrator of EPA, in conjunction with the Secretary, to issue section 404 guidance. Also, we did not change the recommendation regarding the type of criteria because we continue to believe that development of criteria for ecological success is not only possible, but essential. 11. Defense stated that our report appears to concentrate on in-lieu-fee arrangements developed prior to 1997. While we chose to make site visits to districts with arrangements developed in 1997 or earlier because in-lieu-fee organizations in those districts had had more time to develop projects than those in districts with more recent arrangements, our report provides information about all in-lieu-fee arrangements established through fiscal year 2000. We did not include the in-lieu-fee arrangement in St. Louis in our report because, according to a Corps St. Louis District official, it was not established until October 2000, which was after fiscal year 2000 ended. 12. As explained in the Scope and Methodology section of our report, we used implementing agreements to obtain an understanding of the extent to which in-lieu-fee arrangements were being used nationwide and to devise questions for the second phase of our survey.
We did not report the extent to which in-lieu-fee arrangements followed their implementing agreements because many, if not all, agreements will need to change substantially to comply with the protocol in the October 2000 in-lieu-fee guidance. 13. Defense did not concur with our first recommendation. We disagree. See comments 6 and 10. Appendix IV: Comments From the Environmental Protection Agency GAO Comments 1. EPA stated that a useful addition to our first recommendation would be to recognize the significant role that many states play in decisions on wetlands restoration and compensatory mitigation and the close working relationship between EPA and the states in administering the Clean Water Act. EPA also suggested that we recommend coordination with states in developing ecological success criteria. We did not change our recommendation. However, coordination with states in developing ecological success criteria is an EPA prerogative that our recommendation does nothing to preclude. We revised our conclusions to state that where states play a significant role in decisions on compensatory mitigation, the agencies could coordinate with them in developing ecological success criteria. 2. We agree that the same ecological success criteria applicable to in-lieu-fee arrangements should be used to assess ad hoc arrangements’ mitigation efforts, as stated in our second recommendation. Also, we believe that procedures should be developed and implemented to assess the ecological success of ad hoc arrangements’ mitigation efforts. However, the procedures should allow the Corps flexibility in assessing the success of ad hoc mitigation efforts, taking into consideration such factors as available resources as well as potential environmental impacts. Appendix V: Comments From the Department of Commerce GAO Comment 1. Comments provided by Commerce for NOAA stated that NOAA concurs with our two recommendations.
The comments also stated that rather than developing procedures for assessing the ecological success of ad hoc mitigation on a district-by-district basis, NOAA believes that Corps headquarters should develop the procedures so that a consistent approach is taken throughout the country. We clarified our recommendation to have the Secretary of the Army direct the Corps, instead of its districts, to establish procedures to clearly identify whether developers or recipients of funds are responsible for the ecological success of mitigation efforts. In addition, Commerce stated that Corps headquarters should resolve widespread confusion about the applicability of the in-lieu-fee guidance by explicitly stating that ad hoc mitigation is covered by the in-lieu-fee guidance. Because, as noted in our report, EPA and Corps headquarters officials disagree as to whether ad hoc mitigation is covered by the October 2000 guidance, we agree that there should be clarification. This is an issue that should be discussed when the agencies hold a stakeholder forum in the summer of 2001 and considered when the agencies review the use of the guidance by the end of October 2001. Appendix VI: Comments From the Department of the Interior GAO Comments 1. Interior stated that full implementation of the October 2000 in-lieu-fee guidance would prevent or rectify many of the problems with in-lieu-fee mitigation programs identified in our report. We agree that full implementation of the guidance may rectify some of the problems we identified. For example, the guidance sets time frames for mitigation efforts and states that in-lieu-fee funds should be used for replacing wetlands functions and values. However, full implementation of the guidance does not address the problems that our recommendations are intended to correct. That is, the guidance does not establish criteria for ecological success and does not clearly establish responsibility for the success of mitigation performed by ad hoc organizations. 2.
Interior said that the Corps should suspend the use of any in-lieu-fee programs found to be in noncompliance with the October 2000 guidance until such time that corrective measures are enacted. We did not develop information on this issue as part of our audit work. However, suspension might be an option that agencies could discuss at the stakeholder forum in the summer of 2001 and when the agencies review the use of the guidance by the end of October 2001. 3. Interior stated that, with regard to our recommendations, it does not agree that federal agencies should establish national criteria to determine the ecological success of mitigation efforts. Our recommendation does not call for national criteria. In fact, our survey showed that over 80 percent of the Corps districts with in-lieu-fee arrangements said that national performance standards for measuring the success of mitigation efforts were not feasible. In addition, we agree that establishing national standards would be infeasible given the diversity of wetland types across the country. Further, we believe that criteria are most appropriately developed at the local level where experienced agency personnel may work together to develop success criteria that are keyed to local ecosystems or watersheds. Appendix VII: GAO Contacts and Staff Acknowledgments GAO Contacts Staff Acknowledgments In addition to those named above, Nancy J. Eslick, June M. Foster, Byron S. Galloway, Lynn M. Musser, and Barbara L. Patterson made key contributions to this report.
More than half of the estimated 220 million acres of marshes, bogs, swamps, and other wetlands that existed in the United States during colonial times have disappeared, and others have become degraded. This decline is due primarily to farming and development. Developers whose projects may harm wetlands must, according to environmental regulations, first avoid and then minimize adverse impacts to wetlands to the extent practicable. If harmful impacts are unavoidable, the developer must compensate by restoring a former wetland, enhancing a degraded wetland, creating a new wetland, or preserving an existing wetland. Such mitigation efforts can occur under the following three types of arrangements: (1) mitigation banks, under which for-profit companies restore wetlands under Army Corps of Engineers agreements and then sell credits for these wetlands to developers; (2) in-lieu-fee arrangements, under which developers pay public or non-profit organizations fees for establishing wetland areas, usually under formal Corps agreements; and (3) ad hoc arrangements, under which developers pay individuals or companies to perform the mitigation. This report determines the extent to which (1) the in-lieu-fee option has been used to mitigate adverse impacts to wetlands, (2) the in-lieu-fee option has achieved its intended purpose of mitigating such impacts, and (3) in-lieu-fee organizations compete with mitigation banks for developers' mitigation business. This report also discusses the use of ad hoc arrangements as a mitigation option. Most of the arrangements were designed to use fees received from developers to restore, enhance, or preserve wetlands, with a few arrangements designed to allow wetlands to be created. During fiscal years 1998 through 2000, developers used the in-lieu-fee option to fulfill mitigation requirements for more than 580 acres of adversely affected wetlands and paid more than $39.5 million to in-lieu-fee organizations.
The extent to which the in-lieu-fee option has achieved its purpose of mitigating adverse impacts to wetlands is uncertain. Although Corps officials in 11 of the 17 districts with the in-lieu-fee option said that the number of wetland acres restored, enhanced, created, or preserved by in-lieu-fee organizations equaled or exceeded the number of wetland acres adversely affected, data submitted by more than half of those districts did not support these claims. Officials in 9 of the 17 districts said that functions and economic values lost from the adversely affected wetlands were replaced at the same level or better through in-lieu-fee mitigation, but officials in more than half of those districts also acknowledged that they have not tried to assess whether mitigation efforts have been ecologically successful. As a result, the Corps cannot be certain that in-lieu-fee mitigation has been effective. Corps district officials in 9 of the 17 districts with the in-lieu-fee option said that in-lieu-fee organizations and mitigation banks were competing with each other by providing similar mitigation services in the same geographic area. No competition existed in 5 of the 17 districts because no mitigation banks were available or because in-lieu-fee organizations and mitigation banks provided different services or served different geographic areas. GAO found that ad hoc arrangements typically were for one-time projects without a formal agreement. Oversight of mitigation efforts was lacking in almost half of the districts using such arrangements. Corps districts disagreed on whether responsibility for the ecological success of ad hoc mitigation rests with the ad hoc fund recipient or the developer.
Background The Federal Acquisition Streamlining Act of 1994 established a micropurchase threshold of $2,500. Purchases that do not exceed the threshold are not subject to the Small Business Act reservation requirement, may be made without obtaining competitive quotations (if the price is reasonable), and may be made by authorized government employees—such as those who will be using the supplies or services—not just by contracting officers. Under the Federal Acquisition Regulation, the governmentwide commercial purchase card is now the preferred method of paying for micropurchases. Further, the purchase card is also authorized to be used in greater dollar amounts to place a task or delivery order under an existing contract (if authorized in the basic contract, basic ordering agreement, or blanket purchase agreement) and to make payments under existing contracts when the contractor agrees to accept payment by the card. GSA administers the purchase card program governmentwide. This program has issued more than 2 million purchase cards to federal employees at government agencies, organizations, and Native American tribes. Purchase card volume increased by almost $1.5 billion, to $13.8 billion, between fiscal years 2000 and 2001. GSA’s master contract for the purchase card program defines the agreement between GSA and the five banks that issue purchase cards to government agencies. The government may exercise the option to renew the contract for up to five 1-year periods beginning in December 2003. In October 2001, GSA requested that the five banks provide data on the socioeconomic status of merchants who did business with the government via purchase cards in fiscal year 2001. An estimated 4 million U.S. 
merchants accept MasterCard, Visa, or both, and at least 2.1 million of these merchants did business with the government in fiscal year 2001. Because the banks that issue purchase cards do not have access to data on all of the merchants accepting the cards, MasterCard and Visa collected this information on the banks’ behalf, contracting in one case with Austin-Tetra, a private firm, to assist in the task. GSA compiled the information provided by banks and associations in a March 2002 preliminary report. In their efforts to improve the collection of socioeconomic information on purchase card merchants and to track governmentwide small business goals, SBA and GSA are interested in targeting the categories of businesses outlined in table 1. The purchase card transaction process involves the agency cardholder, the merchant and its bank, the payment card associations, and the banks that issue purchase cards to government agencies. When an agency cardholder purchases goods or services from a merchant that accepts MasterCard or Visa, the merchant transmits the transaction to its bank, through the MasterCard or Visa computer systems, to the issuing bank for payment. Figure 1 shows how transaction data are shared among the key players. Socioeconomic data are generally collected after a transaction takes place. The payment card association or its contractor collects socioeconomic information from a variety of sources. This information is appended to transaction data to create reports to GSA and the agencies. Figure 2 shows the key players involved in collecting socioeconomic information on the purchase card merchants. Data Collected to Date Are Inconsistent and Incomplete, but Improvements Are Being Made In response to GSA’s request for fiscal year 2001 socioeconomic data on purchase card merchants, banks and payment card associations reported that they could obtain size or socioeconomic information on about 40 percent of the merchants.
They reported that about 50 percent of the purchase card dollars spent with these merchants went to small businesses. However, this information is not useful because the data collected were inconsistent and incomplete, making them unreliable. The lack of clear definitions and guidelines from GSA for the collection of socioeconomic data resulted in inconsistent reporting by the banks and payment card associations. In addition, some available sources of socioeconomic data are incomplete and unreliable. Therefore, at this time, no meaningful conclusions can be drawn about where purchase card dollars are spent or the effect of the government’s use of purchase cards on small businesses. Drawing on lessons learned in its first attempt at a governmentwide socioeconomic data report, GSA is continuing to work with SBA, DOD, and the private sector to improve the reliability of the data for subsequent reports. Inconsistent Data Due to Lack of Clear Definitions and Guidelines To verify and identify the characteristics of those merchants doing business with the government through purchase cards, a match had to be made between transactional data and the socioeconomic data from government and private databases. However, in its initial data collection effort, GSA did not precisely define the information it was requesting or clearly specify the criteria to be used by the banks and associations as they categorized merchants. Therefore, the data reported to GSA contained widely varying information on the socioeconomic status of merchants. The following are examples of the inconsistencies we found: A payment card association, reporting on behalf of some of the card-issuing banks, reported that it had socioeconomic information for 89 percent of the merchants, while another bank reported that it had this information for 23 percent of the merchants.
These differences do not reflect relative success or failure in collecting the information; rather, they were due to varying interpretations of GSA’s guidance. Neither the associations nor the banks reported the number of merchants whose socioeconomic status was unknown. As a result, the information presents an incomplete and misleading picture of the socioeconomic status of purchase card merchants. MasterCard, Visa, and the banks used different methods to classify merchants. One method placed corporations for which no socioeconomic data were available in the same category as large businesses. Another method followed SBA standards more closely in categorizing the size of businesses. In one case, GSA’s guidance compounded the problem. GSA instructed banks to use the criterion of 500 employees or fewer to identify small businesses, if no other verification was available, rather than directing them to follow SBA’s guidance that ties size to specific industry classifications. SBA officials, who had not been involved in GSA’s initial data collection effort, raised concerns about this definition and are now providing GSA with assistance in determining appropriate guidelines to categorize the data. An SBA official explained that, in certain industry categories such as construction, using 500 employees or fewer as a criterion would encompass virtually all businesses. Data Are Incomplete and Unreliable No meaningful conclusions can be drawn using the data compiled by GSA for fiscal year 2001, as the reported data are incomplete. The banks and payment card associations were only able to establish merchants’ size or socioeconomic status for about 40 percent of total purchase card dollars because, in some cases, available data sources did not provide complete and reliable information. For example, Pro-Net yielded information on size status for only 9.5 percent of merchants.
While some categories of small businesses are required to register in SBA’s Pro-Net in order to be certified, other categories of small business are not required to register. Therefore, businesses requiring certification, such as HUBZone and small disadvantaged businesses, are easier to categorize than businesses for which registration is voluntary, such as woman-owned small businesses. Further, according to industry officials, it is not uncommon for the data in some merchant transaction data fields to contain incorrect information. For example, merchants sometimes place their customer service telephone numbers in the field designated for city so that their telephone number is included on the customer’s credit card statement. In our review of MasterCard and Visa reports and merchant data files, we found obvious errors such as this, as well as duplicate files for the same merchant, the same telephone number for multiple businesses, and missing zip codes. GSA, Agencies, and Private Sector Working to Improve Data Since the spring of 2002, GSA has been working with SBA and other agencies to create more specific guidance for banks and payment card associations. GSA has also included banks and payment card associations in these discussions. GSA’s efforts include defining small business categories, establishing quality standards for data sources, and standardizing reporting. After some initial data have been collected, SBA officials agreed to develop policies for the use of the data in tracking progress towards agencies’ small business goals. According to officials from Austin-Tetra, if definitions and guidelines are agreed upon and adhered to, information about size status may be available for an estimated 65 to 80 percent of merchants. 
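The matching and screening work described above can be illustrated with a minimal sketch. This is not GSA's, the banks', or Austin-Tetra's actual process; the record layout, field names, and merchant data below are all hypothetical, chosen only to show the kinds of problems the report identifies (duplicate merchant files, one telephone number shared by multiple businesses, missing zip codes) and the difficulty of matching transaction records against a registry.

```python
def normalize_phone(phone):
    """Keep digits only, so '(202) 555-0100' and '202-555-0100' compare equal."""
    return "".join(ch for ch in (phone or "") if ch.isdigit())


def screen_merchants(records):
    """Return a list of (problem, merchant_name) tuples flagging duplicate
    merchant files, telephone numbers shared across businesses, and missing
    zip codes -- the error types noted in the review of merchant data files."""
    problems = []
    seen_names = set()
    phone_owners = {}  # normalized phone -> first merchant name seen with it
    for rec in records:
        name = rec.get("name", "").strip().lower()
        if name in seen_names:
            problems.append(("duplicate_name", rec["name"]))
        seen_names.add(name)

        phone = normalize_phone(rec.get("phone"))
        if phone:
            if phone in phone_owners and phone_owners[phone] != name:
                problems.append(("shared_phone", rec["name"]))
            phone_owners.setdefault(phone, name)

        if not rec.get("zip"):
            problems.append(("missing_zip", rec["name"]))
    return problems


def match_size_status(records, registry):
    """Match merchants against a (hypothetical) registry keyed by normalized
    name and zip code; unmatched merchants simply drop out, which is why
    dirty or missing fields depress the match rate."""
    matched = {}
    for rec in records:
        key = (rec.get("name", "").strip().lower(), rec.get("zip"))
        if key in registry:
            matched[rec["name"]] = registry[key]
    return matched
```

Because matching here depends on clean name, phone, and zip fields, even this toy version shows why records with a customer-service number in the city field, or no zip code at all, cannot be reliably categorized.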
Some officials expressed concern about the potential for double-counting small business dollars if, in the future, purchase card data are automatically transferred to the Federal Procurement Data System (FPDS) and socioeconomic data are applied toward agencies’ small business achievements. If purchase cards are used for payments on contracts or orders that have already been reported to FPDS, double-counting could occur. However, it is not clear that this problem would materialize on a widespread basis. For example, the Director of DOD’s Purchase Card Joint Program Management Office told us that there is little likelihood that DOD’s dollars would be double-counted. Currently, DOD generates automatic reports to FPDS for contracts or orders that are placed through traditional procurement methods such as purchase orders. The official said that it is extremely rare for a purchase card to be used for payments that have already been reported to FPDS. Inherent Challenges Prevent Collection of Socioeconomic Data on All Purchase Card Merchants While GSA’s efforts eventually may enable the government to obtain socioeconomic information on a large percentage of purchase card merchants, inherent challenges suggest that it is not possible to gather complete data on all merchants. Payment card associations’ transaction systems were designed to clear transactions, not to meet the socioeconomic reporting needs of the federal government. The data exchanged during transactions generally focus on information needed to ensure that the merchant is paid and the cardholder’s account is charged. As a result, the infrastructure and processes of the purchase card systems and the legal relationships between the merchants, banks, payment card associations, and the government were not designed to accommodate the collection of socioeconomic data. 
Purchase Card Master Contract Cannot Ensure the Collection of Socioeconomic Data The master contract between GSA and the five banks that issue purchase cards cannot ensure the collection of socioeconomic information. Although the contract requires the contracting banks to provide transaction data to the government, which might include limited socioeconomic data, banks are only required to provide this information if the merchant provides it and the contracting banks obtain it. The contract clauses referring to reports containing socioeconomic data are vague, both in specifying the data required and in establishing the level of obligation involved. While the contract mentions a report that includes “summary merchant demographic information,” and “size standard,” which “is generally used by the agency/organization in fulfilling its small business and small disadvantaged business goals,” it does not require that the actual size status of the merchant be provided, nor does it expressly require that the reports be provided at all. Rather, in describing the reporting requirements, the contract states that “the Government prefers that the data . . . be provided,” and that “agencies/organizations may choose to receive some or all of reports.” Moreover, there is no contractual relationship between GSA and the merchants’ banks or the payment card associations, the parties most likely to have access to the information. While GSA is currently considering modifications to the master contract with the card-issuing banks to include more specific guidance on reporting socioeconomic data—such as decision rules for data sources and business status—these changes will not alter the fact that the contract can only establish obligations between the parties to the contract. 
The master contract is only binding on the five issuing banks, which do not have access to information on other banks’ customers and cannot compel the merchants’ banks to provide information on the socioeconomic status of their customers. While payment card associations do have relationships with both the issuing and acquiring banks, and might be better positioned to collect socioeconomic data on behalf of the issuing banks, they are under no contractual or other legal obligation to collect the information, and there are significant practical impediments to doing so. Many Merchants Do Not Provide Socioeconomic Data A purchase card transaction between the government and a merchant does not establish a contractual relationship that requires the merchant to provide socioeconomic data. Further, merchants that are not government contractors have no incentive to report these data if they do not anticipate contracting with the government. Attempts by government agencies and payment card associations to gather missing data through surveys and mailings have been largely unsuccessful. Visa has been involved in two campaigns to collect and update merchant data. According to Visa officials, as recently as last year, Visa mailed half a million letters to merchants requesting socioeconomic information, but less than 2 percent of merchants responded. In January 2000, MasterCard sent out 30,000 letters on behalf of DOD to current DOD suppliers accepting the government MasterCard from DOD buyers. The letter encouraged merchants to update their socioeconomic information with their banks. However, information on only 16 percent of merchants was subsequently updated.
Attempts to use government databases are also ineffective due to the relatively small proportion of merchants who have registered in governmentwide databases, such as Pro-Net, or other government databases that are limited to certain agencies (such as the Central Contractor Registration, used for merchants contracting with DOD, the National Aeronautics and Space Administration, and the Departments of Treasury and Transportation). Of the roughly 360,000 vendors with whom DOD uses the purchase card, very few were included in government databases. According to agency officials, merchants may be inclined to register in these databases only if they are trying to win government contracts. Furthermore, Pro-Net relies on merchants to update their own profiles. Of the 173,374 firms registered in Pro-Net as of August 1, 2002, records for only 87,257, or 50 percent, had been updated within the prior 18 months. According to SBA officials, Pro-Net merged with the Central Contractor Registration in October 2002, and SBA purged its system of inactive firms. As of November 1, 2002, there were 91,656 firms in Pro-Net. Merchants’ Banks Do Not Always Collect Socioeconomic Information on Merchants Because the purchase card program only establishes a contractual relationship between the government and the five card-issuing banks, the merchants’ banks are not contractually or otherwise legally required to obtain socioeconomic information about their merchant customers for the purchase card program. Further, according to bank and payment card representatives, banks usually avoid requesting certain customer socioeconomic information because of concerns about client privacy and the prospect of discrimination complaints (should the bank, for example, fail to approve a merchant account). In addition, the bank officials say they do not need socioeconomic data to make a business decision on whether to approve a merchant account. 
However, both payment card associations have attempted to increase the availability of socioeconomic information on merchants by providing financial incentives, such as lower fees, to merchant banks for collecting these data. Conclusions Although the government likely will never be able to capture complete socioeconomic information on 100 percent of purchase card merchants, the available data can be strengthened to provide more accurate and consistent information that would give decisionmakers a clearer picture of the extent to which small businesses are receiving federal money through the purchase card program. GSA has made a first step toward understanding the complexities of collecting socioeconomic data on merchants accepting government purchase cards. With the lessons learned from that effort, GSA, with the assistance of other federal agencies and the private sector, can take additional steps toward improving the reliability of the data. Recommendations for Executive Action While the government faces a number of challenges in collecting socioeconomic data on all purchase card merchants, there is an opportunity to improve the available data. Therefore, in order to strengthen the ongoing efforts, we recommend that the Administrator of GSA (1) clarify the socioeconomic information that banks and payment card associations are asked to report and conduct periodic assessments to verify that they are interpreting and reporting the data consistently, and (2) specify a rigorous, disciplined approach to identifying and using appropriate information sources for the socioeconomic data and ensure the participants agree to it. Agency Comments We received written comments on a draft of this report from GSA, SBA, Bank of America, and Austin-Tetra. The Office of Federal Procurement Policy, MasterCard, Visa, Citibank, and US Bank offered oral or e-mail comments. DOD did not provide comments. GSA concurred with our findings and recommendations.
GSA indicated that it has begun taking steps to identify and solve problems related to the capture of consistent, accurate, and reliable socioeconomic data, toward a goal of modifying GSA’s purchase card contract and reporting socioeconomic data to one centralized source, FPDS. GSA reports that it has made significant progress in these areas and states that its progress ultimately implements the recommendations in our report. However, we do not believe that our recommendations have been fully implemented. An October 2002 meeting with industry officials left many issues open—including whether transactions over $2,500 would be reported, how the socioeconomic information would be used, and who would be responsible for reporting to whom. GSA should continue to work with the agencies, banks, and payment card associations to ensure that socioeconomic information on purchase card merchants is accurately and consistently collected and reported. GSA’s letter appears in appendix II. SBA provided technical comments, which we incorporated as appropriate. SBA suggested that we include GSA’s role in figure 1 to show that GSA does not directly influence data collection; however, this graphic was not meant to illustrate the data collection process. Figure 1 depicts the flow of information during a purchase card transaction, a process in which GSA is not involved. Figure 2 illustrates GSA’s role in the data collection process. SBA’s letter appears in appendix III. Bank of America offered written comments to assist in clarifying sections of the report. We incorporated these comments where appropriate. Bank of America expressed concern that there is an expectation of a fully revised report on purchase card merchants’ socioeconomic data for fiscal year 2002, despite the fact that decisions on definitions and data elements have not been finalized. We agree with this assessment.
The letter further notes that double counting of payments on existing contracts could be a problem if GSA requires banks to include transactions over $2,500. As we discuss on page 11 of this report, according to a DOD official, this issue is not a concern; however, the working group, led by GSA, may want to clarify this issue in subsequent meetings. Bank of America’s comments appear in appendix IV. Austin-Tetra provided written comments, concurring with our findings and providing additional recommendations to GSA for obtaining socioeconomic data, such as providing incentives for merchants to submit socioeconomic data to their banks. The letter notes that these steps would come at an additional cost to the government. Austin-Tetra’s comments appear in appendix V. In oral comments, Office of Federal Procurement Policy officials concurred with our findings, stating that the report is balanced and accurately portrays the difficulties the government faces in collecting socioeconomic data on purchase card merchants. They suggested that we add more background information on the impetus for GSA's data collection effort. A Visa official provided oral comments. He concurred with our report, stating that it was enlightening, "on the mark," and helped to clarify some misconceptions. The official noted that there is a tradeoff between the desired level of accuracy and the cost of obtaining socioeconomic information on purchase card merchants. He said that, because the purchase card makes up a relatively small proportion of total procurement dollars, the level of granularity the government is requesting might not be worth the dollars needed to obtain this information on each merchant. Further, the official pointed out that little is known about how the purchase card affects small businesses. Therefore, Visa's position is that care must be taken not to assume that the effects are negative. Visa also provided technical comments, which we incorporated as appropriate. 
In e-mail comments, a US Bank official generally concurred with our findings. However, he stated that our recommendations failed to account for the inherent challenges the government faces in its efforts to collect socioeconomic data on purchase card merchants. The official stated that the government contracted with the banks for a “commercially standard” purchase card program, but then sought to require a number of nonstandard features from the contractors. He stressed that the issuing banks and payment card associations have very limited leverage to elicit this information from merchants. He suggested that GSA ask the banks to report only that information that is in their purview and expertise—namely, transaction data—and that GSA could then use government-owned or private sector services to match the transaction data against socioeconomic databases. Technical comments were incorporated as appropriate. Representatives from MasterCard and Citibank provided technical comments, which we incorporated as appropriate. As requested by your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies of this report to other interested congressional committees and the Secretary of Defense; the Director, Office of Management and Budget; the Administrator, GSA; the Administrator, SBA; and the Administrator, Office of Federal Procurement Policy. We are also sending copies to MasterCard, Visa, Citibank, Bank of America, US Bank, and Austin-Tetra. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact me at (202) 512-4841 or Michele Mackin, Assistant Director, at (202) 512-4309. Other major contributors to this report were Heather L. Barker, Lara L. Carreon, and Barbara A. Johnson. 
Appendix I: Scope and Methodology You requested that we include at least the following elements in our report: Determine the steps that federal agencies have taken to require that socioeconomic data be collected on purchase card use, including the standards and requirements established for such collection. Identify the information that federal agencies, especially the General Services Administration (GSA) and the Department of Defense (DOD), have collected for fiscal year 2001 on the socioeconomic status of purchase card merchants and the sources of such information. Identify and compile the information that credit card companies issuing purchase cards for use by federal agencies have collected for fiscal year 2001 on the socioeconomic status of purchase card merchants and the sources of such information. Determine the standards and criteria under which the credit card companies collect socioeconomic information—including the definitions of “small business” that are used and the extent to which such definitions deviate from those promulgated by the Small Business Administration (SBA). Identify, to the extent possible, whether in each transaction purchase cards are being used to make payment on existing contracts or are distinct purchase card transactions. Each of these questions has been addressed in the report. You also asked us to verify the information collected by the banks and payment card associations by means of a survey. However, due to the lack of basic data on many purchase card merchants, we determined that such a survey would not be feasible. The challenges at each stage of the survey process create significant potential for error. For example, defining a universe of merchants from which to draw a sample would be difficult, as the amount of information available for each merchant varies widely. Because so little basic information on merchants exists, a representative sample cannot be ensured. 
The lack of contact information due to missing or inaccurate data would make it impossible to reach some of the merchants. Because of short life cycles, small businesses are generally more difficult to track. Given that response rates to surveys of small businesses have historically been low, high error rates can also be expected. Without basic information to describe the universe, it would be impossible to determine whether response bias exists. Further, the impact of the use of the purchase card on small businesses cannot be determined without prior years’ data. Finally, because merchant data is separate from transaction data, and there is no unique identifier that is consistent for all merchants, any analysis would involve development of new data management and analysis techniques—including extremely complex programs—to match merchant and transaction data. To assess GSA’s governmentwide efforts to collect data on the socioeconomic status of merchants, we reviewed (1) data reported to GSA by the banks and payment card associations for fiscal year 2001, (2) data provided to GSA by Visa for its internal purchase card program, and (3) MasterCard’s merchant file. Our analysis of electronic data files included statistical information on missing data, obvious errors, and duplication. We also reviewed relevant documents and legislation. We interviewed officials at GSA, SBA, DOD, Visa, MasterCard, the three largest banks contracting with GSA (Citibank, Bank of America, and US Bank), and a third party data source, Austin-Tetra. Because of the associations’ reliance on Austin-Tetra as a third party data source, we also assessed the reliability of its database and processes. 
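The merchant-matching difficulty described above can be illustrated with a minimal record-linkage sketch. This is purely hypothetical code, not part of GSA's or the associations' systems: the merchant names, the normalization rules, and the 0.85 similarity threshold are all invented for illustration, and real matching at this scale would require far more sophisticated techniques.

```python
# Hypothetical illustration of linking transaction records to a merchant
# file when no shared unique identifier exists. All names and rules here
# are invented; real merchant matching is far more complex.
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Crudely normalize a merchant name: uppercase, strip punctuation,
    and drop a few common corporate suffixes."""
    cleaned = "".join(ch for ch in name.upper() if ch.isalnum() or ch == " ")
    for suffix in (" INC", " LLC", " CO", " CORP"):
        cleaned = cleaned.removesuffix(suffix)
    return " ".join(cleaned.split())

def match_score(a: str, b: str) -> float:
    """Similarity between two merchant names, 0.0 (no match) to 1.0."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

def link(transaction_merchant: str, merchant_file: list[str],
         threshold: float = 0.85):
    """Return the best merchant-file candidate, or None when no candidate
    clears the threshold -- the unmatched case that drives error rates."""
    best = max(merchant_file, key=lambda m: match_score(transaction_merchant, m))
    return best if match_score(transaction_merchant, best) >= threshold else None
```

Even this toy version shows why a survey was deemed infeasible: every unmatched or mismatched record either drops a merchant from the universe or attributes transactions to the wrong business.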
We reviewed documentation on, and observed and discussed, Austin-Tetra’s business strategy and customers, the extensiveness of its database, its matching methodology (including both electronic and manual matching), its methodology for assigning socioeconomic characteristics to businesses, its procedures for source attribution, and its data assurance practices, including use of a data assurance group. To identify the challenges to the collection and reporting of socioeconomic data on merchants, we interviewed government officials from GSA’s purchase card program, SBA, DOD, and the Office of Federal Procurement Policy. We also interviewed industry officials from the three largest banks providing purchase card services; MasterCard, Visa, and American Express; Austin-Tetra; and a third party data processor, First Data Merchant Services. We also gathered information on small business and socioeconomic definitions from relevant guidance and legislation and discussions with SBA. We gathered information on sources of socioeconomic information and database matching processes from payment card associations and third party data sources. We conducted our review between March and September 2002 in accordance with generally accepted government auditing standards. Appendix II: Comments from the General Services Administration Appendix III: Comments from the Small Business Administration Appendix IV: Comments from the Bank of America Appendix V: Comments from Austin-Tetra
Government purchase cards have streamlined the process of acquiring goods and services by allowing employees to purchase directly from merchants rather than going through the regular procurement process. The government spent $13.8 billion using purchase cards in fiscal year 2001. However, the government does not know how purchase card spending impacts small businesses and other socioeconomic categories, such as woman-owned small businesses and small disadvantaged businesses. Because of these uncertainties, the General Services Administration (GSA), which administers the purchase card program, has begun to collect socioeconomic data on merchants doing business with the federal government through purchase cards. This report assesses GSA's efforts and identifies the challenges to collecting and reporting this data. GSA's effort to collect socioeconomic data in fiscal year 2001 was ineffective because of incomplete, inconsistent, and, therefore, unreliable data gathered by banks and payment card associations on behalf of GSA. The data were inconsistent primarily because GSA did not precisely define criteria for the information it was seeking from the banks. Therefore, no meaningful conclusions can be drawn at this time about where agencies spend purchase card dollars or the effect of purchase cards on small businesses. Nevertheless, GSA has been working with the Small Business Administration, the Department of Defense, and the private sector to develop strategies to improve the data's reliability. By building on the lessons learned in its initial attempt to collect the data, GSA hopes to produce more reliable socioeconomic data for future fiscal years. We identified several challenges that prevent GSA from gathering data on 100 percent of the merchants doing business with the federal government. 
These challenges stem from the nature of the purchase card transaction processing system, which focuses on the data needed to ensure that the merchant is paid and the cardholder's account is charged. It is not designed to collect socioeconomic data for the government. Despite the challenges that prevent the collection of socioeconomic data on all purchase card merchants, well-defined criteria and consistent use of available data sources would provide decisionmakers with a clearer picture of the extent to which small businesses are receiving federal dollars through purchase cards.
Background Believing that high Seawolf submarine program costs would lead to inadequate force levels, the Department of Defense (DOD), in October 1991, established a requirement for a more affordable new attack submarine. According to the Navy, the NSSN’s estimated displacement weight will be about 7,100 tons, 2,000 tons less than the Seawolf’s. The NSSN’s missions include battlegroup support, covert strike warfare, covert intelligence, special warfare, covert mine warfare, antisubmarine warfare, and antisurface warfare operating in both open ocean and littoral (coastal) areas. In August 1992, the Defense Acquisition Board authorized the Navy to initiate concept exploration and definition (milestone 0) studies. A project office was established to set out the basic design and to develop an acquisition strategy that included the schedule of detail design and production. The Navy initially planned for the Board to approve the NSSN acquisition strategy in August 1993, as part of the milestone I decision to enter the demonstration and validation phase. However, the milestone I meeting slipped until January 1994. That meeting resulted in a request that the Navy perform additional studies and analyses. These were completed and submitted to the Board. On August 1, 1994, the Board approved milestone I, and on August 18, 1994, issued an Acquisition Decision Memorandum. The memorandum directed the Navy to submit an updated documentation package for the Board’s approval within 60 days. The package is to include an acquisition strategy report, reflecting the Navy’s plan to initiate detail design and lead ship construction at Electric Boat. The Board also directed the Navy to initiate (1) advanced procurement of the lead ship’s nuclear reactor in fiscal year 1996 and (2) lead ship construction in fiscal year 1998. 
Further, the Board directed the Navy to update the submarine’s combat system acquisition strategy to reflect “a significant degree of private sector involvement in planning an open system architecture,” which contains commercially available hardware and software that meet broad industry standards. A September 1993 cost and operational effectiveness analysis prepared by the Center for Naval Analyses estimated, for comparison purposes, the cost of procuring 30 NSSNs and 30 Seawolf submarines at a rate of 1 ship per year. In constant fiscal year 1994 dollars, the procurement cost for the NSSN was about $45 billion ($1.5 billion each) and for the Seawolf about $56 billion ($1.9 billion each). Applying Management Lessons May Reduce Costs and Avoid Schedule Delays By incorporating management lessons into the NSSN program, the Navy may avoid repeating many of the problems that caused Seawolf detail design and lead ship construction cost increases and schedule delays. In recognition of Seawolf problems, the NSSN project manager told us he intends, subject to DOD approval, to incorporate the five management lessons into the multibillion dollar NSSN program. However, because of the absence of a DOD approved acquisition strategy, the extent to which the NSSN acquisition strategy will include these lessons cannot be assessed now. Use a Single Shipyard to Design and Construct the Lead Submarine Under the split design/construction strategy used for the Seawolf program, Tenneco’s Newport News Shipbuilding and Drydock Company was responsible for the overall design and detail design of the submarine’s forward end, while General Dynamics’ Electric Boat Division was responsible for designing the submarine’s aft end and for constructing the SSN-21 and the SSN-22. The split design approach, with a requirement that design data be suitable for use at either shipyard, was originally instituted to instill competition for building 29 SSN-21 class submarines. 
This approach, which required additional time and resources as well as a high degree of coordination between the two shipbuilders, caused design and construction cost increases and additional time to approve design data and to resolve design drawing problems. To construct the SSN-21, Electric Boat still had to convert Newport News Shipyard’s generic design data into Electric Boat-specific work packages (instructions and materials). According to Seawolf program office officials, the two shipyards were unwilling to open their operations to one another. In addition, the Navy’s Seawolf program office occasionally had to mediate the resolution of design drawing problems between the two shipyards. Confusion between the two shipyards over design drawing delivery schedules was one factor that led to late delivery of design drawings to Electric Boat, the shipbuilder, in 1990 and in the first 6 months of 1991. A Seawolf program office official noted that the Seawolf program office has learned that having one shipbuilder design and construct the submarine can save time and money. The August NSSN Acquisition Decision Memorandum shows that one shipyard will design and build the lead NSSN. Delay Lead Ship Construction Until Design Matures The high degree of concurrent development and lead ship construction caused cost increases on Seawolf. The Navy awarded Newport News Shipbuilding the overall Seawolf detail design contract in April 1987. Construction of the first Seawolf, the SSN-21, started in October 1989, with delivery originally scheduled for May 1995. In some cases, this concurrency required developing and issuing drawings before system designs were fully mature. Although this approach provided the shipbuilder with design data earlier, it also caused a higher degree of design rework and, in some cases, construction rework. 
For example, the Navy’s data requirement lists developed during the early phase of Seawolf design were based, as was the case with prior submarine efforts, on providing the shipbuilder with engineering drawings as the basis for performing construction tasks. It was later discovered that because Seawolf submarines required a significantly greater level of modular construction and outfitting, new and more detailed sectional construction drawings were needed to initiate modular construction tasks. As a result, in June 1990, 8 months after SSN-21 construction started and about 37 months after detail design started, the Navy rebaselined and increased Newport News’ original $303 million detail design contract by $168 million. The rebaselining was for Newport News to prepare and to provide Electric Boat with more detail design data and incorporate final submarine specifications into the detail design. A September 1993 NSSN cost and operational effectiveness analysis found that an additional investment of between $105 million and $175 million in research and development funds to review the NSSN’s specifications and to complete design before lead ship construction contract award could reduce procurement costs by $141 million to $173 million per ship. Starting lead NSSN ship construction with a more mature detail design could result in a more cost-effective and efficient approach than that used under the Seawolf submarine program. In June 1994, the NSSN project manager stated that the Navy plans to begin lead NSSN construction when the detail design matures. However, lead ship construction will still begin in fiscal year 1998, despite the 1-year slip of milestone I. Under the current NSSN schedule, detail design is scheduled to begin in July 1995, with lead ship construction beginning about 27 months later in October 1997. However, we question whether the detail design will be mature enough to avoid repeating similar problems the Navy experienced with the Seawolf program. 
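The scale of the tradeoff reported in the September 1993 analysis can be checked with simple arithmetic. The sketch below assumes, for illustration only, that the per-ship savings would apply across the 30-ship class used in the analysis's cost comparison:

```python
# Rough check of the 1993 cost and operational effectiveness analysis
# figures. Dollar amounts are from the report; applying per-ship savings
# across all 30 ships is an assumption made here for illustration.
ships = 30                        # class size used in the analysis
investment_high = 175             # $ millions, upper bound of added R&D cost
savings_per_ship_low = 141        # $ millions, lower bound of savings per ship

# Least favorable pairing: highest investment against lowest savings.
worst_case_total_savings = ships * savings_per_ship_low
worst_case_ratio = worst_case_total_savings / investment_high

print(worst_case_total_savings)   # 4230 ($ millions)
print(round(worst_case_ratio, 1)) # 24.2
```

Even under this least favorable pairing, the projected class-wide savings exceed the added design investment by more than a factor of 20, which helps explain the analysis's conclusion that maturing the design before construction is cost-effective.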
The Seawolf program experienced design and construction rework, significant cost increases, and schedule delays, despite a 30-month interval between starting detail design and lead ship (SSN-21) construction. Strengthen Specification Development and Approval Process Deficient government specifications for welding HY-100 strength steel have increased SSN-21 construction costs and have delayed the submarine’s delivery from May 1995 until May 1996. In June 1991, Electric Boat experienced problems welding this new steel. As a result, Electric Boat notified the Navy that it had discovered weld cracks where two hull rings were joined together. Further investigation revealed additional unacceptable welds on the SSN-21’s pressure hull and on at least 21 government- and contractor-furnished items. By August 1991, all HY-100 welding had been stopped. The chemical composition of the welding metal, among other things, had resulted in cracking and unacceptable metal yield strengths and ductility. Ultimately, however, the welding cracks were traced to deficient government HY-100 welding specifications. Electric Boat and the Navy took corrective action; all welding resumed by December 1991. As a result of this problem, the Navy paid Electric Boat $77.8 million (in then-year dollars) to fix the cracks. It also caused a 1-year delay in the SSN-21’s delivery. During the determination of defective government specifications, the Commander, Naval Sea Systems Command, requested an independent assessment of the system for developing, preparing, and approving specifications. The assessment, completed in March 1992, showed that weaknesses in developing and qualifying specifications were caused by a lack of management priority and oversight, inadequate and untimely availability of funds, and a shortage of personnel needed to develop and update specifications. 
In addition, the assessment showed that only 39 percent of the specification parameters were supported by historical performance data and less than 5 percent of the parameters were supported by test data. The NSSN project manager said he plans to incorporate a review process that supports developing specifications. In addition, he indicated that the Navy plans to work with critical NSSN vendors early during design to coordinate specifications, including revisions, whenever necessary. Moreover, according to the NSSN project manager, the NSSN, to the extent possible, will incorporate existing systems and components from prior submarine programs and off-the-shelf, commercially available technology. Nevertheless, some existing systems may require varying degrees of reengineering for installation into the NSSN. Earlier Identification of Critical Components and Supply Vendors DOD has identified the decline of the submarine industrial base and the resulting uncertainty surrounding submarine component vendors as key factors contributing to Seawolf cost and schedule delays. Early identification of critical components and supply vendors can help determine whether to buy or manufacture some components in-house and can help reduce potential procurement problems. For the SSN-21, Electric Boat had to manufacture certain systems and components that it was originally planning to buy. This was due either to a lack of qualified vendors or the cost and schedule risks inherent in using a vendor for complex components that were under development (i.e., the weapons storage and handling system). Collectively, the absence of sufficient vendors contributed to Seawolf design and construction cost increases and schedule delays. 
The Navy’s March 1992 assessment showed apparently incomplete coordination with industry and inadequate notification to and consultation with industry regarding major changes in Seawolf specifications, as required by the Naval Sea Systems Command’s specification process. The assessment also showed that vendors were generally dissatisfied with government feedback to their comments during specification development and modification. According to the NSSN project manager, Electric Boat will identify and obtain critical suppliers earlier than was done on the Seawolf program. To improve coordination with vendors and to identify issues that can affect the NSSN’s design and construction, Electric Boat has assembled a team of 100 designers, construction trade people, and key vendors. However, the commitment to and the success of this effort will not be assessable until a later phase of design. Reduce Combat System Development Risks The Navy experienced problems developing the AN/BSY-1 combat system for the Improved SSN-688 class submarine and the AN/BSY-2 combat system for the Seawolf submarine. Because the time to correct AN/BSY-1 combat system design and development problems was insufficient, the AN/BSY-1 became the major factor in delays to the Improved SSN-688 construction program. These problems resulted in an additional $82 million in contract costs for five Improved SSN-688s. In addition, the first nine Improved SSN-688s equipped with AN/BSY-1 systems were delivered to the Navy an average of 17 months late. The AN/BSY-2 combat system scheduled for installation on the SSN-21 experienced cost increases and schedule delays. Changes to the system’s design caused a portion of the submarine to be redesigned at an additional cost. The Navy originally provided Newport News with general space and weight information for the system that the shipyard used to begin designing its portion of the Seawolf. 
The Navy later provided the shipyard with more specific information that caused considerable redesign of the submarine and increased design costs, according to Newport News. The Navy estimated in August 1994 that system development would cost $123 million more than the original contract target cost of $1 billion. Our November 1992 report showed that delivery of the system’s first phase capabilities (all hardware and the majority of software) had been delayed from its original November 1993 delivery to between late March and June 1994. Because the HY-100 welding crack problem delayed the submarine’s delivery 1 year, until May 1996, the Navy revised the AN/BSY-2 system’s first phase delivery to February 1995. According to a February 1994 Defense Acquisition Executive Summary prepared by the Seawolf program office, maintaining the AN/BSY-2 software development schedule to support lead ship delivery remains a challenge. The AN/BSY-2 hardware is complete and ready for delivery to Electric Boat. To reduce combat system cost, schedule, and technical risks the Navy encountered developing the systems for the Improved SSN-688 (AN/BSY-1) and Seawolf (AN/BSY-2) class submarines, the NSSN project manager stated that whenever possible, the NSSN’s combat system will be developed using what he termed an open systems architecture, which consists of commercial, off-the-shelf hardware and software. The Acquisition Decision Memorandum specifies a combat system acquisition strategy that involves “a significant degree of private sector involvement in planning an open system architecture.” Nevertheless, some existing systems may require varying degrees of reengineering for installation into the NSSN. Applying Technical Lessons May Save Millions of Dollars The NSSN project office compiled a database that identified about 1,350 primarily technical lessons from prior Navy programs. Electric Boat, Newport News, and other Navy organizations provided the input for this database. 
Personnel transferring into the NSSN project office from earlier submarine programs also provided some lessons. After consolidating duplicate lessons, the Navy reduced the database to 954 lessons. The NSSN project office’s evaluation process is ongoing, and new lessons are added to the database periodically. Of these 954 lessons, 290 had been approved for incorporation into the NSSN design as of May 1994. Examples of approved lessons are (1) centralizing the ship’s service hydraulic power plant, (2) simplifying the ship’s deck design, and (3) simplifying the ship’s pipe hangers. These three lessons are expected to save over $10 million in acquisition costs. The Navy estimates that NSSN savings from all approved lessons could range from $90 million to $100 million. However, because individual lessons’ costs can offset each other, savings must be assessed on a lesson-by-lesson basis. The potential exists for additional savings because the project office has not completed its review of almost 600 lessons. (See table 1 for status of the lessons.) The NSSN’s project manager noted that the Navy plans to incorporate the 290 technical lessons into the submarine’s preliminary design during the submarine’s demonstration and validation phase. Recommendation To allow an assessment of how the Navy will avoid a repetition of past problems, we recommend that the Secretary of Defense ensure that the formal NSSN acquisition strategy explicitly documents how the Navy is to address and incorporate the management and technical lessons from prior submarine programs. Agency Comments and Our Evaluation In commenting on a draft of this report, DOD generally concurred with the report and indicated that the Navy intended to apply the lessons learned from the prior programs. However, DOD did not believe it was necessary to explicitly document in a formal acquisition strategy how the Navy is to address and incorporate those lessons. 
DOD stated it was confident the current process provides adequate emphasis on lessons learned from prior programs. After considering DOD’s position, we continue to believe that implementation of our recommendation is warranted. This is a multibillion dollar program; the lessons that should have been learned have already been identified; therefore, it seems that documenting how they are to be incorporated is merely a completion of the cycle—a way of better assuring that the Navy avoids a repetition of cost and scheduling difficulties. Further, we believe such documentation will serve as a valuable tool for guiding the implementation of the program. DOD comments are presented in their entirety in appendix I. DOD’s suggestions for improving the clarity of the report have been incorporated in the text where appropriate. Scope and Methodology To determine the types of experiences the Navy should apply to the NSSN effort, we reviewed our prior products on the SSN-21, SSN-688, Trident, the combat systems, and other organizations’ reports on lessons learned. We held discussions with Navy program officials for the Seawolf program, the AN/BSY-1 and the AN/BSY-2 combat system programs, and the SSN-688 and the Trident submarine programs. We held discussions with the Supervisors of Shipbuilding in Groton, Connecticut, and with Naval Undersea Warfare Center officials in Newport, Rhode Island. We also held discussions with Navy officials responsible for planning the NSSN’s development in Washington, D.C. We reviewed the NSSN project office’s database of technical lessons learned (experiences and suggestions) and reviewed and analyzed Navy studies and assessments. We obtained, reviewed, and assessed suggestions provided by Electric Boat, Groton, Connecticut; and Newport News Shipbuilding, Newport News, Virginia. 
Electric Boat provided more detailed information on its views on selected lessons learned that should be applied to the NSSN program, but Newport News Shipbuilding did not because of other business pressures. We conducted our review from June 1993 to June 1994 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of Defense and the Navy and to congressional oversight committees. We will also make copies available to others upon request. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. The major contributors to this report are listed in appendix II. Comments From the Department of Defense Major Contributors to This Report National Security and International Affairs Division, Washington, D.C. Boston Regional Office List of SSN-21 GAO-Related Products Since 1991 GAO has performed work on the Seawolf program since 1985. The following chronology presents products issued since 1991. Navy Ships: Seawolf Cost Increases and Schedule Delays Continue (GAO/NSIAD-94-201BR, June 30, 1994). Navy Ships: Problems Continue to Plague the Seawolf Submarine Program (GAO/NSIAD-93-171, Aug. 4, 1993). Navy Ships: Plans and Anticipated Liabilities to Terminate SSN-21 Program Contracts (GAO/NSIAD-93-32BR, Nov. 27, 1992). Navy Ships: Status of SSN-21 Design and Lead Ship Construction Program (GAO/NSIAD-93-34, Nov. 17, 1992). SSN-21, Seawolf Contract Terminations (GAO/NSIAD-93-41R, Nov. 6, 1992). Navy Shipbuilding: Effects of Reduced SSN-21 Procurement Rates on Industrial Base and Cost of Program (GAO/NSIAD-92-140, Apr. 8, 1992). Submarine Combat System: BSY-2 Development Risks Must Be Addressed and Production Schedule Reassessed (GAO/IMTEC-91-30, Aug. 22, 1991). Submarine Combat System: Status of Selected Technical Risks in the BSY-2 Development (GAO/IMTEC-91-46BR, May 24, 1991). A May 24, 1991, letter to Representative Herbert H. 
Bateman discussing cost projections of several SSN-21 procurement scenarios.
Pursuant to a congressional request, GAO reviewed the Navy's plans to incorporate lessons learned from prior submarine programs into the design and construction of the NSSN, a new class of nuclear-powered attack submarine. GAO found that: (1) by incorporating management lessons from prior submarine construction programs, the Navy could reduce its costs and avoid the schedule delays that it has experienced in the Seawolf SSN-21 program; (2) the NSSN project manager intends to incorporate lessons learned from the Seawolf SSN-21 program into the NSSN program acquisition strategy, but the extent to which these lessons will be applied cannot be assessed until the Department of Defense (DOD) approves the strategy and makes it available to the public for evaluation; (3) the Navy estimates that NSSN cost savings could range from $90 million to $100 million; and (4) the DOD acquisition strategy should explicitly address how the Navy will avoid repeating the problems of prior construction programs.
Background Emergency communications interoperability refers to the ability of first responders and emergency preparedness and management officials to use their radios and other equipment to communicate with each other across agencies and jurisdictions when needed and as authorized, as shown in our hypothetical example of response to a fire in a federal building in figure 1. According to DHS’s NECP, emergency communications interoperability is one of the components of an effective emergency preparedness and management structure. DHS’s Interoperability Continuum states that first responders need interoperability during day-to-day incidents as well as large-scale emergencies. For example, interoperability is important in responding to localized emergency incidents, such as a vehicle collision on an interstate highway, and in regional incident management, such as in responding to a disaster. It also facilitates first responder communications during planned events such as sporting events, State of the Union addresses, or presidential inaugurations that involve multiple responding agencies. The NCR is a legally-designated geographic region that includes the District and local jurisdictions in the state of Maryland and the Commonwealth of Virginia, as shown in figure 2. There is no single operational authority for emergency response in the NCR because the responsibilities reside with state and local jurisdictions. Instead, the NCR is supported by a network of committees that were created by the jurisdictions and include representatives from the District, Maryland, and Virginia, as well as the federal government and private and nonprofit entities such as MWCOG. These committees work together to improve the region’s ability to prepare for, respond to, and recover from hazards.
For example, key committees include: The Senior Policy Group, which is composed of senior emergency management officials from the District, Maryland, Virginia, and the ONCRC, coordinates efforts to increase regional preparedness, mitigation, and response capabilities in the NCR. The Senior Policy Group also oversees the allocation and implementation of UASI funding and determines priority actions for increasing the region’s preparedness and response capabilities as well as reducing vulnerability to terrorist attacks. The Emergency Preparedness Council (Council), which is an advisory body of representatives from the District, Maryland, Virginia, and private and nonprofit entities, reports to the MWCOG Board of Directors. This Council provides oversight of the regional emergency coordination and strategic plans to identify and address gaps in readiness in the NCR. It also is the federally-required working group with oversight responsibility for the UASI grant program. In this capacity, the Council coordinates with the Senior Policy Group with the aim of ensuring that grant funds are used to support projects that will build an enhanced and sustainable capacity to prevent, protect against, and recover from threats or acts of terrorism. The UASI grant funds can be used to invest in technology, equipment, training, exercises, and management. GAO, Emergency Communications: Various Challenges Likely to Slow Implementation of a Public Safety Broadband Network, GAO-12-343 (Washington, D.C.; Feb. 22, 2012). Land mobile radio (LMR) systems typically consist of handheld portable radios (typically carried by emergency responders); mobile radios (often located in vehicles); base station radios (located in a fixed position, such as a dispatch center); and repeaters (towers that increase the communication range of handheld portable radios and base station radios by retransmitting received signals). LMR systems are designed to provide rapid voice call set-up and group-calling capabilities.
Group calling is important for first responders because it enables one individual to simultaneously communicate to every member of a group, such as all firefighters in the interior of a burning building. According to DHS’s 2014 NECP, because LMR systems will remain the primary tool for mission-critical voice communications for many years to come, they have to meet a high standard for reliability, redundancy, capacity, and flexibility. Thus, according to the 2014 NECP, for many public safety agencies, maintaining their LMR systems and improving interoperability continues to be their top communications priority. According to the Homeland Security Act of 2002 (HSA), the ONCRC, located within DHS’s Federal Emergency Management Agency (FEMA), is required, among other things, to: coordinate the activities of DHS relating to the NCR; coordinate on terrorism preparedness with federal, state, local, and regional agencies, and private sector entities in the NCR to ensure adequate planning, information sharing, training, and execution of domestic preparedness activities among these agencies and entities; serve as a liaison between the federal government and state, local, and regional authorities, and private sector entities in the NCR to facilitate access to federal grants and other programs; and provide state, local, and regional authorities in the NCR with regular information, research, and technical support to assist their efforts in securing the homeland. The HSA also requires the ONCRC to submit an annual report to Congress that includes: an identification of the resources required to fully implement homeland security efforts in the NCR; an assessment of the progress made by the entities in implementing homeland security efforts (including emergency communications interoperability); and recommendations to Congress regarding the additional resources needed to fully implement homeland security efforts.
The Director of the ONCRC is also a member of the NCR’s Senior Policy Group. In fiscal year 2015, ONCRC’s budget was $3.4 million, and it is currently authorized at 20 full-time equivalent staff in Washington, D.C. DHS’s Office of Emergency Communications (OEC) was established pursuant to statute in 2006 in response to the September 11th terrorist attacks and Hurricane Katrina in 2005. The OEC’s mission is to support and promote communications systems used by first responders and government officials to keep America safe, secure, and resilient. Among other things, OEC is responsible for leading the nation’s interoperable public safety, national security, and emergency preparedness communications efforts. The OEC also is responsible for providing training, workshops, and guidance to help federal, state, and local agencies and the private sector develop their emergency communications efforts. ONCRC Has Taken Various Actions to Help Improve Emergency Communications Interoperability in the NCR The ONCRC has worked with both state and local entities and other DHS components on various efforts aimed at improving emergency preparedness in the NCR, including emergency communications interoperability. In particular, the ONCRC’s 2013 report to Congress notes that one of its principal mechanisms to assist state and local agencies in the region with emergency communications interoperability is through its participation on several committees that are involved in planning and carrying out efforts to build and sustain preparedness capabilities. For example, ONCRC staff helped revise the NCR’s 2013 Homeland Security Strategic Plan (NCR Strategic Plan). The NCR Strategic Plan represents the region’s strategy for improving preparedness to address various hazards. According to the Director of ONCRC, the ONCRC works with state and local agencies in the NCR to support their activities aimed at achieving the plan’s goals.
One of the goals of the plan is to ensure interoperable communications capabilities. The plan identified a number of NCR initiatives to achieve this goal, including: ensuring communications interoperability across agencies, managing and coordinating radio upgrades across jurisdictions, maintaining a cache of extra radios, conducting training to improve the ability of all NCR partners to access and use communications systems effectively, and encouraging participation in biannual communication exercises that test all regional communication platforms. In addition, the ONCRC has worked with NCR agencies to continue developing the National Capital Region Network (NCRNet), which is a secure, non-commercial fiber optic network. As part of the interoperable communications infrastructure, the NCRNet is designed to address the NCR’s need for a web-based capability for secure data communications and should allow for more efficient, flexible, and secure data exchange, particularly among state and local jurisdictions within the NCR. According to the ONCRC’s 2014 report to Congress, most of the local jurisdictions in the region are connected via the NCRNet, but other NCR agencies are not. In comments on a draft of this report, ONCRC officials said that, as of December 2015, all of the jurisdictions in the NCR are connected to NCRNet and efforts are underway to include federal agencies in the NCRNet. According to the Director of ONCRC, although NCR agencies and entities are not required to implement the NCR’s Strategic Plan, they are committed to implementing it because they helped develop the plan and are stewards of public trust and resources. In its 2014 and 2015 reports to Congress, the ONCRC states that, as a result of the NCR’s Strategic Plan, state and local agencies have a framework for sustaining current emergency communications capabilities, such as interoperability, and building new ones.
The reports also noted that achieving the plan’s goals will improve the region’s preparedness to address critical risk in the NCR. As noted previously, one of the ONCRC’s responsibilities is to serve as a liaison with entities in the NCR to facilitate access to federal grants. ONCRC officials told us that the ONCRC (as a member of the Senior Policy Group) has collaborated with the NCR’s Emergency Preparedness Council to facilitate state and local agencies’ access to DHS’s UASI grant program. The UASI grant program is the primary source of federal homeland security funding for the NCR. In fiscal year 2014, DHS allocated $53 million through the UASI grant program to the NCR to enhance the region’s homeland security and preparedness capabilities. Almost $7 million of the $53 million was to fund projects aimed at the NCR Strategic Plan’s goal to ensure interoperable communications capabilities, such as purchasing radios and other equipment as well as developing a strategic plan for radio encryption. Regarding other types of grants for homeland security, we found in 2013 that officials in the NCR do not have access to comprehensive information on federal funding sources for homeland security and emergency-management capabilities other than UASI grants and recommended that the ONCRC collect and maintain information on all such federal funding sources for NCR jurisdictions. Although the ONCRC initially agreed with this recommendation, officials currently state they do not plan to implement it, in part, because it is the responsibility of the NCR’s Project Management Office. The ONCRC also helps support emergency communications interoperability in the NCR by coordinating with FEMA Region III’s Regional Emergency Communications Coordination Working Groups to share information with NCR agencies about interoperability, including lessons learned and best practices, such as from the 2015 papal visit, the 2010 earthquake, and snowstorms in the NCR.
In addition, the ONCRC staff said that, in 2008, when the FCC directed reconfiguration of the 800 MHz band radios—used by police, firefighters, and emergency service personnel—the ONCRC assisted federal agencies in the NCR with ensuring that their 800 MHz band radios were included as part of the local jurisdictions’ reconfiguration plans. Further, according to ONCRC officials, FEMA maintains a cache of extra radios that can be distributed during an emergency to other NCR agencies’ first responders whose radios may not be interoperable with each other. For example, when the U.S. Capitol Police requested additional radios during the 2013 State of the Union address, the ONCRC issued the radios to them, according to ONCRC officials. The ONCRC has coordinated with a number of other DHS components in their efforts to improve emergency communications interoperability across the country, including in the NCR, according to ONCRC officials. For example, officials noted that the ONCRC worked closely with DHS’s OEC in the development of DHS’s 2014 National Emergency Communications Plan (NECP), which establishes a national strategy for improving emergency communications (including interoperability) across all levels of government and increasing coordination across the emergency response community. In the 2014 NECP, the Secretary of Homeland Security states that ensuring interoperable communications among responders during all threats and hazards is paramount to the safety and security of all Americans. The 2014 NECP aims to improve the communications capabilities of first responders at all levels of government and states that one of DHS’s top priorities is to identify and prioritize areas for improvement in first responders’ LMR systems. To do so, the NECP has five goals that include enhancing decision making, coordination, and planning for emergency communications and improving first responders’ ability to communicate through training and exercise programs.
The ONCRC also coordinated with OEC to develop the Interoperability Continuum (Continuum), which is designed to assist first responders and policy makers across the country with planning and implementing interoperability solutions for emergency communications. For example, the Continuum identifies the five elements that public safety agencies should address to achieve interoperability. The five elements are to: establish a governance structure; develop standard operating procedures; acquire and implement technology that meets user needs; provide training and exercise programs; and ensure that interoperable communications technologies are used. The Interoperability Continuum can also be used by jurisdictions to track progress in strengthening interoperable communications. While the Interoperability Continuum is guidance, emergency management officials from the District, Maryland, and Virginia have incorporated the Interoperability Continuum’s five elements in their 2013 Statewide Communications Interoperability Plans (SCIPs). Specifically, the District, Maryland, and Virginia’s 2013 SCIPs have aligned their strategic goals for interoperability—governance, standard operating procedures, technology, training and exercises, and usage—with the Interoperability Continuum’s five elements. The SCIPs identified several initiatives aimed at achieving these goals. For example, regarding governance, Maryland states that one of its initiatives is to codify its existing governance structure through legislation. The District and Virginia plans note that refining the purpose and membership of their statewide interoperability executive committees is one of their initiatives related to governance. To ensure that first responders’ technological needs are met, the District, Maryland, and Virginia plan to add nationwide interoperability channels, conduct vulnerability assessments of critical communications infrastructure, and maintain and upgrade existing technologies.
Furthermore, the ONCRC worked closely with OEC on developing emergency communications grants guidance, according to ONCRC officials. The guidance aims to provide state and local grant recipients with information on emergency communications policies, eligible costs, best practices, and technical standards for investing federal funds in emergency communications projects. The guidance also recognizes the need to sustain current LMR systems and encourages grant recipients to participate, support, and invest in planning activities that will help them prepare for deployment of new emergency communications systems or technologies. For example, grant recipients should continue developing plans and standard operating procedures, conducting training and exercises, and investing in standards-based equipment to sustain LMR capabilities. Overall, emergency management officials in the District, Maryland, and Virginia we spoke to were generally satisfied with ONCRC’s efforts to coordinate with them to help ensure interoperability of emergency communications. However, these officials also said that achieving interoperability of emergency communications among all NCR agencies and entities when needed will continue to be a challenge, in part, because of the size and complexity of the region. As noted previously, the ONCRC is required to report to Congress annually on the progress of emergency preparedness in the region, including the state of emergency communications interoperability. In the 2014 and 2015 reports to Congress, the ONCRC states that achieving the goals in the NCR’s Strategic Plan will improve the region’s preparedness to address risk in the NCR. However, these reports do not include performance measures that would indicate the extent to which those goals have been achieved. We found in 2013 that, while NCR agencies had taken steps to develop measures to assess the region’s performance in improving emergency preparedness, more could be done to assist in these efforts. 
We recommended that the ONCRC assist NCR agencies in developing performance measures to better assess the implementation of the NCR’s Strategic Plan. ONCRC agreed with this recommendation and ONCRC officials told us that they are working on implementing it but did not provide a timeframe for completion. In the interim, without performance measures as we recommended for monitoring and assessing regional efforts to enhance interoperability, ONCRC’s ability to provide Congress with comprehensive information on the extent to which first responders in the NCR have emergency communications interoperability when needed and authorized is hampered. For example, in the 2013 through 2015 reports to Congress, the ONCRC reports that NCR agencies continue to make progress towards achieving interoperable communications, but does not report on the extent to which emergency communications interoperability exists in the region. The ONCRC’s Ability to Coordinate with Federal Agencies to Help Improve Emergency Preparedness, Including Communications Interoperability, in the NCR is Currently Limited As discussed in the previous section, the ONCRC coordinates with state and local agencies in the NCR primarily through its participation on NCR committees including the Senior Policy Group and the Emergency Preparedness Council. However, the ONCRC currently does not have a formal mechanism in place to coordinate with federal agencies and is making some effort to improve how it coordinates with them. According to ONCRC’s 2015 report to Congress, over 270 federal agencies exist in the NCR, and ONCRC officials are trying to identify those that should be involved in these coordination efforts. For example, as demonstrated in examples cited previously, federal entities—such as DHS, the Navy, and the U.S. Capitol Police—have responsibilities related to emergency preparedness and incident response in the NCR. 
We noted in 2012 that many of the meaningful results that the federal government seeks to achieve—such as those related to providing homeland security—require the coordinated efforts of more than one federal agency. From the establishment of the ONCRC in 2002 through 2014, the Joint Federal Committee (JFC) was the ONCRC’s primary means of coordinating federal efforts with state and local agencies in the NCR. The JFC’s charter states that it was established to provide a forum for policy discussions, information sharing, and issue resolution regarding federal preparedness activities in the NCR. It was chaired by the Director of ONCRC who was to preside over all meetings, ensure the development of a meeting agenda, and ensure that the JFC’s responsibilities and activities were carried out. Membership in the JFC was open to all federal departments and agencies with offices in the NCR and meetings were to be held at least quarterly. However, the ONCRC officials said that the JFC has not convened since 2014. In its 2014 report to Congress, the ONCRC stated that the JFC is not operating effectively and efficiently to accomplish its mission and would be restructured. The report also noted that federal agencies commented that the JFC lacked the authority, organization, and ability to focus on specific issues to be effective. According to ONCRC officials, during its existence, the JFC focused on information sharing, and they plan to restructure it into a federal coordinating body that will assist in the interagency and intergovernmental coordination of homeland security within the NCR. In addition, they said that once restructured, the JFC would produce guidance (such as standard operating procedures) and identify lessons learned. 
The Director of the ONCRC told us that in the absence of the JFC, the ONCRC has held informal meetings with some federal agencies in the NCR, such as the Secret Service during the 2015 papal visit, but said that this was not a sufficient mechanism for coordinating with federal agencies in the NCR. The Director also stated there is value to having a coordination mechanism in the NCR, such as the JFC, in part, because it would help federal agencies in the NCR to better coordinate the federal response to future incidents in the region. The Director said that the ONCRC was in the early stages of restructuring the JFC and estimated that it should be reconvened by 2017. However, according to ONCRC officials, written plans or documents for the specific elements of the restructuring were not available. For many years, we have reported about the importance of collaboration between and among federal agencies. For example, we have noted that interagency mechanisms or strategies to coordinate programs that address crosscutting issues may reduce potentially duplicative, overlapping, and fragmented efforts. When the JFC existed, it did not fully operate in a manner that was consistent with key considerations for implementing interagency collaboration mechanisms that we have previously identified. According to our prior work on collaboration, agencies can strengthen their commitment to working collaboratively by having written agreements. Our work has also shown that when implementing collaborative mechanisms, clearly articulating agency roles and responsibilities and how agencies will collaborate, including how they will operate across agency boundaries, into a written document can be a powerful tool for collaboration and doing so can provide a clear understanding of those roles and responsibilities. Our body of work has also shown that written agreements are most effective when they are regularly updated and monitored. 
The JFC did not have a written agreement that defined the general processes and procedures it used to carry out its responsibilities. Instead, the JFC had a high-level charter, which was last updated in 2009. The charter stated that membership shall be open to all departments and agencies of the three branches of the federal government. However, the charter did not provide information on (1) the roles, responsibilities, structure, and functions of its members; or (2) how its members were to work together across agency boundaries. It may be difficult for the JFC, once restructured, to enhance interagency understanding, coordination, and collaboration among federal agencies in the NCR without such a written agreement. Addressing the above key considerations for implementing collaborative mechanisms in the planned restructuring of the JFC could provide greater clarity to its members on their roles and responsibilities, particularly when responding to incidents, as well as improve the ONCRC’s ability to carry out its statutory responsibilities for coordinating federal homeland security and emergency-management activities in the NCR. Conclusions The ONCRC’s statutory responsibility for overseeing and coordinating emergency preparedness, including emergency communications interoperability, in the NCR is important for helping to ensure that federal, state, and local agencies can communicate and share information with each other across agencies and jurisdictions when needed and as authorized. However, until the ONCRC implements our recommendation—to assist NCR agencies in developing performance measures to better assess the implementation of the NCR Strategic Plan—the ONCRC has limited ability to monitor and report to Congress on progress in achieving emergency communications interoperability in the region.
Moreover, the ONCRC’s primary means of coordinating with federal agencies in the NCR (the JFC) has not convened since 2014 and was not operating in a manner fully consistent with some of our key considerations for implementing interagency collaborative mechanisms, such as clearly articulating roles and responsibilities in a written document. According to the Director of ONCRC, efforts are underway to restructure the JFC. Incorporating these key considerations would be an important step toward improving interagency collaboration, particularly among federal agencies, in the NCR. In particular, revising the JFC’s charter to describe in general how the JFC will operate and, in particular, each member’s role and responsibilities would better enable the JFC to assist the ONCRC with coordinating federal agencies’ efforts to help enhance emergency preparedness, including interoperability, in the NCR. Recommendation for Executive Action To further build on the efforts to improve emergency communications interoperability in the NCR, we recommend that the FEMA Administrator direct the Director of ONCRC to take the following action: as part of its efforts to restructure the JFC, clearly articulate in a written agreement the roles and responsibilities of the participating agencies and specify how these agencies are to work together across agency boundaries. Agency Comments We provided a draft of this report to DHS for comment. On March 2, 2016, DHS provided written comments, which are reprinted in appendix I, and provided technical comments, which we incorporated as appropriate. DHS concurred with our recommendation and described action under way to address it. Specifically, the ONCRC has formed a temporary working group composed of volunteers from various federal agencies.
That group, among other things, will be responsible for determining the JFC’s mission, functions, deliverables, and membership requirements as well as drafting a new charter to clearly state members’ roles and responsibilities. The ONCRC estimated a completion date of March 31, 2017. We are sending copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, the Administrator of FEMA, the Director of ONCRC, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: DHS Management Response Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Tammy Conquest, Assistant Director; Melissa Bodeau; Antoine Clark; Sharon Dyer; Rich Hung; Sara Ann Moessbauer; Josh Ormond; Cheryl Peterson; and Lisa Shibata made key contributions to this report.
The NCR is considered at high risk for various threats and hazards. Federal, state, and local agencies in the NCR continue to face challenges with emergency communications interoperability—that is, the ability to use radios to communicate across entities when needed. The federal government has taken actions to improve interoperability in the NCR including allocating almost $720 million through a DHS grant program to enhance regional preparedness since fiscal year 2002, and establishing the ONCRC to coordinate NCR entities on homeland security activities, including interoperability. GAO was asked to review federal efforts to improve emergency communications interoperability in the NCR. This report examines (1) actions the ONCRC has taken to help improve emergency communications interoperability in the NCR and (2) status of the ONCRC's efforts to coordinate with federal agencies to help improve emergency preparedness in the NCR, including communications interoperability. GAO reviewed documentation from the ONCRC and interviewed DHS officials and emergency managers from the District of Columbia, Maryland, and Virginia. The Office of National Capital Region Coordination (ONCRC), within the Department of Homeland Security (DHS), has taken various actions, mainly through coordination with state and local agencies, to help improve emergency communications interoperability in the National Capital Region (NCR), a legally-designated area including Washington, D.C. and nearby parts of Virginia and Maryland. For example: The ONCRC participates in several committees that are involved in planning and carrying out efforts to build preparedness and response capabilities of the region. In particular, the Director of the ONCRC is a member of the NCR's Senior Policy Group, which coordinates these efforts. The ONCRC staff helped develop the NCR's 2013 Homeland Security Strategic Plan . One of the goals of the plan is to ensure interoperable communications capabilities. 
The Strategic Plan identified a number of NCR initiatives to achieve this goal, including supporting the establishment and maintenance of radio interoperability and managing and coordinating radio upgrades across jurisdictions. As part of the responsibility to serve as a liaison with entities in the NCR, the ONCRC has collaborated with the NCR's Emergency Preparedness Council (an NCR advisory body) to facilitate state and local agencies' access to DHS's Urban Area Security Initiative grant program—the primary source of federal homeland security funding for the NCR. In fiscal year 2014, DHS allocated $53 million in grant funding to the NCR to enhance the region's homeland security and preparedness capabilities. Almost $7 million of this amount was to fund activities, such as purchasing radios and other equipment, aimed at achieving the NCR Strategic Plan's goal to ensure interoperable communications capabilities. A key role of the ONCRC is to coordinate with federal, state, and local NCR entities on emergency preparedness and homeland security activities. However, the ONCRC currently does not have a formal mechanism in place to coordinate with federal agencies. From 2002 through 2014, the Joint Federal Committee (JFC) was the ONCRC's primary means of coordinating with federal agencies in the NCR. The ONCRC has not convened the JFC since 2014 and plans to restructure it. Officials explained that the JFC was not efficient and effective as a coordinating body and that they plan to strengthen its coordination capabilities. However, written plans were not available. When the JFC existed, its operation was not fully aligned with key considerations for interagency collaboration mechanisms that GAO has identified. In particular, the JFC's charter did not specify the roles and responsibilities of participating agencies or how they were to work together across agency boundaries.
Addressing these interagency collaborative mechanisms in the planned restructuring of the JFC could provide greater clarity on roles and responsibilities and enhance its ability to coordinate federal efforts in the region.
Background The LCS consists of two distinct parts: (1) a seaframe, which is essentially the ship itself, and (2) a mission package, which is an interchangeable set of sensors, weapons, aircraft, surface craft, and subsurface vehicles carried on and deployed from the seaframe to perform three different primary missions: mine countermeasures (MCM), SUW, and anti-submarine warfare (ASW). LCS was initially developed to provide a lower-cost surface combatant with a smaller crew than other ships and modest combat capabilities in focused areas, compared to higher cost multi-mission surface combatants like destroyers. LCS is envisioned to operate in both littoral waters and the deep ocean in all theaters of operation. Early in the program, the Navy decided to forgo a number of traditional ship requirements in order to help reduce the costs and the weight and size of LCS, which in turn made the ship less robust in terms of weaponry and survivability than other surface combatants. Those decisions were validated by the Department of Defense’s (DOD) Joint Requirements Oversight Council. Both LCS variants initially leveraged commercial ship designs, and were modified in accordance with established sets of technical criteria, called rules, that were developed by the American Bureau of Shipping (ABS). ABS is a not-for-profit ship classification society that provides independent technical assessments to ensure vessels are built in accordance with the applicable rules, and can also conduct periodic surveying of in-service ships. ABS was under contract with the Navy to provide technical expertise on the LCS program and to develop rules used in the design of LCS, but this contract ended in June 2012. LCS Acquisition The Navy awarded contracts to two contractor teams that developed designs for the LCS seaframe reflecting different solutions to the same set of requirements. 
The Navy is procuring two distinct variants: a steel monohull design with an aluminum superstructure called the Freedom variant, and an all-aluminum trimaran design called the Independence variant. The Freedom variant has odd hull numbers and is being built at Marinette Marine in Marinette, Wisconsin. The Independence variant has even hull numbers and is being built at Austal USA in Mobile, Alabama. The Navy has contracted for 24 seaframes with equal numbers of both variants and has taken delivery of four to date. Twenty seaframes are currently covered under block buy contracts and the Navy anticipates funding construction of seaframes through 2016, with deliveries continuing until 2020. The Navy plans to contract for two additional ships in fiscal year 2016 and plans to award further contracts for three LCS seaframes in both 2017 and 2018—though the Navy’s acquisition strategy for these years is still in development. Table 1 shows the status of the LCS seaframe procurement. The Navy requested $1.4 billion for three LCS seaframes in its fiscal year 2016 budget request. The Navy’s plans to begin development and procurement of the new modified LCS are not yet known, although the Navy has stated that its goal is to begin procurement of the lead ships in 2019. Each LCS will be capable of carrying an SUW, ASW, or MCM mission package, as required by the circumstances. The mission packages are being developed in increments; the Navy plans to develop four SUW increments, four MCM increments, and one ASW increment. The mission packages will provide the bulk of the combat capability or lethality for the ship. The Navy has 10 mission packages in its inventory and currently plans to buy 64 mission packages. According to Navy officials, the recent decision to develop a modified LCS has not changed the current end quantity of mission package purchases. Survivability Survivability is the ability of a ship to avoid, withstand, or recover from damage. 
It consists of three elements: susceptibility, vulnerability, and recoverability. Susceptibility is the degree to which a ship can be targeted and engaged by threat weapons. Some ways of reducing a ship's susceptibility include avoiding or defeating a threat by using a combination of tactics, signature reduction, countermeasures, and self-defense systems. LCS uses speed, maneuverability, modern defensive weapons, organic systems (e.g., 57mm gun), and sensors to counter surface, air, and underwater threats. Vulnerability is a measure of a ship's ability to withstand initial damage effects from threat weapons and to continue to perform its primary warfare mission areas. The LCS design uses three different vulnerability scenarios that, depending on the severity of the damage, allow it to continue to perform its primary mission; exit the battle area under its own power; or conduct an orderly abandon ship. Recoverability is a measure of a ship's ability to take emergency action to contain and control damage, prevent loss of a damaged ship, minimize personnel casualties, and restore and sustain primary mission capabilities. The LCS seaframe provides most of the survivability features for the crew, including damage control and safety systems. For example, LCS has three redundant firefighting systems. The Navy specified LCS survivability to be greater than that of auxiliary ships, which have a comparably low survivability level, but less than that of frigates and amphibious assault ships—as shown in table 2. According to Navy officials, the Navy designed LCS to what they refer to as a Level 1+ standard, meaning it had additional features beyond those of other Level 1 ships, including tailored survivability requirements for underwater shock and limited fragmentation and bullet armor; and improved ability to withstand flooding after a damage event.
Lethality Lethality is the ability of a weapon system—in this case LCS—to damage or destroy threats, including an enemy ship, aircraft, or missile. Lethality enables survivability because if LCS is able to sink or damage an approaching enemy vessel before it attacks, that enemy vessel may be unable to fire at LCS. The LCS CDD defines requirements related to lethality and identifies specific threats that LCS is expected to be able to destroy and the range at which it should do so. The seaframes provide sensors and communications systems needed for ship operations and self-defense weapons for both the SUW mission and defense against enemy aircraft and missiles, called anti-air warfare. The SUW mission package augments the ship's lethality by adding two 30mm gun mounts and an armed helicopter. Eventually, a surface-to-surface missile is planned to be added to the third increment of this mission package. Table 3 depicts the combat system equipment carried on the seaframes and the weapon systems that are added with the mission packages. LCS was designed to be able to address the threat of small boats. Figure 1 depicts examples of two types of small boats. Test Events The Navy uses several types of testing to evaluate the weapon systems it develops, as required by DOD acquisition policy and statute. Developmental testing is typically sponsored by the program office, is often conducted in conjunction with the contractors, and is used to assess whether the system design is satisfactory and meets technical specifications. Developmental test events, such as combat system ship qualification trials, allow the Navy to verify and validate combat and weapon system performance. Technical evaluation is a testing activity used to assess the readiness of the system for operational testing.
Operational testing includes live-fire testing, and is used to determine whether the system can effectively execute its mission in an operational environment when operated by typical sailors against relevant threats. Operational testing is required by statute. The Navy has used a combination of developmental and operational testing and modeling and simulation to demonstrate the survivability and lethality of LCS. DOD granted the LCS program a waiver to relieve the Navy of the requirement to do full-scale survivability testing. Such waivers are common in shipbuilding, as it is unrealistic to use a production ship and a live test to assess certain types of damage—for example, how fire spreads throughout the ship. DOT&E—the agency responsible for approving test plans—approved a modified live fire test and evaluation plan that takes advantage of testing on similar components and utilizes historical combat data. In place of live testing, the Navy has used a number of surrogate tests and modeling and simulation to try to retire risk in these areas. Surrogate testing uses decommissioned ships (where available) or representative portions of ship structure, and subjects them to damage similar to what might be caused by threat weapons. These tests help inform and validate the results of computer-based modeling and simulation. The Navy also conducts test events to demonstrate the effectiveness of the ship's weapon systems and sensors, and it has a test plan to demonstrate the effectiveness of each mission package increment on each seaframe variant. Our Prior Recommendations We have reported extensively on the risks of proceeding with LCS procurements without the requisite knowledge provided through adequate testing. In 2013 and 2014, we concluded that the Navy continued to make further investment decisions in the seaframes and mission packages in the absence of key information.
In these reports, we identified that, until the Navy completes operational testing, it could invest approximately $34 billion (in 2010 dollars) for up to 52 seaframes and 64 mission packages that may not provide a militarily useful capability. We also found in 2013 and 2014 that unknowns persisted with the Independence variant given that it had not completed the same testing as the Freedom variant. We recommended that the Navy re-evaluate its business case for LCS and conduct a number of operational test events on both variants prior to making a decision to contract for more ships, including the following: Deploying to a forward overseas location. The Freedom variant has deployed overseas twice; the Independence variant has not yet deployed. Completing rough water, ship shock, and total ship survivability testing. Both variants have now completed rough water trials; the Freedom variant completed total ship survivability testing in 2014, but the Independence variant has not yet conducted this testing. Neither ship will complete full-ship shock trials until 2016. Completing initial operational testing and evaluation of the SUW mission package on the Freedom variant and the MCM mission package on the Independence variant. The Navy completed operational testing of the SUW mission package on the Freedom variant, but has not completed operational testing of the MCM mission package on the Independence variant. DOD largely disagreed with these recommendations, citing the business imperative of not slowing down production of the seaframes. We believe that while the pricing of the seaframes is important, there is greater risk in awarding additional contracts before key knowledge is gained about the capabilities and operational concepts of the LCS. We also recommended in 2013 that the Navy report to Congress on the relative advantages and disadvantages of the two seaframe variants.
We recommended that the Navy present to Congress a comparison of the capabilities of the two variants in performing each mission because we had found that the officers in the fleets—the end users of the ships—said that they believed there were advantages and disadvantages to the two designs. Congress directed the Navy in the National Defense Authorization Act for Fiscal Year 2014 to provide additional information on some of the risk areas we identified. The Navy provided Congress with a report in May 2014 assessing the expected survivability attributes and the concept of operations for the ships, but in terms of comparing the two variants the Navy essentially suggested that, since the two variants are built to the same requirements, they perform the same way. The Navy did not present a more detailed comparison that would address our recommendation. We believe that completing this type of analysis would still be valuable to understanding differences in performance between the seaframes. Survivability and Lethality Requirements Less Than Other Combatants, and Have Been Reduced over Time The Navy designed LCS with survivability and lethality capabilities that are not aligned with the projected operational environment in which the ship will operate, and over time it has lessened or removed some survivability and lethality requirements. The Navy's original operational concept envisioned LCS as requiring fewer survivability and lethality features than other surface combatants, which would in turn make LCS less costly than other surface combatants. Over time the Navy has further reduced some survivability and lethality requirements, making LCS less survivable and lethal than it was initially envisioned. In response, the Navy continues to refine its operational concepts for LCS. Specific details about changes to these requirements were redacted from this report because they are classified.
The Flight 0+ CDD defines the survivability capabilities required after the ship takes a hit, rather than stating specific design requirements as is the case in the earlier Flight 0 CDD. There are three specific design features that would enhance LCS's survivability that are identified in the Flight 0 CDD, but not in the Flight 0+ CDD. Officials from the Office of the Chief of Naval Operations (OPNAV), who are the resource sponsors for the LCS program, stated that these changes were made early on to save cost, and in one instance weight onboard the ship. Specific differences in survivability requirements between the 2004 Flight 0 and the 2010 Flight 0+ CDDs and details about changes to LCS requirements were redacted from this report because they are classified. Since 2004, the Navy has also reduced some LCS lethality requirements. Our analysis shows that the poor performance of some systems might have contributed to this decision. Additional details on these changes are classified. To compensate for any gaps in the ship's survivability and lethality capabilities, the Navy continues to redefine the concept of operations (CONOPS) for LCS. We reported in 2013 that the Navy had made a number of changes to descriptions of how the LCS might be employed and the capabilities it would bring to the warfighter. We found that documentation developed early on in the program had very optimistic assumptions of where and how LCS could be used, as compared with more current sources, but these assumptions have been lessened over time. By redefining LCS CONOPS, the Navy can help ensure that LCS will be in harm's way less frequently, which could compensate for the ship's susceptibility and vulnerability without more costly materiel changes to the ship. While pragmatic, this approach can limit the ship's utility in the full scope of potential operations and can require more capable ships to be tasked to defend LCS instead of performing other missions.
LCS was originally planned to free up more costly ships to perform more complicated missions; partnering LCS with ships providing defensive protection limits the Navy's ability to achieve these efficiencies. Additional details on these CONOPS changes are classified. Recent SUW Testing Inadequate to Determine If LCS Meets Its Requirements On April 17, 2014, the Navy completed operational testing of the LCS's SUW mission package, employing an Increment 2 mission package onboard USS Fort Worth (LCS 3). During this test, the ship and its embarked helicopter demonstrated that it could meet the interim requirement for this increment. In prior live SUW test events, LCS did not demonstrate that it could kill all the required targets. Specific details about test events and results were redacted from this report because they are classified. While the April 2014 test proved successful, further testing is needed to demonstrate that both variants of LCS can meet all SUW requirements—incremental and threshold—in all the threat environments in which the ships will operate. This is due to the following considerations: LCS did not demonstrate it could meet all its requirements in these tests; Testing only demonstrated that LCS could meet its requirements in one operational test event and is inadequate to provide statistical confidence in the ship's performance, as the test environment was not operationally stressing and the crew received extensive training and practice; Only one of the two variants was tested; and Meeting threshold capability will require missile integration. These issues are discussed below. LCS Did Not Demonstrate It Could Meet All Interim SUW Requirements Recent operational testing has revealed that a Freedom variant LCS was not able to meet all its interim lethality requirements. Specific details of these shortcomings were redacted because the information is classified.
SUW Testing Inadequate to Provide Statistical Confidence in Performance DOT&E officials told us that the amount of live testing done to date on the LCS SUW mission package is insufficient to provide statistical confidence that LCS can consistently demonstrate this level of performance. The DOD acquisition instruction states that scientific test and analysis techniques—which DOT&E states includes statistically based measures—should be employed in a test program and provide required data to characterize system behavior. The amount of testing to date is consistent with the approved test plan, but DOT&E stated that the tests were constrained due to the Navy not providing the funding and resources to allow for further testing. Due to the limited number of live operational test runs, DOT&E believes the existing evidence is not sufficient, nor does it predict LCS's performance in varied environments (e.g., bad weather) or provide sufficient confidence that LCS could repeat this performance in other tests. So, while there is no requirement in the test plan to achieve statistical confidence, as DOT&E states, the sparse data available do not allow a strong statement about LCS's ability to meet requirements in other operational scenarios. As an illustration of this point, the same ship and crew attempted the same operational test event one week prior to the successful run and were unsuccessful before the test event was cancelled due to range restrictions. As such, DOT&E has not yet made its determination that LCS is operationally effective in performing the SUW mission because of a stated lack of available data to support such an assessment. The Navy's operational test organization has made its determination about effectiveness, which is documented in its final report. Further information about its assessment is classified.
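To illustrate why so few live test runs limit statistical confidence, consider a simple, hypothetical calculation that is not drawn from the GAO report or the Navy's test plans: when every one of n test runs succeeds, the exact one-sided 95 percent lower confidence bound on the underlying success probability p solves p**n = 0.05, so a single successful run is consistent with a true success rate as low as 5 percent.

```python
# Hypothetical illustration (not from the GAO report): confidence bounds
# from a small number of all-successful test runs. For n successes in
# n attempts, the one-sided 95% lower confidence bound on the true
# success probability p is the value where p**n = 0.05 (the exact
# Clopper-Pearson bound for the all-successes case).

def lower_bound_95(n_successful_runs: int) -> float:
    """One-sided 95% lower confidence bound on success probability,
    given that all n runs succeeded."""
    return 0.05 ** (1.0 / n_successful_runs)

for n in (1, 3, 10):
    print(f"{n} successful run(s): p >= {lower_bound_95(n):.3f} at 95% confidence")
```

With one successful run the bound is only 0.050; even ten consecutive successes would bound the true success rate at roughly 0.741, which is why sparse data support only weak statements about repeatable performance.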
Further, while operational testing did demonstrate that LCS could defeat the number of Fast Inshore Attack Craft (FIAC) specified in the interim requirement, range safety considerations made this testing less operationally stressing than a real-world encounter. Additional information about these issues was redacted because it contained classified information. Operational Testing Limited to Freedom Variant; None to Date on the Independence Variant This operational testing of SUW was conducted using only a Freedom variant LCS. While the guns are the same on the two variants and in the mission package, the gunfire control systems, sensors, consoles, and some enabling software are all different, as are the gun placements and ship handling characteristics. As such, testing on a Freedom variant cannot be used to predict performance of the SUW mission package on an Independence variant. The Navy will not operationally test the initial SUW mission package on the Independence variant until September 2015. As shown in table 4, most of the SUW operational testing on this variant is in the future and program officials told us that the Navy is still gaining an understanding of the effectiveness of the 57mm gun weapon system on the Independence variant. For example, DOT&E told us that in a developmental test in January 2015 the LCS 2 had difficulty achieving a hit on a stationary target with the 57mm gun. Additional details about Independence variant testing were redacted because they contained classified information. LCS Will Not Demonstrate It Can Meet Full Threshold SUW Performance until a Missile Is Integrated LCS will not demonstrate threshold lethality requirements outlined in the CDD until 2017, at the earliest, after the Navy installs and tests the SUW mission package with the Longbow-Hellfire missiles. Since Longbow-Hellfire has not yet been integrated with LCS, the actual performance of the missile on LCS remains unknown.
In November 2013, the missile contractor demonstrated that a Longbow-Hellfire missile could be modified to fire vertically from a ship rather than horizontally from a helicopter, and the Navy continues to conduct testing with DOT&E, including 2014 testing examining the lethality of Longbow-Hellfire against small boats, though this testing did not use moving sea-based targets. A key challenge in integrating the missile with LCS is managing its weight and accompanying equipment on the ship, given the weight and center of gravity challenges on which we have previously reported. Further, software integration with the combat management system will be required. An analysis of the capability of this missile was redacted because it contained classified content. The Navy Does Not Yet Fully Understand the Extent to Which LCS Will Meet Current Survivability Requirements While the Navy has conducted a variety of surrogate tests and simulations, it has not yet demonstrated whether LCS meets its survivability requirements. As a result, significant unknowns remain regarding the vulnerability, susceptibility, and recoverability of LCS. According to current plans, the Navy will not have completed its test plan to demonstrate the survivability of LCS until approximately 2018, at which point it plans to have more than 24 ships either in the fleet or under construction. The Navy has not fully demonstrated the vulnerability of the seaframes, the susceptibility of the ship to air threats and computer penetrations, or how the crew will respond to damage. If future survivability concerns are identified, the Navy may have to again revise the LCS warfighting CONOPS to compensate for these issues. This could also have implications for the proposed modified LCS, since the Navy plans to leverage the LCS designs.
The main risks pertain to the following issues: Vulnerability of the ships due to the use of aluminum and a novel hullform on the Independence variant that has not been fully tested; Air warfare capability; Cybersecurity; and Recoverability of the ships not fully demonstrated. These issues are discussed below. Navy Does Not Fully Understand LCS Vulnerability, and Some Knowledge Gaps Will Remain Unresolved on the Independence Variant The two LCS variants are new ship designs, and the Independence variant uses an aluminum alloy and a trimaran hullform that is unlike other ships in the Navy's inventory. Therefore, the Navy needed to gather information to characterize how these ships would react to various types of damage. The Navy conducted modeling and simulation activities and surrogate testing, including the following: Weapons effects tests conducted on two decommissioned Finnish aluminum mono-hulled fast-attack craft; Fire tests on representative LCS bulkheads and fire insulation; Underwater explosion testing of representative panels of ship structure; Testing of stress loading on representative Independence variant structures; Penetration tests of representative Independence variant structures; and Furnace testing of Independence class types of aluminum to determine the response of aluminum to heat and stress loading. Further, the Navy is using computer models and simulations to predict how LCS might react to damage. Subject matter experts in weapons effects, damage control, fire dynamics, and other fields will then analyze the model predictions of primary and secondary damage caused by various weapons. These experts will update and expand on the model predictions to determine how cascading damage and crew response to such damage affect mission capability. Their interpretation of the modeling and simulation results, coupled with lessons learned from other testing and real world events, forms the basis of the assessment of whether the LCS meets its survivability requirements.
However, the Navy still lacks robust knowledge in several vulnerability areas, largely related to how fire will affect the aluminum structure of both variants, and how underwater explosions will affect the aluminum trimaran Independence variant. The Navy does not plan to complete its validation and accreditation of the models used to simulate damage until 2017, and its technical experts will not complete their analysis and issue their final survivability assessment reports until approximately 2018. Navy officials stated that until that time the Navy's technical warrant holders cannot certify that the two variants meet their survivability requirements and that no further modifications to the design or operational CONOPS are necessary. Navy officials further stated that these reports are typically not finalized until several years after delivery, and cited examples of recent shipbuilding programs including CVN 78, DDG 1000, LPD 17, and LHA 6. However, the lead LCS seaframes were delivered in 2008 and 2009, respectively, meaning that the Navy does not expect to finalize these reports until approximately a decade after delivery. Additional test activities and simulations still remain to be done before the Navy can better characterize the ships' vulnerability, and the Navy does not plan to fully assess some potential vulnerabilities with the trimaran hull. Knowledge of the Vulnerability of Aluminum Incomplete The Navy still lacks knowledge of how aluminum will react to fire and some blast events, which it does not expect to better understand until it completes a live-fire test event in late 2015. The Freedom variant design has an aluminum deckhouse mated to a steel hull, while the Independence variant is entirely made of aluminum with no steel structure.
Historically, many Navy ships have been made largely out of steel, though several classes—recent examples include the CG 47 Ticonderoga class cruisers and the FFG 7 Oliver Hazard Perry class frigates—have utilized an aluminum deckhouse. The lower density of aluminum provides advantages in that it is lighter than steel, which helps LCS achieve its high speed requirement. However, aluminum is also known to lose stiffness more quickly than steel at elevated temperatures in a fire, and the Navy has identified that this phenomenon needs further study on LCS. The Independence variant uses an alloy of aluminum that has not been used in prior Navy ship construction, so accumulated Navy knowledge about how the aluminum on older ships reacts to damage cannot be applied wholesale to the Independence variant. In addition, both variants—though more so the Independence variant—use extruded aluminum planks—complex shapes that are formed by pushing heated aluminum through a die using a hydraulic press. While extrusions have industrial advantages, the Navy has no experience with the damage responses from extruded planks. One shipyard identified this as a knowledge gap in a 2004 report to the Navy, stating that the computer models it used to simulate damage did not account for the use of this type of structure. The Navy plans to conduct live-fire testing on a full-scale mock-up of a section of an Independence variant deckhouse in late 2015 to help provide additional data to mitigate some of these knowledge gaps. This mock-up is called the Multi-Compartment Surrogate, and the Navy plans to test it with internal blasts, fragmentation, and fire. Vulnerability of the Independence Variant Hull to Underwater Shocks Is Unknown The Navy has knowledge gaps related to the underwater shock vulnerability of the trimaran shape of the Independence variant, in part because of a lack of experience with the hullform in other Navy ships. 
Specifically, technical experts from the Naval Sea Systems Command have stated that they do not fully understand how the hull would react to whipping caused by an underwater explosion. Underwater explosions create a shock wave and a highly compressed gas bubble that expands and contracts. This can cause a type of vertical or horizontal flexing of the ship called a whipping force. The severity of this whipping force and the resulting damage is a function of the size of the explosion and the distance from the hull, among other factors, and not all shocks lead to a whipping response. If the whipping is significant enough, this vibration could cause catastrophic damage and may cause a ship to break apart. Naval Sea Systems Command technical experts have identified a lack of experimental data on the whipping response of a trimaran hullform. These technical experts stated that there is currently no algorithm in existence to model how this hull type would perform, and stated that there is no plan to invest in such an algorithm or a physical hull model for testing since the LCS CDD has no explicit requirement for LCS to survive a whipping response, though it does have an underwater shock requirement. DOT&E has stated that a case could be made that there is an inherent whipping requirement because LCS is supposed to be able to support an orderly evacuation after a mine or torpedo encounter, which would not be possible if the ship were to break apart. The Navy’s existing model has successfully been used to predict the whipping response of other conventional Navy hullforms and is thus being used for LCS modeling. While Naval Sea Systems Command technical experts state that this model contains the requisite physics to model the whipping of a trimaran hullform, they also point out that it has not been validated for this type of analysis and that there is no test data to correlate the results, which is why DOT&E believes a whipping surrogate test is needed. 
Further, the Navy has not yet conducted a Full Ship Shock Trial on both ships. This test—where both variants will be subject to an underwater explosion and assessed for damage—is planned for 2016. This trial should provide some test data on how the Independence variant responds to a shock event. Though this test will not induce severe whipping motions, the Navy plans to use the data to compare the results with its model predictions. Ship's Performance in Rough Seas Not Fully Understood, with Both Variants Suffering Damage in Trial The vulnerability of the ship's hulls to various sea conditions also remains unknown. Due to the dynamic nature of waves, the Navy cannot rely on modeling and simulation alone to provide an accurate assessment of a ship's performance in rough seas. The Navy conducts seakeeping and structural loads trials to determine with instrumentation how ships will respond to different sea conditions. Seakeeping trials are used to evaluate the way a ship behaves in various sea conditions, while structural loads trials are used to evaluate the effect of waves on the ship's structure to determine if it is adequate to withstand damage. These tests are planned and executed by the Naval Surface Warfare Center Carderock Division's Ship Systems Engineering Station in Philadelphia. The Freedom variant has deployed twice and, as a result, has sailed across the Pacific Ocean. In addition, in March 2015 this variant completed a seakeeping and structural load trial in rough water to define its performance characteristics. The Navy started this testing in 2011 on LCS 1, but it was suspended when a hull crack and water leak were found. The Navy has not yet provided an analysis report or details from this event or the subsequent 2015 test; Navy officials state that the report is still being written. For the Independence variant, the Navy conducted a seakeeping and structural loads trial event on LCS 2 in January-February 2014.
In this trial the ship was subjected to rough water conditions up to and including sea state 6, defined as having average waves of 8-11 feet and winds of 22-27 knots. This test event—dubbed Phase 2—followed up on earlier Phase 1 testing in lower sea states conducted in March 2011 and May 2012. According to the Navy, neither the final test report for the Phase 1 seakeeping trials nor the final test report for the Phase 2 seakeeping and structural loads trial for this variant has been finalized, despite these trials occurring several years ago. According to LCS program officials, the ship tested in the Phase 2 trials sustained damage during the testing. The Navy has not yet provided us with the analysis reports, stating that the report is still undergoing revisions. Consequently, we are unable to assess the significance or cause of this damage. DOT&E has reported that this testing resulted in weld cracking to structural stanchions in the mission package bay and has led to weight limitations on the launch, handling, and recovery equipment for the mission packages on both LCS 2 and LCS 4—although DOT&E also has not seen the test reports. Officials from the LCS program office initially told us that part of the reason for the delay in generating these reports is a disagreement between the program office and the technical study team from the Naval Surface Warfare Center as to the cause of the damage, with the program office citing that it does not believe the ship was adequately inspected prior to the trial. The Navy later stated in its technical comments that there was no disagreement, and that the Navy and the technical team agree that the damage identified after the trial resulted from quality control issues during construction, but cannot confirm that the damage occurred during the trial itself because no pre-trial inspection was conducted. As such, the current value of the data obtained from the rough water trial on the Independence variant is in question.
The program office has not sought additional assistance from an independent technical authority, such as a ship classification society, to help analyze the data and determine the vulnerability of the Independence hullform. Classification societies are often used to assess damage to in-service ships; for example, ABS conducted a similar analysis after USS Port Royal hit a coral reef, providing an independent review of the damage the ship sustained.
Air Defense and Cybersecurity Issues
Our classified report discussed issues with the air defense and cybersecurity of LCS. Additional information about the performance of LCS in air warfare testing and results from cybersecurity testing has been redacted because it contained classified information. The Navy will not be able to fully demonstrate LCS anti-air warfare capability until it completes two future activities: modeling in a high-fidelity computer simulation called the Probability of Raid Annihilation (PRA) Testbed, and live-fire testing onboard the Self Defense Test Ship. DOT&E also requires “lead ship testing,” which was to occur on LCS 5 and 6 but has now been postponed to LCS 7 and 8. The Navy needs to complete the testbed simulations and live-fire events to characterize LCS’s susceptibility to representative anti-ship cruise missile (ASCM) threats.
PRA Testbed: LCS anti-air warfare performance will be modeled through the PRA Testbed—a rigorous modeling and simulation environment using representative LCS combat system suite and weapons configurations. The Navy plans to use the PRA Testbed to conduct a full course of ASCM self-defense assessments, including simulations that would be costly or difficult to test with live targets. This testing was planned for fiscal year 2016 for the Independence variant and fiscal year 2018 for the Freedom variant, but according to DOT&E officials it has slipped to 2017-2018 for the Independence variant and 2018-2019 for the Freedom variant.
Self Defense Test Ship: The Navy also plans to conduct live end-to-end testing involving both LCS variants and the unmanned Self Defense Test Ship. This ship is remote controlled so it can be subjected to live-fire testing, and will be equipped with the same combat system equipment found on the two LCS variants. According to test plans, the Navy envisioned conducting Self Defense Test Ship tests using Independence class equipment between fiscal years 2015 and 2016 and Freedom class equipment in fiscal year 2016. This testing has slipped to fiscal year 2016 for the Independence variant and 2017 for the Freedom variant.
Recoverability of Ship Partially Demonstrated in Testing to Date
In 2014, the program office completed the Total Ship Survivability Trial onboard LCS 3, a Freedom variant ship. This test is an at-sea event with the ship’s crew in which damage from threat weapons is simulated. The test allows the Navy to collect information on how well the crew is able to use the installed firefighting and damage control systems to control damage and reconstitute the ship. The Navy plans to conduct this same test event on the Independence variant in fiscal year 2015. Program officials told us they were generally satisfied with the results of the test; we have not yet had the opportunity to review the final report, as it is still being finalized. This test is important because LCS’s small crew may limit damage control efforts following an attack if the damage were severe enough or took a long time to combat. According to DOT&E, the test highlighted significant vulnerabilities in the Freedom class design, and much of the ship’s mission capability was lost because of damage caused by the initial weapons effects or from the ensuing fire before the crew could respond. LCS documentation does not identify how many crew members would have to be lost to degrade the crew’s response capability.
Further discussion on this topic is classified and is not included in this report. DOT&E also stated that LCS does not have sufficient redundancy to recover lost capability. For example, LCS has limited redundancy in its power supply systems, and DOT&E officials told us that the ship does not have the ability to employ auxiliary casualty power systems as crews can on other Navy ships to recover in the event of major damage. According to Naval Sea Systems Command documentation, a casualty power system allows the ship’s crew to make temporary power connections to limited equipment if the installed power connections are damaged, allowing this equipment to keep drawing on the installed shipboard power generation systems. Such a system can facilitate keeping the ship afloat, extinguishing shipboard fires, propelling the ship out of a danger area, or maintaining communications and a limited self-defense capability. LCS relies instead on separated and redundant battery backup power supplies, and the ship build specifications indicate no casualty power equipment on board. Battery backup power may not enable the ship to operate as long as harnessing the ship’s main power generators would allow. Navy officials stated that modern Navy ship designs, including DDG 1000 and LPD 17, do not use a casualty power system.
Compressed Time Frame for Incorporating Major Program Changes
The Navy is currently planning significant changes to the LCS program under a compressed time frame, which provides little opportunity for incorporating knowledge from the results of its survivability and lethality assessments. The Navy completed its Small Surface Combatant Study and only recently provided it to us. The Navy used analysis from this study in deciding to modify the two LCS variants to be more survivable and lethal. In December 2014, the former Secretary of Defense announced that he approved the Navy’s recommendation.
The former Secretary of Defense further directed the Navy to provide his office by May 1, 2015: (1) an acquisition strategy to support the design and procurement of the new small surface combatant no later than fiscal year 2019 and sooner if possible; and (2) an assessment of the cost and feasibility of back-fitting the existing LCS with enhanced survivability and lethality systems. In addition, he also required the Navy to submit to the Office of the Secretary of Defense a service cost position and a plan to control overall program costs. The Secretary of Defense assumed a total quantity of 52 LCS and small surface combatants, but left the decision on the final number and mix of ships to the discretion of the Navy. However, according to Navy documentation, the Navy is notionally planning on 20 modified LCS—the costs of which have not been fully determined. The House report on the National Defense Authorization Act for Fiscal Year 2015 includes a provision that we analyze the Small Surface Combatant Study and the Navy’s plans moving forward; we just began this work after receiving the Navy’s study, and we plan on issuing a separate report on these issues. As part of this review we will also assess the acquisition strategy and cost and feasibility assessments recently submitted to the Secretary of Defense. According to statements by senior Navy officials, the modified LCS will be redesignated as a frigate. An initial fact sheet issued by the Navy shows that these ships will still be able to carry either the SUW or ASW mission packages, but the Navy will add additional combat capability by including a towed multifunction sonar array, an over-the-horizon missile, and 25mm guns to the seaframes. The Navy is continuing to refine its plans, but initial information, including a recent DOT&E report, indicates that the improvements will largely focus on improving lethality. 
However, DOT&E stated that the improvements to the ships would not be sufficient to overcome the vulnerability features of LCS. According to the Navy, modifying the LCS allows it to support the current industrial base with no break in the production schedule. There are 20 seaframes currently under contract with the shipyards—10 at each shipyard—in various stages of construction, in addition to any other planned work at the shipyards. According to the current schedule, the shipyards will be building LCSs already under contract until approximately 2019, and the Navy plans to award additional contracts before transitioning to the frigate. Further, Navy documentation identifies multi-month delays to almost all of the seaframes currently under construction at both shipyards. These late deliveries may prolong the time required for the shipyards to complete work under the existing LCS contracts, meaning that any future delays associated with the introduction of the modified LCS may not impact the workload in the shipyards. As shown in figure 2, the Navy’s proposed acquisition schedule will result in the Navy making key program decisions without the benefit of knowledge gained through ongoing survivability and lethality assessments. For example, the Navy plans to determine its acquisition strategy for the new small surface combatant by May 2015, exercise options to begin upgrading current LCS ships in fiscal year 2017, and buy the lead small surface combatant (modified LCS) by 2019. According to this schedule, the Navy will need to begin estimating and planning the 2019 budget in 2017. However, as previously mentioned, at that time the Navy will not yet have completed plans to fully demonstrate the survivability of both variants or completed testing to demonstrate that both variants meet threshold lethality requirements.
Unknowns about the Independence variant are particularly significant, as that ship has conducted less operational testing to date than the Freedom variant and has not yet deployed overseas. Nevertheless, the Navy still plans to proceed with equal numbers of each variant, as reflected in its 2010 block buy decision.
Conclusions
The actual lethality and survivability performance of LCS is still largely unproven through realistic testing, 6 years after delivery of the lead ships, with 2 additional ships delivered and 20 more ships under construction and/or under contract. The LCS program was intended to be lower in cost than more capable multi-mission surface combatants, and the Navy limited requirements for susceptibility and vulnerability to help achieve that end. Since the program was initially authorized and funded, costs have increased and the Navy has further reduced the ship’s lethality and survivability requirements. While current plans to protect LCS with more capable ships in higher-threat environments may be a more cost-effective solution to addressing capability limitations, these changes also reflect an ever-shrinking set of situations in which LCS can operate without placing added demands on the larger combatants. Further, although the Navy has significant gaps in its knowledge of the Independence variant’s lethality and survivability capabilities, and has no plans to seek additional analysis from an independent technical authority to resolve questions on its rough water trial report, it continues with plans to buy equal numbers of both variants. The Navy is quickly embarking on an effort to redesign the LCS to address lethality and survivability concerns. This effort constitutes a major program change.
Direction from the former Secretary of Defense to develop an acquisition strategy for the new small surface combatant— and to assess back-fitting some systems on the current LCS—by May 2015 with the first ship procured in 2019 leaves little time to develop important knowledge about current limitations of the ships in these areas. Consequently, the Navy risks leaving some limitations unaddressed, and potentially inefficiently modifying these designs while testing continues. The Secretary of Defense conveyed a sense of urgency by setting this time frame of May 2015, only 5 months after he approved the Navy’s recommendation. The Navy has cited maintaining the industrial base and avoiding a production break as an imperative for its decision-making. Yet, current work in the shipyards will continue through at least 2019. Each shipyard currently has 10 LCS in various stages of construction—some having not even started construction—and most are experiencing schedule delays. The expedited time frame puts the Navy at risk for making decisions uninformed by more complete knowledge about not only the current LCS, but also about precisely what systems will be selected for the modified LCS. Importantly, the Navy will not have completed the survivability assessments for the two variants until 2018. Until that time, the Navy cannot be assured that it fully understands the survivability of the two ships or what capabilities might need to be augmented. Making premature decisions may exacerbate the situation the Navy is in today—with an inventory of ships and systems that do not perform as initially envisioned. At the same time, Congress is being asked to fund three LCS in fiscal year 2016—ships that DOD has acknowledged do not meet its needs. At this point, Congress must consider this funding request before the Navy has completed its new acquisition strategy or assessed the feasibility of backfitting certain upgrades onto the existing ships. 
While we are making recommendations in this report, we also believe that the recommendations we have made in prior reports with regard to the LCS program still stand as reasonable actions that the Navy should take to improve the program.
Matters for Congressional Consideration
To ensure that the Navy has provided a clear direction for the future of the program before committing funding to construct additional ships, Congress should consider the following options:
1. In the near term, restrict funding for construction of the three LCS seaframes requested in fiscal year 2016 until the Navy submits to Congress and GAO:
a. An acquisition strategy for the modified LCS that has been approved by the Secretary of Defense;
b. The Navy’s plans to backfit the existing LCS and an analysis of the cost and engineering feasibility and risks of doing so; and
c. A completed rough water trial report for both variants.
2. Not funding some or all of the Navy’s request in fiscal year 2016 for the three LCS seaframes, given the Navy’s lack of knowledge of the ships’ survivability and lethality capability.
3. Further, given the uncertainties over the long term about the ships’ survivability and lethality and proposed changes to future ships, consider not fully funding the Navy’s request for future LCS ships beyond fiscal year 2016, pending the completion and analysis of the final survivability assessments for both variants due in 2018.
Recommendations for Executive Action
To ensure that the Navy has a sound acquisition approach moving forward, we recommend that the Secretary of Defense:
1. Ensure that the commitment to buy 20 modified LCS remains an affordable priority given other acquisition needs;
2. Ensure that the Navy’s acquisition strategy for the modified LCS does not place industrial base concerns ahead of demonstrating the ship’s lethality, survivability, and affordability; and
3.
Require the Navy to solicit an independent technical assessment from an organization such as a ship classification society on the survivability of the Independence variant seaframe and its ability to meet its applicable requirements.
To ensure that the program has requirements that are testable and measurable and to improve the realism of LCS operational testing, we recommend that the Secretary of Defense direct the Secretary of the Navy to:
1. Investigate resourcing and conducting more operationally stressing SUW mission package testing onboard LCS, to include testing in a clutter environment and in diverse weather and tactical scenarios, to help ensure that the ships can operate effectively in their intended environment.
In the classified version of this report, we make a separate recommendation to the Secretary of the Navy related to defining a currently vague SUW requirement; we redacted that recommendation because it contained classified information.
Agency Comments and Our Evaluation
We provided a draft of this report to DOD for review and comment. In its written comments, which are reprinted in appendix II of this report, DOD concurred with two of our recommendations, partially concurred with two others, and did not concur with one recommendation. Regarding our first two recommendations, the department agreed that the Secretary of Defense would ensure that the Navy has a sound acquisition strategy moving forward in procuring the modified LCS, and stated that the Secretary will ensure that industrial base concerns are balanced against cost, schedule, and fleet requirements. Our draft report initially stated that the Navy was planning to buy 19 modified LCS, based on our reading of the documentation available to us at that time. We have since updated our draft to reflect 20 modified LCS, as recommended by the Navy.
In addition, although DOD’s response is consistent with the intent of our recommendations, the department indicates that it will conduct its review of the Navy’s approach in advance of preparing the fiscal year 2017 budget. Given that the fiscal year 2017 budget will be submitted in only a few months, we are concerned that the Secretary might not have adequate information prior to making funding decisions for the modified LCS. For example, final survivability assessments for both variants will not be issued until 2018—after acquisition decisions for the modified LCS are planned. We will continue to monitor this issue as part of our ongoing work on the Navy’s small surface combatant. DOD did not concur with our recommendation to solicit an independent technical assessment of the survivability of the Independence variant from an organization such as a ship classification society, stating that such an organization could not provide an independent look and would not have the technical competence to perform a threat weapon-based assessment of the survivability of any Navy ship. The intent of our recommendation was for the Navy to solicit an independent assessment of the structural damage sustained by LCS 2 during rough water trials—but not necessarily to assess the ability of the ship to sustain damage from weapons. With that in mind, we suggested a ship classification society as one option, but it is not the only option. The Navy could contract with an independent naval architecture firm or create an internal independent review board to assess the damage and identify a path forward. We believe that ship classification societies would be a viable option because they are not currently involved with the LCS program or other Navy surface combatant acquisition programs.
DOD stated that soliciting such an evaluation by the American Bureau of Shipping (ABS)— a ship classification society—would not be an independent look because both LCS seaframe variants were originally designed, built, and classed to ABS standards. However, we note that the relationship between ABS and the Navy ended in 2012. In addition, there are 11 other classification societies that are members of the International Association of Classification Societies. This Association states that classification societies are independent, self-regulating, and externally audited and have no commercial ownership in ship construction. Classification societies inspect and assess the structure of ships to ensure safety and adherence to standards, including after a ship sustains damage. Since LCS 2 sustained damage and given the disagreements between the LCS program office and its technical authority as to the cause of this damage, we continue to believe that soliciting an independent third party is a reasonable recommendation. DOD concurred with our classified recommendation related to defining a currently vague requirement and seeking Joint Requirements Oversight Council validation before operational testing of the remaining SUW mission package increments. DOD partially concurred with our recommendation related to funding more operationally stressing operational tests to include clutter and diverse weather and tactical scenarios. DOD stated that it will provide sufficient test resources, but stated that it does not believe that testing “every aspect” of weather and tactics is necessary. We agree that it is not necessary to test every type of weather and tactical situation, and urge DOD to ensure that testing identified by DOT&E is completed as necessary to fully demonstrate the performance parameters that are included in LCS test plans. The Navy also separately provided over 80 technical comments on our draft report. 
We reconciled the Navy’s technical comments with evidence we had from discussions with, and documentation from, officials from the Navy, DOT&E, and the Commander, Operational Test and Evaluation Force. We requested additional documentation to support some of the Navy’s comments. We incorporated the Navy’s comments as appropriate, such as to provide additional context in the report, but in some cases the Navy suggested changes or deletions that were not supported by the preponderance of evidence or that were based on a difference of opinion rather than fact. In those instances, we did not make the suggested changes. In all, we incorporated many of the Navy’s comments and, in doing so, found that the message of our report remained the same. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of the Navy, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at 202-512-4841 or mackinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of the report. GAO staff who made key contributions to this report are listed in appendix III.
Appendix I: Scope and Methodology
Although this report is dated December 2015, our findings are current as of July 2015 to be consistent with the classified report issued that month. To identify the extent to which Littoral Combat Ship (LCS) survivability and lethality requirements have changed over time, if at all, we analyzed the LCS capability development documents (CDD), which dictate the performance requirements for the ship and mission packages. We analyzed both the Flight 0 CDD dated 2004 and the updated Flight 0+ revision dated 2010, and compared the two to identify areas, if any, where LCS requirements might have changed.
We also reviewed the Navy’s Required Operational Capabilities and Projected Operating Environment for LCS Class Ships instruction, which stipulates where LCS was to be employed. We analyzed the LCS build specifications for both variants for Flight 0 and Flight 0+. To determine the extent to which the warfighting concept of operations (CONOPS) continues to evolve, we analyzed the two LCS warfighting concept of operations (2007 and 2011) and spoke with an official from the Navy’s Fleet Forces Command responsible for developing the third revision of the LCS warfighting CONOPS. We attended a portion of the LCS wargame conducted in March 2014. We also analyzed the CDDs for other Navy surface ships to make comparisons with LCS requirements. We reviewed relevant Navy policies stipulating general survivability and shock requirements for ships, and interviewed Navy officials including from the Program Executive Office for LCS for both the seaframes and the mission packages, and obtained written responses from the Office of the Chief of Naval Operations LCS branch. We also interviewed relevant DOT&E officials. To assess the extent to which LCS meets its current survivability requirements, we analyzed Navy and DOT&E test reports for both developmental and operational test events on LCS and reviewed the LCS test and evaluation master plan and the Navy’s Capstone Enterprise Air Warfare Ship Self-Defense test and evaluation master plan. We analyzed the LCS build specifications for both variants for Flight 0 and Flight 0+ ships, and reviewed relevant sections of the American Bureau of Shipping’s Rules for Building and Classing Naval Vessels, which are the technical rules that were used to guide development of the LCS designs. 
We also analyzed the USS Independence Capabilities and Limitations document (2010) and the USS Freedom Combat System Employment Guide, as well as contractor-developed Vulnerability Analysis Reports and Detail Design Integrated Survivability Assessment Reports, and Navy instructions stipulating ship survivability requirements, including OPNAVINST 9070.1, 9070.1A, and OPNAVINST 9072.2A. We reviewed the Total Ship Survivability Trial test plan and also observed a portion of this test conducted on USS Fort Worth. We reviewed various DOT&E documents, including the Early Fielding Report for LCS, and operational test results from anti-air warfare weapons on other ships. We also analyzed the Navy’s USS Independence Seakeeping and Structural Loads Trial Phase II report. We interviewed relevant Navy officials, including from the Program Executive Office LCS for both the seaframes and the mission packages, and Navy technical experts from the Naval Surface Warfare Center Carderock Division and the Naval Research Lab, including technical warrant holders responsible for ship vulnerability and shock tolerances. We obtained written responses from the Program Executive Office Integrated Warfare Systems. We also discussed survivability and crew training issues with LCS Squadron One, Navy Fleet Forces Command, and Navy Surface Forces Pacific officials. To assess the extent to which LCS meets its current lethality requirements, we limited our assessment to the lethality of the core seaframe and the SUW mission package, because the MCM and ASW mission packages bring little or no offensive capability and both rely on the core seaframe systems for self-defense. The ASW mission package will carry a helicopter that can drop torpedoes, but this is a well-characterized capability currently used in the fleet, so we did not assess its effectiveness. Further, the Navy has yet to field a production representative ASW mission package to evaluate.
To assess LCS’s performance, we analyzed Navy Commander, Operational Test and Evaluation Force reports including the SUW initial operational test and evaluation report, and Navy Surface Warfare Center Corona Division test analysis reports from LCS developmental test events and DOT&E reports. We reviewed the LCS Live Fire Test and Evaluation Management Plan, and the Navy’s 57mm and 30mm ammunition Live Fire Test and Evaluation Management Plans. We also analyzed the USS Independence Capabilities and Limitations document (2010) and the USS Freedom Combat System Employment Guide. We reviewed the Navy’s Damage Control Book for LCS 3 and the LCS 1 Repair Party Manual. We also interviewed relevant Navy officials, including from the Program Executive Offices for LCS for both the seaframes and the mission packages; the Naval Surface Warfare Center Corona Division; and Navy Commander of Operational Test and Evaluation Force. We also interviewed relevant DOT&E officials. We obtained written responses from the Office of the Chief of Naval Operations LCS branch and from the Program Executive Office Integrated Warfare Systems. We also reviewed the LCS 2 Special Trial report. To assess the recent decisions pertaining to upcoming changes to the program in response to the Secretary of Defense’s concerns with LCS lethality and survivability, we analyzed available Navy documentation on the proposed modified LCS. We also met with the small surface combatant study team. We were not provided with a copy of the study team’s report in time to include an analysis of that document in this report. We conducted this performance audit from June 2014 to July 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Comments from the Department of Defense
Appendix III: GAO Contact and Staff Acknowledgments
GAO Contacts
Staff Acknowledgments
In addition to the contact named above, Diana Moldafsky (Assistant Director), Greg Campbell, Laurier Fish, Laura Greifner, Kristine Hassinger, C. James Madar, Kenneth Patton, John Rastler, Amie Steele, Roxanna Sun, and Hai Tran made key contributions to this report.
GAO has reported extensively on LCS—an over $34 billion Navy program (in 2010 dollars) consisting of two different ships and interchangeable mission packages. In February 2014, the Secretary of Defense, citing survivability concerns, directed the Navy to assess design alternatives for a possible LCS replacement. House and Senate reports for the National Defense Authorization Act for Fiscal Year 2014 included a provision for GAO to analyze LCS survivability. Based on congressional interest, GAO also examined lethality. This report examines (1) the extent to which LCS survivability and lethality requirements are aligned with the ship's threat environments and whether they have changed, and (2 and 3) whether LCS meets its current survivability and lethality requirements. GAO also (4) assessed recent decisions pertaining to the Navy's plans to address the Secretary of Defense's concerns. GAO analyzed relevant documents and interviewed Navy officials. The lethality and survivability of the Littoral Combat Ship (LCS) is still largely unproven, 6 years after delivery of the lead ships. LCS was designed with reduced requirements as compared to other surface combatants, and the Navy has since lowered several survivability and lethality requirements and removed several design features—making the ship both less survivable in its expected threat environments and less lethal than initially planned. The Navy is compensating for this by redefining how it plans to operate the ships. In 2014, the Navy conducted its first operational test of an early increment of the surface warfare mission package on a Freedom variant LCS, demonstrating that LCS could meet an interim lethality requirement. The Navy declared LCS operationally effective. However, the Navy's test report stated that the ship did not meet some key requirements.
Further, the Department of Defense's Director of Operational Test and Evaluation has stated that there is insufficient data to provide statistical confidence that LCS can meet its lethality requirements in future testing or operations, and further testing is needed to demonstrate both variants can meet requirements in varied threat environments. The Navy also has not yet demonstrated that LCS will achieve its survivability requirements, and does not plan to complete survivability assessments until 2018—after more than 24 ships are either in the fleet or under construction. The Navy has identified unknowns related to the use of aluminum and the hull of the Independence variant, and plans to conduct testing in these areas in 2015 and 2016. However, the Navy does not plan to fully determine how the Independence variant will react to an underwater explosion. This variant also sustained some damage in a trial in rough sea conditions, but the Navy is still assessing the cause and severity of the damage and GAO has not been provided with a copy of the test results. Results from air defense and cybersecurity testing also indicate concerns, but specific details are classified. In February 2014 the former Secretary of Defense directed the Navy to assess options for a small surface combatant with more survivability and combat capability than LCS. The Navy conducted a study and recommended modifying the LCS to add additional survivability and lethality features. After approving the Navy's recommendation, the former Secretary of Defense directed the Navy to submit a new acquisition strategy for a modified LCS for his approval. He also directed the Navy to assess the cost and feasibility of backfitting lethality and survivability enhancements on current LCS. 
Nevertheless, the Navy has established a new frigate program office to manage this program, and the Navy has requested $1.4 billion for three LCS in the fiscal year 2016 President's budget, even though it is clear that the current ships fall short of identified survivability and lethality needs. GAO has an ongoing review of the Navy's small surface combatant study and future plans for the LCS program. This report is a public version of a classified report issued in July 2015. Throughout this report, GAO has indicated where information has been omitted or redacted due to security considerations. All information in this report reflects information current as of July 2015 to be consistent with the timeframe of the classified report.
Introduction The federal government owns and operates numerous multipurpose water projects, many of which generate electric power. This power, which is generated subject to the needs of the project, is sold through five federal power marketing administrations (PMA)—the Southeastern Power Administration (Southeastern), the Southwestern Power Administration (Southwestern), and the Western Area Power Administration (Western) as well as the Alaska Power Administration and the Bonneville Power Administration. The PMAs are separate and distinct organizational entities within the Department of Energy (DOE). They are required to market hydropower primarily on a wholesale basis at the lowest possible rates consistent with sound business principles. By law, the PMAs give preference in the sale of federal power to public bodies and cooperatives (called “preference customers”), such as federal agencies, irrigation districts, municipalities, public utility districts, and other public agencies. Each PMA has its own specific geographic boundaries, federal water projects, statutory responsibilities, operation and maintenance responsibilities, and statutory history. In 1995, the three PMAs in our study—Southeastern, Southwestern, and Western—sold about 1.6 percent of the nation’s electricity. PMAs Market Power Generated at Multipurpose Federal Water Projects A federal water project consists of several resources, such as the dam, the reservoir, the land around the dam and reservoir, and, where hydropower is generated, the powerplant. In addition to providing hydropower, the dams at which hydropower plants are located serve a variety of other purposes, such as promoting fish and wildlife conservation and habitat enhancement and providing flood control, irrigation, navigation, recreation, water supply, and improved water quality. Each project must be operated in a way that balances its multiple purposes. 
In most instances, because generating power is not the project’s sole purpose, the amount of hydropower generated and marketed is affected by the availability and use of water for the project’s other purposes. The PMAs generally do not own, operate, or control the facilities that actually generate the electric power; almost always, they own, operate, and control the facilities that transmit power, and they market the power that is generated at the federal water projects. The power-generating facilities are controlled by other federal agencies—most often by the Department of the Interior’s Bureau of Reclamation (Bureau) or the Department of the Army’s Corps of Engineers (Corps)—referred to as “operating agencies.” Appendix II lists and describes various laws that guide the Bureau’s and the Corps’ management of federal water projects and hydropower plants. The federal power marketing program, which began in the early 1900s, has developed incrementally over the years. In 1937, the Bonneville Project Act created the Bonneville Power Administration to market federal power in the Pacific Northwest. In 1943, a decision by the Secretary of the Interior established Southwestern under the President’s war powers. The Congress provided the authority to create permanent PMAs with the passage of the Flood Control Act of 1944. The Secretary of the Interior established Southeastern in 1950 and the Alaska Power Administration in 1967. The last PMA, Western, was authorized under the DOE Organization Act of 1977 when the four existing PMAs were transferred from the Department of the Interior to DOE. Many hydropower plants provide electric power for the multiple needs of a federal water project, and the project’s operations have first priority for using it. The PMAs sell the hydropower that exceeds the project’s operational requirements on a wholesale basis to their preference customers and use the revenue earned to repay the costs to generate, transmit, and market power. 
Revenues from the sale of hydropower are also used to pay for a portion of the irrigation costs assigned for repayment through these revenues where the project serves irrigation. The sale of federal hydropower has also served social and economic development goals. This power is required to be sold at rates that are as low as practicable, consistent with sound business principles, to encourage its widespread use. The PMAs helped make electricity available for the first time to many consumers who lived in rural areas. Nonfederal hydropower projects also generate electricity subject to their multiple purposes. The Federal Energy Regulatory Commission (FERC) licenses and regulates these projects and their hydropower plants that affect the nation’s navigable waterways. FERC’s operating licenses for these hydropower plants are in effect for up to 50 years, after which relicensing must occur. Under provisions of such legislation as the Federal Power Act, as amended by the Electric Consumers Protection Act, FERC’s licensing and regulatory activities establish the conditions under which the project must operate, consistent with legal and policy developments. In licensing and relicensing nonfederal hydropower projects, FERC is required to give equal weight to both “developmental factors” (such as power, irrigation, and flood control) and “nondevelopmental factors” (such as protecting fish and wildlife habitat, conserving energy, and providing recreation). FERC’s regulatory activities with respect to electricity from the PMAs are limited to the authority delegated to it by the Secretary of Energy. 
FERC’s review of the PMAs’ rates is limited to (1) whether the rates are the lowest possible consistent with sound business principles; and (2) whether the revenues generated by the rates are enough to recover, within the period allowed, the costs of producing and transmitting electricity, including the repayment of the capital investment allocated to generate power and the costs assigned by acts of the Congress for repayment. FERC’s review also includes the assumptions and the projections used in developing the rates. Other than reviewing the PMAs’ rates, FERC has no jurisdiction over the operation of federal hydropower facilities. Appropriations Finance Federal Water Projects and PMAs Each year the Congress appropriates money to the PMAs, the Bureau, and the Corps. The PMAs’ appropriations are generally to cover operations and maintenance (O&M) expenses associated with their power marketing activities and capital investments in their transmission assets. The Bureau’s and the Corps’ appropriations are for all aspects of the federal water projects, including capital investments as well as O&M expenses related to generating power and to providing other functions, such as irrigation and navigation. Federal law calls for the PMAs to set power rates at levels that will repay their appropriations and the power-related O&M as well as the capital appropriations expended by the operating agencies generating the power. DOE’s implementing order specifies that appropriations used for O&M expenses must be recovered in the same year the expenses were incurred; however, it allows the appropriations used for capital investments to be recovered, with interest, over periods that can last up to 50 years. The order also allows the PMAs to defer payments on O&M expenses if the PMAs do not generate sufficient revenue in a particular year because of the variability of hydropower. 
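The recovery rules just described reduce to standard amortization arithmetic. The sketch below illustrates them in Python with purely hypothetical figures; the $100 million capital investment, $2 million deferral, and 5 percent interest rate are illustrative assumptions, not values drawn from this report.

```python
# Sketch of the DOE cost-recovery rules described above, with hypothetical
# numbers: capital appropriations are repaid with interest over periods of up
# to 50 years, while deferred O&M expenses accrue interest until repaid.

def level_payment(principal: float, annual_rate: float, years: int) -> float:
    """Level annual payment that repays `principal` with interest over `years`."""
    return principal * annual_rate / (1 - (1 + annual_rate) ** -years)

# Hypothetical: a $100 million capital investment recovered at 5 percent
# interest over the maximum 50-year period.
capital_payment = level_payment(100_000_000, 0.05, 50)

# Hypothetical: $2 million of O&M deferred in a low-revenue year compounds
# at 5 percent until it is repaid 3 years later.
deferred_om_balance = 2_000_000 * (1 + 0.05) ** 3

print(f"Annual capital recovery payment: ${capital_payment:,.0f}")
print(f"Deferred O&M balance after 3 years: ${deferred_om_balance:,.0f}")
```

A longer recovery period lowers the annual payment but increases the total interest paid, which is why deferred O&M, once it begins compounding, can eventually force a rate increase.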
Because O&M expenses that are deferred are amortized with interest, the amount of deferred expenses accrues interest until it is fully repaid and may require the PMA to increase its rates. The federal investment in water projects has nonreimbursable and reimbursable components. The nonreimbursable component refers to costs that are not reimbursable by revenues collected from the projects’ beneficiaries. The reimbursable component refers to costs that are recovered from the project’s ratepayers and other beneficiaries, such as power and irrigation users. This component includes the construction costs as well as the O&M expenses for power generation, transmission, and marketing; the construction costs allocated to irrigation and O&M expenses for irrigation, if applicable; and the construction costs allocated to municipal and industrial water supply as well as the related O&M expenses. The reimbursable component is further divided into investments repaid with interest (for example, for power and municipal and industrial water supply) and investments repaid without interest (for irrigation only). Proposals Have Been Made to Divest the Federal Government of Its Hydropower Assets In 1986, the executive branch first attempted to sell a PMA when the President’s budget proposed selling the Alaska Power Administration to its preference customers. Despite the enactment of laws in 1995 and 1996 to authorize this transaction, the sale of the hydropower assets from which the Alaska Power Administration markets its power has not been completed, in part because of the need to resolve issues related to rights-of-way and easements. The length of time taken to complete the sale of the smallest of the five PMAs raises questions about the complexity and number of issues that will need to be addressed before the government can divest itself of the larger PMAs and their related hydropower assets. 
Numerous bills have been introduced to the Congress to sell the remaining PMAs, and some bills have included the sale of the related hydropower assets of the Bureau and the Corps. These bills have proposed selling only the PMA and its assets; the PMA and the related hydropower assets of the Bureau and the Corps; or all of these assets plus the related dams and reservoirs. For example, in 1996 legislation introduced in the House of Representatives proposed to divest, among other things, the PMAs and the associated power-generating assets through a competitive bidding process. The bill proposed that FERC be directed to grant a 10-year operating license to the buyers of the federal hydropower plants. It also exempted the divestiture from certain federal laws pertaining to the disposal of surplus federal property and to environmental protection, such as the Federal Land Policy and Management Act of 1976, the National Environmental Policy Act of 1969, the Endangered Species Act of 1973, and the Wild and Scenic Rivers Act of 1968. Objectives, Scope, and Methodology In response to divestiture proposals, on January 18, 1996, 39 Members of Congress requested that we examine the issues related to the divestiture of the PMAs and related federal hydropower assets. On March 1, 1996, we received a separate request letter from another Member of Congress. We agreed to report on the issues related to divesting the federal hydropower assets, including the PMAs; however, we did not evaluate whether or not the PMAs and federal hydropower assets should be divested. 
We agreed to provide information on (1) Southeastern, Southwestern, and Western, including their similarities and differences, and their interactions with the agencies that operate federal water projects (mostly, the Bureau and the Corps); (2) the main objectives and general decisions involved in divesting federal assets, along with how these objectives and decisions apply to the PMAs; and (3) the specific issues related to hydropower that should be addressed before a divestiture of the PMAs. As requested, we limited our study to Southeastern, Southwestern, and Western. We did not include the Bonneville Power Administration, because it has a unique financial situation, or the Alaska Power Administration, because it is being divested. A detailed description of our objectives, scope, and methodology is contained in appendix I. We conducted our review from May 1996 through February 1997 in accordance with generally accepted government auditing standards. We provided a draft of this report to DOE (including the PMAs’ liaison office), the Department of the Interior (including the Bureau), FERC, and the Department of Defense (including the Corps). DOE, Interior, and FERC provided us with their written comments. These comments and our responses are included in appendixes VI, VII, and VIII, respectively. We met with officials of the Department of Defense, including the Corps’ Director of Hydropower Operations and the Director of Operations, Construction, and Readiness. Defense stated that our report provided a good assessment of the issues related to the “very complex and controversial” subject. Defense also provided clarifying comments that we incorporated into our report as appropriate. For example, Defense stated that the report needed to be revised to acknowledge that the Corps has improved the generating availability of its hydropower plants in its South Atlantic Division (Atlanta, Georgia) to over 90 percent for fiscal year 1996. 
Profile of the PMAs While differing in size, scope, and assets, Southeastern, Southwestern, and Western all are responsible for selling hydropower primarily to preference customers—publicly owned utilities and state and federal agencies. These customers vary in size and in the quantity of electricity they purchase. The PMAs have a close working relationship with the Corps and the Bureau because, with a few exceptions, the Bureau and the Corps are responsible for operating the hydropower plants and for ensuring that electricity is generated subject to the other multiple purposes of each federal water project. This relationship is based in part on written documents and also on flexible arrangements that recognize the variability associated with water. PMAs Differ in Service Areas, Customers, and Assets The PMAs generally market power to publicly owned utilities and to state and federal agencies located within their service areas. The three PMAs in our study market power in 30 states from 103 hydropower plants and a coal-fired power plant. Figure 2.1 shows the service areas for each PMA and appendix III lists the hydropower projects from which the PMAs market power. As described below, the PMAs differ in several ways, including the sizes of their service areas, the number of customers served, and the types of assets owned. In fiscal year 1994, Western marketed power to 637 customers in Arizona, California, Colorado, Nebraska, New Mexico, North Dakota, South Dakota, Utah, and parts of Iowa, Kansas, Minnesota, Montana, Nevada, Texas, and Wyoming. Western’s power is largely generated from 56 hydropower plants, operated mostly by the Bureau, with an existing capacity of 9,808 megawatts (MW). Western owns 16,727 miles of transmission line. In fiscal year 1994, its revenues from power sales were about $658 million, based on about 36.1 billion kilowatt-hours (kWh) of energy sold. 
Although about 60 percent of Western’s sales are to municipalities, cooperatives, and public utility districts, about 6 percent of its sales are to irrigation districts (see table 2.1). Most of the remaining power sales are to state and federal agencies and investor-owned utilities (IOU). Southwestern, serving Arkansas, Kansas, Louisiana, Missouri, Oklahoma, and part of Texas, marketed power to 62 customers in fiscal year 1994. Southwestern’s power is generated from 24 hydropower plants operated by the Corps with an existing capacity of 2,051 MW. Southwestern’s revenues from power sales in fiscal year 1994 were about $98 million, based on sales of about 6.6 billion kWh. Over 95 percent of Southwestern’s sales are to municipal utilities and cooperatives (see table 2.1). Southwestern also owns 1,380 miles of transmission lines. Southeastern, serving Alabama, Georgia, Kentucky, Mississippi, South Carolina, Tennessee, Virginia, and West Virginia, as well as parts of Florida, Illinois, and North Carolina, sold power to 294 customers in fiscal year 1994. Southeastern’s power is generated from 23 hydropower plants operated by the Corps with an existing capacity of 3,092 MW. Southeastern’s revenues from power sales in fiscal year 1994 were about $156 million, based on sales of about 7.9 billion kWh. Southeastern sold 57 percent of its power to municipalities and cooperatives. The remainder went to federal agencies and public utility districts (see table 2.1). Because Southeastern owns no transmission lines, it relies upon other utilities for transmission services. Preference Customers of PMAs Vary Greatly While the preference customers of PMAs are publicly owned utilities and state and federal agencies that generally purchase small amounts of electricity, they vary greatly. The Types and Size of Customers Vary The customers served by Southeastern, Southwestern, and Western vary in both type and size. 
They include municipalities and cooperatives; public utility districts; irrigation districts; federal agencies, including military and laboratory installations; and state agencies. Some customers are utilities that are among the largest in the nation, while others are among the smallest. Some customers generate much of the electricity they transmit to their customers, while others only transmit electricity they buy from other sources. For all three PMAs, municipalities and cooperatives are by far the most prevalent customers, accounting together for about two-thirds of all customers. Public utility districts and irrigation districts together account for about 8 percent of customers, while federal agencies, including military and laboratory facilities, account for about 7 percent. State agencies account for about 5 percent of all customers and IOUs account for about 3 percent. Figure 2.2 depicts the composition of customers for each PMA. The Size of Preference Customers Also Varies The PMAs’ preference customers also vary in size. As shown in figure 2.3, about two-thirds (67 percent) of Western’s preference customers are small utilities. About 6 percent of Western’s preference customers are in the medium category and another 6 percent are large. About half (47 percent) of Southwestern’s preference customers are small utilities. However, almost one-third (30 percent) of Southwestern’s preference customers are large. In contrast, almost half (47 percent) of Southeastern’s preference customers are medium-sized utilities. Most Customers Purchase Small Amounts of Electricity Annually The preference customers of the three PMAs also vary in terms of the quantity of electricity purchased. As shown in figure 2.4, although a few customers purchase large quantities of electricity from PMAs, most purchase smaller quantities. 
For example, in fiscal year 1994, about 83 percent of the preference customers purchased 50,000 MWh or less from the PMAs and over 90 percent purchased less than 100,000 MWh. The PMAs also sell to a few larger customers (about 1 percent of their customers each buy over 1,000,000 MWh). Most Preference Customers Obtain the Majority of Their Electricity From Sources Other Than PMAs Most preference customers obtain the majority of their electricity from sources other than the PMAs. As shown in figure 2.5, about 75 percent of the PMAs’ preference customers purchase less than half of their total electricity from the PMAs. In addition, over 60 percent of the preference customers receive no more than 25 percent of their electricity from the PMAs. Because the PMAs have a limited quantity of power for sale that must be allocated among many preference customers, these customers must obtain most of their electricity from other sources. However, the PMAs differ in how much they provide as a percentage of their customers’ total needs for electricity. About 99 percent of Southeastern’s preference customers purchase no more than 25 percent of their electricity from the PMA. In contrast, Western supplies more than half of the electricity to over 40 percent of its preference customers. Southwestern, on the other hand, supplies no more than 25 percent of the electricity used by most of its preference customers. Yet, it also supplies over 20 percent of its preference customers with at least 75 percent of their electricity. PMA officials and representatives of preference customers maintain that the total portion of electricity the PMAs supply to them does not accurately portray the PMAs’ importance because the PMAs primarily provide power to them during periods of peak demand when electricity from other sources is in relatively short supply. 
Therefore, measuring the customers’ reliance on the PMAs in terms of their purchases of electric energy (measured in kWh) does not accurately capture the situation of some preference customers, particularly those of Southeastern and Southwestern, that rely more on the PMAs to meet their peak demands for electricity. These customers may use electricity from the PMAs more for meeting peak demands than for providing normal baseload electricity. In response, representatives of IOUs contend that most preference customers could purchase this electricity from other sources. PMAs Sell Power at a Lower Wholesale Cost Than Other Utilities In fiscal year 1994, the PMAs sold power at a wholesale rate that was about one-half of the wholesale rates offered by other utilities. For example, the combined average revenue earned per kWh sold by the three PMAs in our study was about 1.8 cents compared with a national rate of about 3.5 cents for IOUs and about 3.9 cents for publicly owned generating utilities (POG). In fiscal year 1994, Southeastern’s average revenue of about 2.0 cents per kWh compared with wholesale rates of about 4.2 cents per kWh for IOUs and about 5.3 cents for POGs in the region in which Southeastern sells power. In fiscal year 1994, Southwestern received average revenues of about 1.5 cents per kWh. In comparison, IOUs’ average revenues per kWh ranged from about 2.6 to 4.5 cents per kWh, while POGs’ average revenues per kWh ranged from 3.5 to 4.1 cents per kWh in the region in which Southwestern sells power. In fiscal year 1994, Western received average revenues of about 1.8 cents per kWh for its electricity. In contrast, IOUs received average revenues ranging from about 2.7 to 3.5 cents per kWh and POGs’ average revenue ranged from about 3.3 to 4.1 cents per kWh in the region in which Western sells power. 
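As a quick arithmetic check, the average revenues per kWh quoted in this section can be recomputed from the fiscal year 1994 revenue and energy-sales figures reported earlier in this chapter. The minimal Python sketch below uses only numbers taken directly from the text above.

```python
# Recompute each PMA's fiscal year 1994 average revenue per kWh from the
# revenue and energy-sales figures reported earlier in this chapter.
pma_fy1994 = {
    # name: (revenues from power sales in dollars, energy sold in kWh)
    "Southeastern": (156e6, 7.9e9),
    "Southwestern": (98e6, 6.6e9),
    "Western": (658e6, 36.1e9),
}

avg_cents_per_kwh = {
    name: round(100 * revenue / kwh, 1)  # dollars per kWh -> cents per kWh
    for name, (revenue, kwh) in pma_fy1994.items()
}

for name, cents in avg_cents_per_kwh.items():
    print(f"{name}: about {cents} cents per kWh")
```

The computed values, about 2.0, 1.5, and 1.8 cents per kWh, match the averages quoted above and are roughly half the 3.5 to 3.9 cent national wholesale averages reported for IOUs and POGs.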
According to a PMA official, because of the low rates PMAs offer, the PMAs have informal waiting lists of prospective preference customers that want to buy their power. Although Western is implementing a program to set aside some existing capacity to serve new customers, becoming a new PMA customer is difficult because few customers are willing to give up their power allocations from a PMA and almost no new federal hydropower plants will be coming on line in the foreseeable future. Many factors contribute to the PMAs’ ability to sell electricity at generally lower rates than other neighboring utilities. Importantly, their electricity is primarily generated from hydropower plants, making their power generally less expensive than other sources of power because it has no fuel cost. In addition, because most of these hydropower plants were built when construction costs were lower than those of more recent construction, the PMAs have lower embedded costs to recover through their rates. Also, as we discussed in our 1996 report, their rates do not fully recover all of the costs associated with production of power. In some cases, the PMAs are not required to recover some costs (for example, certain environmental costs and the full costs of federal pensions and postretirement health benefits) because of specific legal provisions or because the DOE implementing order excludes the costs or is not specific about them. Also, unlike IOUs, the PMAs do not pay federal income taxes nor do they set their rates to earn a profit. In addition, while the PMAs in our study do not have to build new capacity to meet future demand, IOUs have an obligation to serve all existing and future customers in their service areas. Therefore, they must build new generating capacity and recover the associated capital costs through their rates. This requirement could result in higher rates for IOUs, depending on the cost to increase this capacity. 
PMAs Work Closely With the Bureau and the Corps The PMAs have a close working relationship with the Bureau and the Corps, which operate and control the hydropower plants and ensure that hydropower is generated subject to the other multiple purposes of federal water projects. These relationships are based on written documents and on flexible arrangements. The PMAs market power subject to the parameters of these written agreements and flexible arrangements. The flexible arrangements allow the operating agencies to balance a project’s multiple purposes, even if this reduces power production. For example, releasing water in the late summer to improve oxygen levels downstream to benefit fisheries reduces the capacity to generate electricity. The Bureau and the Corps Manage the Operation of Federal Water Projects In allocating water among a project’s multiple purposes, the Bureau and the Corps arbitrate among the competing purposes for water. The Bureau operates primarily in the West and manages water in federal water projects mostly for irrigation. The Corps manages water mostly for flood control and navigation. The Bureau and the Corps also provide water for fish and wildlife habitat enhancement, municipal and industrial supplies, recreation, and water quality improvement. How much electricity the PMAs can sell is subject to the Bureau’s and the Corps’ control of the water. How the Bureau and the Corps control the water, in turn, is affected not only by the multiple purposes of a project but by the interests of outside stakeholders. For instance, under provisions of the Clean Water Act, state agencies issue water quality certificates that affect how federal dams are operated and the amount and timing of water that can be released from a reservoir. Moreover, compacts to apportion water among states affect the availability of water for various purposes. The Bureau and the Corps must also frequently consider state environmental laws when managing water resources. 
For instance, in operating the Central Valley Project (CVP) in California, the Bureau follows a decision by the California State Water Resources Board that directs the CVP and the state water project to meet the state’s standards for fish habitat and water quality, such as the salinity standards for the San Francisco Bay area. To meet these standards, the Bureau and the management of the state water project operate under an agreement that describes how water supplies should be shared and who would be responsible for environmental issues. For example, under one aspect of this agreement, the Bureau would be responsible for about 75 percent of the fish and wildlife habitat and water quality responsibilities in some cases. Operating Agencies and the PMAs Interact When Planning Management of a River System The Bureau or the Corps and the PMAs interact when planning the management of a river system, so that releases of water, which are frequently accomplished through the generating turbines of a hydropower plant, can be timed to maximize the use of water for the sale of hydropower. Western and Bureau officials in Salt Lake City, Utah; Sacramento, California; and Billings, Montana; for example, explained that the Bureau prepares annual operating plans that are updated monthly. In January of each year, the Bureau completes the first surveys of mountain snow. By entering the resulting data into its model, the Bureau makes preliminary predictions about run-offs and annual hydrological conditions. Western, water users, environmentalists, and other stakeholders then meet to review the 12-month operating plan. The Bureau updates the plan monthly as new hydrologic information becomes available. 
For each month in a rolling 12-month period, the annual operating plan contains the following information by reservoir, dam, and hydropower plant: water inflows, water levels, projected water releases, projected water deliveries, and estimated power generation by each hydropower plant according to the maintenance schedule and planned outages. Based on information about hydrology, reservoir levels, and the demand for water, the Bureau issues daily water orders that fine-tune water releases and water movements to accommodate the project’s multiple purposes. The staff of the Bureau’s control center and Western’s power dispatchers coordinate water releases so water is released through the turbines to maximize the value of the power generated within the parameters defined by the other multiple purposes of the project. The Objectives of a Federal Divestiture Will Shape General Decisions About a Sale The general process governments use to divest their assets is composed of many decisions. In reviewing domestic and international divestiture experiences, we found a successful divestiture begins with a definition of the sale’s objectives, which typically include (1) reducing or eliminating the government’s presence in an activity that some view as best left to the private sector and (2) improving the government’s fiscal situation. Both of these objectives have been advanced by those who favor the federal government’s divestiture of its hydropower assets. However, those who oppose divesting these assets argue that there are advantages stemming from the government’s current hydropower activities and question whether divesting the federal hydropower assets, including the PMAs, would actually improve the government’s fiscal position. Once a decision has been made to divest certain federal assets, the underlying objectives will shape the sales process. 
In particular, they will shape the general decisions about which specific assets are sold, what conditions and liabilities will transfer with those assets, and how to implement the sale. Reducing or Eliminating the Government’s Presence in the Private Sector and Lowering the Deficit Are Common Objectives for Selling Government Assets A successful divestiture of government assets generally starts with defining the objectives of a sale. Divestiture proposals have been motivated by two broad objectives, typically in conjunction with one another: (1) to reduce or eliminate the government’s presence in an industry that is viewed as best left to the private sector and (2) to improve the government’s fiscal position. One Typical Objective Is Reducing or Eliminating the Federal Presence in a Largely Private Sector Activity International experience with divestitures suggests that one common objective for divesting government assets has been a belief that certain functions being provided by the government would be more efficiently undertaken by the private sector. Some proponents apply this premise to federal hydropower assets, arguing that the federal government should not be involved in generating, transmitting, and marketing electricity in wholesale markets. They maintain the following: The historical justification for the federal presence in the electricity industry—to provide electricity at the lowest practicable cost to regions that were too remote or sparsely populated to be served by investor-owned utilities (IOUs)—is outmoded. The entire nation has become electrified; new technologies, such as the gas-fired turbine, generate electricity at relatively low capital costs; and nonutility generators, such as independent power producers, now generate and sell power in wholesale electricity markets that have become increasingly competitive.
In addition, the 1992 Energy Policy Act required that a utility make its transmission lines accessible to other utilities (called “open transmission access”), thus enabling customers to obtain electricity from a variety of competing utilities. As the market has become increasingly open, spot and futures markets in bulk power have grown, and power marketers and brokers now offer services so wholesale customers can buy the cheapest power available. The tax advantages and other subsidies the PMAs receive give them unfair advantages over their competitors. As we recently reported, federal hydropower is cheaper than wholesale power sold by IOUs and publicly owned generating utilities, in part because hydropower has no fuel cost, but also because the PMAs have received low-interest financing and have flexible repayment terms. If federal hydropower assets are sold, the private sector would operate these assets more efficiently. Proponents believe that the federal agencies do not adequately operate, maintain, and repair these assets. As we recently testified, the government’s capital planning and budgeting systems do not enable federal agencies to fulfill these responsibilities adequately. Furthermore, according to proponents, the private sector would make better decisions about maintenance and investment because the decisions would be based on market signals rather than the federal government’s appropriations and budget cycles. In responding to a draft of our report, Corps officials pointed out that, in some instances, the Corps’ efforts to better operate, maintain, and repair its hydropower plants have paid off, noting that the Corps’ South Atlantic Division (Atlanta, Georgia) improved the generating availability of its hydropower plants to over 90 percent for fiscal year 1996. PMA officials added that not all federal hydropower assets in all regions of the nation exhibit these problems.
Another Objective in Divesting Federal Assets Is Improving the Government’s Fiscal Situation International experience also suggests that asset divestitures have typically been motivated by a desire to reduce the government’s debt or deficit. This can include reducing the size or activities of the government. Some policymakers propose selling the federal hydropower assets to improve the federal government’s fiscal position: They believe the cost of the federal hydropower program exceeds its value to the government because, among other reasons, the rates the PMAs charge do not recover all of the costs associated with generating, transmitting, and marketing electricity. If the government sold these assets, the lump-sum payments would reduce the federal government’s current borrowing requirements. The government would also save money on the annual appropriations that the three PMAs and the operating agencies would no longer need to operate, maintain, and repair those assets. While the U.S. Treasury would no longer receive annual revenues from the sale of federal hydropower, proponents of divestiture believe that the sales proceeds the federal government would receive from the divestiture and the reduced government expenditures would more than offset the forgone revenues from electricity sales. Some proponents also contend that a divestiture would eliminate any subsidies to PMA ratepayers. However, assessing the full financial impact on the government from a sale of hydropower assets requires that other indirect costs to the government also be considered. Furthermore, assessing the full financial impact requires examining a variety of revenue and expenditure components, expressing these in present value terms that reflect their timing as well as magnitude, and addressing underlying uncertainties through sensitivity analyses.
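The kind of present-value comparison and sensitivity analysis described above can be sketched with a simple calculation. All dollar figures, the 50-year horizon, and the discount rates below are hypothetical, chosen only to show how the fiscal attractiveness of a lump-sum sale depends on the rate used to discount the forgone revenue stream.

```python
def npv(cashflows, rate):
    """Present value of year-end cash flows received in years 1, 2, ..."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

def sale_minus_forgone(sale_proceeds, annual_net_revenue, years, rate):
    """Lump-sum sale proceeds today minus the present value of the net
    revenue stream the government gives up by selling (all inputs
    hypothetical). A positive result favors the sale in fiscal terms."""
    return sale_proceeds - npv([annual_net_revenue] * years, rate)

# Sensitivity analysis over the discount rate: a hypothetical $3.0 billion
# offer weighed against 50 years of $200 million in forgone annual net
# revenue (figures in millions of dollars).
for rate in (0.03, 0.05, 0.07):
    print(f"{rate:.0%}: {sale_minus_forgone(3000, 200, 50, rate):+,.0f}")
```

With these illustrative numbers, the long revenue stream dominates the lump sum at a 3 percent discount rate but not at 7 percent, which is why the analysis above stresses expressing each component in present-value terms and testing the underlying assumptions.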
For instance, the government would incur transaction costs—associated with preparing for and carrying out a divestiture—if it sells the assets. These costs could be significant, particularly in the case of a large-scale public stock offering. Additionally, a variety of labor costs, such as providing severance packages to terminated employees of the PMAs and/or operating agencies, and other costs associated with the disposition of their pension and postretirement benefits would need to be accounted for. Furthermore, a divestiture could create more regulatory responsibilities, and the costs of meeting those increased responsibilities would have to be counted as a cost of the divestiture if they would not otherwise have been incurred and would be borne by the government. Proponents contend that some of these additional costs may be offset by the additional tax revenues the federal government would receive from sales of electricity if the PMAs and related hydropower assets were sold to IOUs or independent power producers. The Edison Electric Institute (the trade association of IOUs and a strong advocate of divesting federal hydropower assets) maintains that, if the three PMAs in our study were sold to private utilities, the present value of potential federal income taxes on purchasers and bond buyers could equal about $1 billion. However, these taxes may reduce how much a potential purchaser would offer for the PMAs by an amount approximately equal to the tax liabilities. Thus, counting the expected additional tax revenues without considering the offsetting effect on the expected sales price would overstate the financial benefits of the sale. Finally, it is important to note that the budgetary treatment of a sale of federal assets does not reflect the full, long-term financial impact of the sale on the Treasury.
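The offsetting effect noted above, in which expected tax liabilities depress a rational buyer's bid, can be shown with toy arithmetic. The figures are hypothetical and the model is deliberately simple: it assumes the buyer discounts its bid dollar-for-dollar by the present value of the taxes it expects to pay.

```python
def government_take(pretax_value, pv_buyer_taxes):
    """Hypothetical illustration of the bid-offset argument: a taxable
    buyer bids the asset's pretax value minus the present value of the
    taxes it expects to owe, so the government's total take (bid now
    plus taxes collected later, in present-value terms) is unchanged."""
    bid = pretax_value - pv_buyer_taxes
    total = bid + pv_buyer_taxes
    return bid, total

# A $5.0 billion asset sold to a buyer facing $1.0 billion (present value)
# in taxes, figures in millions: counting both a $5.0 billion sales price
# and the $1.0 billion in taxes would double-count the tax revenue.
bid, total = government_take(5000, 1000)
```

Under this simplification the bid falls to $4.0 billion and the government's total take stays at $5.0 billion, which is the sense in which counting the tax revenues without adjusting the expected sales price overstates the benefits of the sale.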
For example, current budget rules use a 5-year budget window for scoring government revenues and expenditures. Many observers believe that this period is not long enough to evaluate an asset sale in which lump-sum sales proceeds are compared to changes in expenditure and revenue streams that may continue for up to 50 years. In addition, without legislative change, the sales proceeds from a divestiture could not be used to finance new spending or offset revenue losses. Furthermore, the congressional committees that have jurisdiction over the entities being sold could not count the sales proceeds toward the deficit reduction goals specified under the Budget Enforcement Act of 1990, as amended. This means the committees could not use the proceeds to offset additional expenditures within their budget allocation. However, because the sales proceeds would flow directly to the Treasury, the proceeds would reduce the government’s overall borrowing requirements. Many Question the Need to Divest Federal Hydropower Assets Those who favor the government’s current role in providing hydropower maintain that the debate about divesting hydropower assets should also consider many other effects. They point to long-standing federal policies to use federal water projects to help develop local and regional economies and the importance of the revenue the government receives from the sale of hydropower. For example, as an “aid to irrigation,” power revenue is counted on to repay about 70 percent of the federal government’s (nominal) capital investment in irrigation facilities at federal water projects in the West. Parties that favor continued government ownership argue that the government’s continued sale of hydropower promotes competition in the electricity industry. They also assert that private-sector generation and marketing of hydropower formerly provided by the PMAs would lead to greater monopoly power in the electricity industry and higher rates to consumers, especially those in remote rural, low-income areas.
In addition, the opponents of the sale believe that the PMAs’ electric rates are not subsidized and that, if the federal government sold its hydropower assets, the taxpayer would lose a steady stream of revenues that over time would exceed the assets’ selling price. The Divestiture of Federal Assets Requires Several General Decisions Once a decision has been made to divest, additional decisions would be needed to answer several broad questions. For instance, what specific assets would be divested? What associated conditions and liabilities would be transferred? And what methods would be used to value and sell the assets? The final sales proceeds would depend on the decisions that are made. The Specific Assets to Be Sold Would Need to Be Identified As we found in our review of divestitures in other nations, an important initial decision in a divestiture involves determining which assets to sell. In this regard, federal hydropower assets could be grouped in several different ways. First, a PMA itself could be sold, including any transmission assets and/or the right to sell the hydropower generated at the Bureau’s or the Corps’ hydropower plants. In a second alternative, a PMA, including its transmission assets and its right to sell power, as well as the Bureau’s or the Corps’ powerplants could be divested. In a third, more complicated alternative, a PMA and all of the aforementioned items as well as the remaining assets related to the water projects (e.g., the dams and the reservoirs) could be divested. An alternative to selling an entire PMA and any related hydropower assets could be to package the assets of a specific project for sale. For instance, Bureau officials in Sacramento, California, opined that the Central Valley Project could be sold to the state of California because the project lies entirely within that state and complements the existing water project that is managed by the state.
Another option could be to sell all the federal hydropower plants on a river system together to preserve operating efficiencies because the releases of water from upstream facilities to downstream ones could be more easily coordinated under one-party ownership—an important consideration for flood control and other water management purposes. Trade-Offs Between Liabilities to Be Transferred or Restrictions on Divestiture and the Bids Received Would Need to Be Considered Along with defining the specific assets to be divested, policymakers would have to consider the explicit and implicit liabilities borne by the government and which of those liabilities to transfer to a buyer. As a policy matter, the government may want to retain certain liabilities associated with the assets being divested or place specific restrictions on the postdivestiture use of those assets. However, policymakers would need to consider that assets that are sold with many or relatively onerous restrictions (from the viewpoint of a prospective purchaser) or assets that are in poor condition are correspondingly less attractive and would likely result in lower sales proceeds than otherwise. While the government may still choose to place restrictions or to assign or retain certain liabilities, the financial consequences in terms of the sale price should be assessed. Many combinations of assets and liabilities could be grouped for sale. Both defining and valuing the specific liabilities that the federal government could retain are important because the government may be in a better position to bear certain risks. In general, the government could receive larger sales proceeds by retaining certain liabilities because a purchaser could substantially discount its bid if the purchaser would assume the financial risks associated with those liabilities.
For instance, in the proposed divestiture of the United States Enrichment Corporation (USEC), the government would retain liability for the environmental cleanup associated with the prior production of enriched uranium. According to a contractor’s report, decontamination and decommissioning activities at uranium enrichment plants could cost as much as $17.4 billion in 1994 constant dollars. The PMAs are liable for environmental cleanup associated with use of polychlorinated biphenyls and other hazardous waste. While no precise estimates have been made, these liabilities could total many millions of dollars. Assets that are in better operating condition are more likely to receive larger bids than assets in poor condition. We testified recently that federal hydropower plants in the Southeast have experienced significant outages and that these outages occur because of the age of the plants and the way they have been operated. If these hydropower assets were to be sold without reducing the current backlog of necessary maintenance, bids would be lowered. However, a 1995 World Bank review of international experience with divestitures found that in preparing a government enterprise for divestiture, a government should generally refrain from making new investments to expand or improve that enterprise because any increase in sales proceeds is not likely to exceed the value of those investments. Imposing restrictions on operating the assets could also reduce the value to potential buyers. For instance, significant restrictions on using water to generate hydropower at the Glen Canyon Dam have been implemented to protect a variety of natural and cultural resources that are located downstream. According to the Bureau, these restrictions reduced the dam’s generating capacity by an amount exceeding 400 MW, even though total energy production over the course of a day or a season will be largely unchanged. 
It is almost certain that a new owner of the Glen Canyon Dam would continue to bear the responsibility to operate the dam’s hydropower plants according to these restrictions. As a practical matter, bids by prospective purchasers of the rights to market hydropower produced at Glen Canyon Dam would presumably reflect the diminished revenue potential. Thus, the government would incur much of the financial cost associated with the current restrictions in the form of reduced proceeds from the sales, just as it would if federal ownership and operation of the dam continued. Moreover, uncertainty about the extent of such restrictions likewise increases the uncertainty of expected future revenues and would likely reduce proceeds from the sale. In previous deliberations over divesting federal hydropower assets, including the PMAs, policymakers debated the desirability of ensuring regional control of divested federal hydropower assets. While a decision to limit bidders on particular assets to certain geographic areas would foster a goal of local or regional control of those assets, it could reduce the proceeds from the sale if other potentially interested buyers were precluded from making offers. For example, in the divestiture of the Alaska Power Administration—the only PMA to be offered for sale—an overriding concern was to protect the PMA’s ratepayers from possible increases in electricity rates. This concern led decisionmakers to restrict bidding eligibility to parties from within the state. It also led decisionmakers to accept a sales price approximating the present value of future principal and interest payments that the Treasury would have received instead of establishing the price by selling the assets in an open, more competitive fashion to the highest bidder.
The Specific Sales Mechanism and Process Need to Be Determined The objectives underlying a divestiture help determine the most appropriate sales method. For example, if a divestiture were largely motivated by fiscal considerations—with an emphasis on sales proceeds—an appropriate sales mechanism would involve some form of competitive bidding and tend to place few restrictions on the number or identity of bidders. Alternatively, if the major motivation were a desire to transfer operations to the private sector—with an emphasis on a smooth transfer—the government could choose to negotiate a sales price with a selected buyer. In general, we have supported the principle that the federal government should seek the full market value in selling its assets. Sales methods that allow for competitive bidding are more likely to generate this result and lead to the transfer of assets to those buyers who value them most highly. A World Bank survey of international experiences with divestiture indicates that open bidding among competitors is preferable to sales that rely on negotiations with selected bidders because the former method offers less opportunity for favored buyers to receive special treatment at the taxpayers’ expense. In practice, the size of the assets to be sold, in terms of value and scale of enterprise, has influenced the type of sales process used. Trade sales and public stock offerings are general processes, with trade sales used more often to sell smaller enterprises or assets, and public offerings used to sell larger ones. Also, within each type, sales can be organized using competitive bidding methods or negotiations. A brief description of these processes follows: “Trade sales” draw on the idea that an existing set of businesses competing in the relevant line of business (or trade) are likely to offer more and higher bids for the assets. 
Three key attributes of the PMAs and the electricity industry may lend themselves to a trade sale: (1) The PMAs and related hydropower assets are part of an established industry with capital market connections experienced in the valuation, grouping, and sale of electricity-generating assets. (2) Sales of significant electricity-generating assets are not unusual. (3) There would likely be several bidders for at least large portions of the PMAs and their related assets, depending on how those assets are grouped for sale. A trade sale can be a negotiated sales process between the government and a buyer or can be accomplished using an auction to determine both the sales price of the asset or assets as well as the buyer or buyers. Stock offerings have been used domestically, most recently in the sale of Conrail in 1987, as well as internationally to divest large public enterprises. This method of sale would most likely require creating a government corporation or corporations out of the PMAs and their associated assets. Some of these assets could be grouped for sale, and some could be excluded from the sale, depending on the policy trade-offs discussed. In the case of some federal water projects, for example, the government could decide to retain control of the dam and reservoir to satisfy increasingly significant restrictions on the use of water because of concerns about the environment or endangered species. The stock of the government corporation would be subsequently sold through standard financial market methods, such as a private placement through negotiations between particular investors and the government or through a sale to the general public by using competitive bidding. In cases where auction methods might be selected to sell government assets, recent government experience indicates the importance of carefully choosing the specific format for an auction. 
That is, a policy decision to choose a competitive auction format requires making many subsequent decisions to define the specific rules leading to an appropriate operational auction. For example, the Federal Communications Commission has chosen to auction the leases of electromagnetic spectrum licenses for use in mobile communications. While generating a large amount of revenue was a less important goal than achieving an efficient geographic allocation of spectrum licenses to communications firms, the auctions generated more revenue than had been predicted by some potential bidders, according to auction analysts. In large part, the success of these auctions was due to careful consideration of the auction format and the identification of particular problematic features of auctions of similar assets in other countries. Most domestic and international divestitures have relied on private capital market firms as consultants and managers because of their frequent experience with complicated and high-valued transactions governing the transfer of assets in the private sector. Particularly in the case of public offerings but also for trade sales, the government would likely incur substantial costs to prepare its assets for sale or to pay for services performed by its financial advisers. For example, in the sale of Conrail, the government employed a variety of financial advisers and, in a key role, a prominent law firm with expertise in a variety of fields, including tax and employment law. Within the government, a variety of possible divestiture management options exist to guide the divestiture process and implement the decisions that must be made. In the Conrail divestiture, the Department of Transportation was primarily responsible for managing the sale. In the ongoing Alaska Power Administration sale, DOE is the lead agency. 
In our review of the USEC divestiture, we recommended that the Secretary of the Treasury lead the privatization process because that official will not be affected by the privatization and the Secretary’s mission is clearly defined in terms of protecting taxpayers’ general interests. Many Specific Issues Related to Federal Hydropower Would Need to Be Addressed Before a Sale Besides the general decisions that arise from any complex divestiture, many specific issues related to federal hydropower would need to be addressed before a divestiture of federal hydropower assets could be completed. These issues include the multiple purposes of federal water projects; the existing contractual obligations and liabilities of the PMAs, the Bureau, and the Corps; the future responsibility for environmental liabilities and protecting endangered species, which already constrain the operations of many projects; the rights and concerns of Native Americans; and the future regulatory treatment of the hydropower assets. The potential effects on wholesale and retail electric rates, including potential regional economic effects, would also need to be considered. Although determining how wholesale rates would be affected by a divestiture is difficult, the impacts would be influenced by the extent to which customers buy a large portion of their power from the PMAs and the prevailing wholesale rates in the regional market. The impact on retail rates and any regional economic impacts would depend on the extent to which a PMA’s customers would absorb any cost increases or pass them on to their retail customers. A divestiture of hydropower assets would require time and resources. However, complex issues have arisen and been successfully addressed in transfers of assets in the private sector. For example, for nonfederal facilities, balancing the multiple purposes of the water projects has been historically managed through FERC’s licensing process. 
In addition, when FERC decreased its regulation of the natural gas industry and the industry restructured itself, thousands of new contracts were negotiated and rewritten. The Impact of a Divestiture on Balancing Water Projects’ Multiple Purposes Would Need to Be Addressed The purposes and the management of federal water projects are guided by many statutes, including federal water management and reclamation statutes generally applicable to all projects, specific authorizing and appropriations statutes for individual projects, and environmental protection statutes. Many federal projects serve multiple purposes, such as fish and wildlife habitat protection, flood control, hydropower generation, irrigation, municipal and industrial water uses, navigation, recreation, and water quality improvements. Unless the legislation that authorized a divestiture exempted the water projects from these laws, the statutory provisions would continue to affect how the new owners would manage the projects and how much electricity the new owners could generate. See appendix II for a description of relevant federal statutes. As described in chapter 2, under current arrangements the Bureau and the Corps manage the allocation of water in federal water projects to balance their multiple purposes. The uses of the water are sometimes complementary and sometimes competitive with one another. For example, water is stored in and released from the reservoir to provide for recreation, but its release through the turbines could be scheduled to generate electricity in a way that is intended to maximize revenues. In contrast, Western’s office in Billings, Montana, forecasts decreases in power revenues in the long term because water, which would otherwise be used to generate electricity, will increasingly be used for irrigation and other purposes.
In its fiscal year 1995 repayment study, Western predicted that revenues from the sale of hydropower could decrease from about $253 million in 2001 to about $213 million (in constant 1995 dollars) in fiscal year 2080 for the Pick-Sloan Program. Under authorizing legislation, such as the Flood Control Act of 1944 and the various reclamation acts, the Bureau and the Corps enjoy some latitude in managing water for various purposes. These agencies’ role in arbitrating between multiple uses becomes especially visible during times of drought. For example, according to Western officials, during the drought of the late 1980s and early 1990s, water was increasingly assigned to irrigation. As a result, power generation suffered significantly in Western’s service area. The role of the Bureau and the Corps has also become increasingly important as population and economic growth have intensified the competition over how water is used. For example, competition for water is now emerging even in areas with abundant rainfall, such as the Southeast. For several years, Alabama, Florida, and Georgia have been contesting the uses of water on two river basins in the Southeast (the Alabama-Coosa-Tallapoosa and the Apalachicola-Chattahoochee-Flint) that are managed by the Corps. Georgia, which contains the headwaters of the waterways in question, needs increased water supplies to provide for the growing population of the Atlanta area, as well as for farming and industry. Florida is concerned about the effects of water levels on its barging industries. It is also concerned about upstream pollution because water from the Chattahoochee and other rivers flows into Apalachicola Bay—a rich source of shellfish and shrimp. Alabama is also concerned about the cumulative impacts of potential water resource actions. 
In the 1980s, the Corps, responding to requests from several Georgia communities for additional water withdrawals from reservoirs, planned to reallocate water away from generating hydropower to increase the water supply. In June 1990, Alabama sued the Corps, challenging the adequacy of documentation about the environmental impacts of those reallocations and the Corps’ procedures for operating its reservoirs. However, in January 1992, after Alabama put aside the lawsuit, the governors of the three states signed an agreement with the Corps to work together through a study to resolve their issues. This study is projected to be completed in December 1997. Postdivestiture Role of the Bureau and the Corps Would Depend on the Assets Divested The ability of the Bureau and the Corps to continue to balance the purposes of a water project after a divestiture would depend largely on the types of assets that were being sold. If only the PMA and its transmission assets were divested, then the Bureau and the Corps would continue to control how water is allocated, used, and released, because they would continue to own and operate the dams, the powerplants, and the reservoirs. According to Bureau, Corps, and PMA officials, the impact of such a divestiture on the operation of a water project and its multiple purposes would be manageable because the buyer would have to dispatch and market power subject to the Bureau’s and the Corps’ continued presence and decisions about water releases. However, the Bureau and the Corps would have to deal with a nonfederal entity with different incentives than the former PMA, which was a fellow government agency that understood the need to operate so as to meet multiple public purposes. If the PMA, its transmission assets, and the Bureau’s and the Corps’ hydropower plants were sold, then the Bureau and the Corps would retain ownership of the dams and the reservoirs and would continue to plan and manage the water. 
However, because water is released through both the spillways (which would continue under the Bureau’s or the Corps’ control) and the powerplant (which would be controlled by the nonfederal buyer), a nonfederal entity would have some measure of operational control over how and when water would be released. Bureau, Corps, and PMA officials explained that the operating agencies would have to be more vigilant than they have been when dealing with the buyer of the PMA and the powerplants. If the PMA, its transmission assets, the powerplants, the dams, and the reservoirs were sold, the Bureau and the Corps would no longer be responsible for managing how water is used and balancing the projects’ multiple purposes. As discussed later in this chapter, in this case a regulatory agency, such as FERC, would have to consider the projects’ purposes when licensing and regulating postdivestiture hydropower production and other activities at a divested project. FERC officials noted that nonfederal water projects licensed by the commission also have multiple purposes that must be accommodated. Irrigation Is a Unique Public Purpose That Would Significantly Affect Some Divestitures The irrigation function at federal water projects presents issues for divestitures that differ from the other project purposes. Specifically, as of September 30, 1995, power revenues were scheduled to pay for about $1.5 billion to recoup the federal capital investment for completed federal irrigation facilities. This amount is to be repaid for periods of up to 60 years for individual irrigation projects. Under current repayment practices, this debt is to be repaid without interest, and repayment of the debt can be deferred until the end of the repayment period. 
Moreover, according to the Bureau’s officials, because capital expenditures on irrigation facilities are expected to continue to increase for renovating and replacing existing facilities as well as constructing new ones, the total amount of “irrigation assistance” could also increase over time. However, most planned irrigation projects likely will not be completed because they are infeasible and not cost-effective. If Western, the related federal water projects, or irrigation projects within Western’s service area were sold to nonfederal entities, the issue of how this federal investment in irrigation would be repaid would have to be addressed. Hydropower is used at some federal projects within Western’s service area to power the pumps that move water from the reservoirs and the canals to the fields. In recent years, as much as about 30 percent of all electricity generated by the Bureau’s hydropower plants in California’s Central Valley Project (CVP) has been used for this “project pumping.” Moreover, at some federal irrigation projects, the rate that has been charged for “project pumping” electricity has been far below the rate that has been charged for commercial uses. According to Bureau officials, at the Eastern Division of the Pick-Sloan Program, the average rate for power sold in fiscal year 1995 was about 1.5 cents per kWh, while the rate for project pumping was only about 0.2 cents per kWh. Any divestiture would need to clarify whether the new owners would be required to provide power for irrigation below the rates paid by other customers. If the dams and the reservoirs were sold, then the government would have to negotiate arrangements to accommodate the use of water for irrigation. The Government’s Contractual Obligations Must Be Recognized As agencies of the federal government, the PMAs, the Bureau, and the Corps have entered into a wide range of legally binding contracts in conducting the generation, transmission, and marketing of hydropower. 
Until the specific terms of a divestiture proposal and the accompanying legislation are known, identifying possible complications that could delay or otherwise affect the sale will be difficult. However, even if the legislation establishes the transferability of these contractual obligations, stakeholders might be able to delay or complicate the divestiture process by filing lawsuits. Although we did not review the thousands of contracts and other agreements that could be affected by a divestiture, according to Bureau and PMA officials, some of the government’s current contracts do not address the transfer of the government’s contractual obligations after a divestiture. Historical precedent exists in the energy sector for addressing extensive and complex contractual obligations. For example, after FERC ordered the restructuring of the natural gas industry, thousands of new contracts were written. FERC Order 636, which was issued in 1992, required, among other things, that all interstate pipeline companies restructure their tariffs, services, rates, and contracts and separate or “unbundle” their gas transportation and storage arrangements. To conform to this order, gas pipeline companies negotiated about 3,800 new contracts with their customers. Selling Power to Preference Customers Is an Important Contractual Obligation One of the PMAs’ most important contractual obligations is selling power to their “preference customers.” The PMAs market hydropower on a wholesale basis at the lowest possible rates, consistent with sound business practices. The three PMAs in our study have contracts to sell power to over 990 customers at cost-of-service rates ranging from about 1.5 cents to about 2.0 cents per kWh. Although these rates may increase in the future, they are significantly lower than the average national wholesale rates of 3.4 cents per kWh for IOUs and about 4.0 cents for publicly owned generating utilities. Currently, the PMAs are renewing their power contracts. 
Western has extended its contracts in the Pick-Sloan Program through 2020 and is proposing 20-year extensions of power contracts at other projects. According to PMA officials, it is unclear whether a buyer of a PMA would have to continue selling power at low rates to preference customers and, if so, for how long. If a PMA were divested, its contractual obligations with its customers could be assigned in whole or in part to the buyer. Various Interconnection and Transmission Contracts and Agreements Tie PMAs Into Regional Grids In addition to power contracts, the PMAs have entered into interconnection, transmission, and right-of-way contracts and agreements that make them a vital part of regional power grids. For example, in addition to power contracts with 83 customers, Western’s office in Folsom, California, has numerous contracts and agreements for providing transmission and interconnection services, for buying power from utilities in the Pacific Northwest, for delivering power to irrigation projects via the transmission grid of another utility, and for acquiring rights-of-way and easements along transmission lines. Western has a key contract with the Pacific Gas and Electric Company (PG&E), first signed in 1967, that integrates the operations of the PMA and the company. Under this complex contract, Western provides peaking capability to PG&E in exchange for firm power services; PG&E also delivers 880 MW to Western’s preference customers. Moreover, to bring more power to its system, Western also owns part of the Pacific Northwest-Southwest Intertie and an interest in the California-Oregon Transmission Project, which allows Western to transmit power from the Bonneville Power Administration, PacifiCorp, and other utilities in the Pacific Northwest. 
Although Southeastern has no transmission assets, it has 17 contracts with regional utilities (including regional IOUs, state public power agencies, and electric cooperatives) to transmit power that is generated by hydropower plants the Corps operates. These contracts differ in the services provided, cancellation provisions, and customers served. Seven of the utilities, including the Tennessee Valley Authority (TVA), provide both transmission and ancillary services. These contracts are described in appendix IV. The Bureau and the Corps Also Have Many Contracts and Agreements Because of the number and complexity of their contracts and agreements, the Bureau and the Corps were unable to provide us with information related to every contract and agreement they have entered into at the offices we visited. However, Bureau and Corps officials provided us with information to illustrate the number and types of contracts and other agreements that would have to be assigned or terminated if a project’s dam and/or reservoir were divested. For instance, in Southeastern’s service area, the Corps has over 5,100 agreements for such things as easements for roads and utilities; leases for public parks, agriculture, and concessions; and licenses for fish and wildlife management. See appendix IV for a description of these contracts and agreements. Likewise, the Bureau’s Great Plains Region in Billings, Montana, has over 2,200 contracts and agreements, which include the Bureau’s 580 right-of-use permits for such things as agricultural leases and permits concerning buffers, buildings, crops, drainage, and weed control. See appendix V for a description of these contracts and agreements. Environmental Issues Would Impact the Government’s Ability to Divest Hydropower Assets According to FERC officials, concerns about environmental impacts have begun to affect the generation of hydropower. 
The uncertainties about the federal government’s future responsibilities in funding and implementing actions to mitigate environmental impacts would greatly affect the divestiture of any hydropower assets. Other types of generating capacity, including coal-fired and nuclear powerplants, have also faced environmental and related constraints that have required costly mitigations. Mitigating Environmental Damages Has Resulted in Forgone Power Revenues The desire to mitigate any potential negative effects of water projects on the environment, especially on the habitat of endangered and threatened species, is increasingly constraining the ability of the Bureau and the Corps, as well as nonfederal entities, to generate hydropower, especially during hours of peak demand. Because of these restrictions, the PMAs have forgone power revenues of millions of dollars since the late 1980s. In an example affecting Southeastern, the South Carolina Department of Wildlife and Marine Resources sued the Corps in 1988, alleging violations of the National Environmental Policy Act of 1969 at the Richard B. Russell Dam. The Russell project has eight hydropower units with a combined capacity of 600 MW—four conventional hydropower units (the last of which came into commercial operation in 1986) and four pumpback units (which have never been in commercial operation). The U.S. District Court for the District of South Carolina found that the Corps had violated the National Environmental Policy Act by failing to complete an environmental impact statement (EIS) and issued an injunction against the installation and operation of the pumpback units. However, the Court of Appeals for the Fourth Circuit partially reversed the district court and allowed the Corps to install the pumpback units but not operate them until another EIS had been completed. This supplemental EIS was completed and a settlement agreement was negotiated that allowed environmental testing. 
According to Southeastern, the PMA has lost power revenues of about $36.1 million per year since 1994 because the pumpback units have remained out of commercial operation. In another example that affects Western, the obligation to protect endangered species has had a significant impact on the CVP’s operations. Bureau officials said that, in response to the Endangered Species Act, the U.S. Fish and Wildlife Service listed the winter run of the Chinook salmon as endangered. According to these officials, to protect the needs of the salmon, the Bureau has restricted the use of the five hydropower units at the CVP’s Shasta powerplant. They added that since 1987 these restrictions have resulted in additional costs of about $50 million to purchase power to meet Western’s contractual obligations. According to officials from the Bureau, FERC, and the PMAs, as well as from environmentalist groups and trade associations, environmental restrictions on water usage to generate power will likely continue in the future. The effects will continue to include lost power revenues or, conversely, increased costs to procure alternative power supplies. For example, waterflow restrictions that are included under the preferred alternative of the final EIS of the Glen Canyon Dam could result in lost generating capacity of 442 MW in the winter and 470 MW in the summer. According to the Bureau, the cost to replace the lost capacity is about $44.2 million per year. The preferred alternative, also known as the “modified low fluctuating flow” alternative, features river flows that are substantially reduced from historic levels, including flows that vary for purposes of maintaining the habitat. The benefits of these modifications in managing water use include enhanced fish habitat and protection of endangered or listed species. 
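The Glen Canyon figures cited above can be translated into an implied cost per kilowatt of lost capacity. The following is an illustrative back-of-envelope sketch, not a calculation from the Bureau; averaging the winter and summer capacity losses is our simplifying assumption.

```python
# Back-of-envelope sketch using the Glen Canyon figures cited above.
# Assumption (ours, not the Bureau's): the lost capacity is approximated
# as the simple average of the winter (442 MW) and summer (470 MW) losses.
winter_loss_mw = 442
summer_loss_mw = 470
replacement_cost_per_year = 44.2e6  # dollars per year, per the Bureau

avg_lost_kw = (winter_loss_mw + summer_loss_mw) / 2 * 1000  # MW -> kW
implied_cost_per_kw_year = replacement_cost_per_year / avg_lost_kw
print(round(implied_cost_per_kw_year))  # prints 97, i.e., about $97 per kW-year
```

Under this simple averaging, the Bureau's replacement-cost estimate works out to roughly $97 per kilowatt-year of lost capacity.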
Current and Future Environmental Issues Would Affect the Ability of the Government to Divest Hydropower Assets Defining who would be responsible for mitigating the environmental impacts associated with federal water projects after a divestiture is a crucial issue that would have to be addressed when policymakers define the terms and conditions of the transaction. If only the PMA (including the transmission assets) and/or the federal powerplants were divested, then the government’s responsibilities would generally remain the same, unless specified otherwise in the divestiture legislation. If the government were to sell the dams and reservoirs, however, the responsibilities and costs of actions to mitigate environmental impacts would need to be allocated or reassigned. Moreover, with new and more comprehensive actions to mitigate environmental impacts, the uncertainty surrounding the availability of power would also need to be addressed. These actions frequently entail restrictions on releases of water to generate electricity or potentially significant, but unknown, future costs to mitigate environmental impacts. If the PMA and/or powerplants were divested, then uncertainty about the amount of power available for marketing could lower the price that buyers would be willing to pay or discourage some potential buyers from submitting bids. Likewise, if the dams and the reservoirs were divested, uncertainty about the amount of power that could be generated as well as uncertainty over the costs of future environmental mitigations could lower the bids or discourage some prospective buyers from bidding. In addition, the existence of more competitive electric markets would also affect the attractiveness of purchasing the federal hydropower assets. Alternatively, if the government were to assume some of the future liability for the costs of actions to mitigate environmental impacts, taxpayers could be forced to bear significant, but currently unknown, future costs. 
Moreover, according to officials of DOE’s PMA liaison office, because environmental laws could require an EIS, testing, and cleanup when federal property is sold, additional costs to sell the federal hydropower assets could be incurred. In addition, PMA and Bureau officials stated that, in some cases, actions to mitigate environmental impacts are ongoing and would have to be considered in a divestiture of certain federal hydropower facilities. The Rights and Concerns of Native Americans Would Affect a Divestiture Various rights and concerns of Native Americans would have to be addressed in a proposal to divest federal hydropower assets. These issues include (1) their water rights, (2) their claims to surplus federal property, (3) the need to address rights-of-way for PMA transmission lines across their lands, and (4) the government’s responsibilities under the Native American Graves Protection and Repatriation Act to safeguard their cultural artifacts. In addition, according to Western officials, the PMA is reserving some of its capacity for Native American tribal entities that are expected to become new preference customers. The rights of Native Americans to water must be considered in a divestiture. Several Native American tribal entities hold reserved water rights with senior priority dates (for example, from time immemorial or the 1850s or 1860s) on river systems with federal water projects. Many of these entities have reserved water rights that have yet to be quantified and have water uses that have yet to be determined. The amounts of water associated with these rights and the manner in which the rights are exercised would likely affect hydropower operations and the distribution of power revenues. For example, according to Bureau officials, one legal settlement with tribes of the Fort Peck Reservation, Montana, included rights to about 1 million acre-feet of water from the Missouri River. 
Other potential claims of Native Americans would affect a divestiture of PMAs and related hydropower assets. For example, under federal legislation, excess federal real property in Oklahoma is subject to transfer to the Secretary of the Interior in trust for Oklahoma Native American tribal entities. According to Southwestern officials, this legislation would complicate a divestiture, although the extent of potential claims by Native Americans under this legislation is difficult to determine because of the lack of information about the prior ownership of lands on which the federal assets are located. According to PMA officials, the PMAs have 880 miles of transmission lines located on rights-of-way that traverse the lands of Native American tribal entities. In the event of a divestiture of a PMA’s transmission assets, if the Native Americans agreed to a transfer of these rights-of-way to a buyer, they could expect compensation. Finally, under the Native American Graves Protection and Repatriation Act, certain Native American cultural artifacts found on federal locations must be returned to the relevant Native American tribal entity. Corps officials responsible for managing federal water projects from which Southwestern markets power explained that they have handled numerous cases involving this law in the past several years. Providing federal hydropower to Native Americans would also affect a divestiture. According to Bureau and Western officials, in part because the federal government has a trust responsibility with Native American tribal entities and because those entities are expected to become new preference customers, Western is entering into a process to reallocate the power it will sell to its current and future customers. For example, it is setting aside at least 4 percent of its existing hydropower capacity at the Pick-Sloan Program for Native Americans and other new customers. 
Western is also changing its rules concerning power reallocations to make it easier for Native American tribal entities to buy federal power. These obligations would complicate a divestiture because they would involve selling power to new preference customers and extending existing contracts—for example, for 20 years (until the year 2020) at the Pick-Sloan Program. Licensing and Regulating Divested Hydropower Assets Would Introduce Uncertainty Into the Divestiture Process Before a sale could be completed, the regulatory treatment of the divested hydropower assets would need to be addressed. While many options for regulating the operations of divested hydropower assets exist, including regulatory regimes that could be established by federal, state, or regional authorities, FERC currently licenses the operation of nonfederal hydropower assets. FERC officials believe that, with the proper resources, the Commission could license and regulate divested hydropower assets. They stated that the Bureau and the Corps have been able to accommodate emerging issues at federal water projects, such as environmental restrictions on water uses, with more flexibility than FERC’s quasi-judicial licensing process allows. They also stated that the Commission’s limited flexibility and the timing of its actions on licensing stem from the authority of other federal and state agencies to attach conditions to the license. Currently, FERC primarily regulates the reasonableness of wholesale rates charged by the PMAs and does not provide more detailed oversight of them and the Bureau’s and the Corps’ assets and operations. According to FERC officials, the extent of its regulation after a divestiture would depend upon the specific assets divested. 
A FERC operating license would not be needed if only the PMA’s assets (its right to market hydropower and, in the case of Southwestern and Western, also the transmission facilities) were divested because the operating agencies would continue to own and operate the powerplants. The operating agencies would continue to manage the water as in the past and the existing restrictions would likely remain in effect. The buyer would market the power subject to the same conditions as the former PMA—subject to the existing purposes of the water project. If a divestiture included the powerplants, the new owner would then be required to obtain a FERC operating license, unless the requirement for FERC’s licensing and regulatory activities were specifically exempted by legislation. Licensing a divested hydropower plant could take a long time; FERC’s licensing process averages 2.5 years but it has taken as long as 10 to 15 years. In granting an operating license for a hydropower plant, FERC is required to weigh the plant’s impact on such “nondevelopmental values” as the environment and recreation. The licensing action involves numerous studies, such as of the powerplant’s impact on fish, plant, and wildlife species; water use and quality; and any nearby cultural and archeological resources. Moreover, the government of each affected state would perform a water quality certification. In addition, to accommodate any “nondevelopmental values,” FERC could restrict the use of water for generating electricity, resulting in hydropower generating units that have been “derated”—that is, their generating capacity has been reduced. For example, according to studies by the Electric Power Research Institute, from 1984 to 1989, 16 hydropower plants that had been relicensed were actually derated, while 8 powerplants increased their capacity. 
FERC officials cautioned that if the powerplant, dam, and reservoir were sold, then FERC’s licensing process could revisit the management and uses of the water and possibly change the available electric-generating capacity. The uncertainty regarding the length of time to complete FERC’s licensing process as well as the amount of generating capacity after licensing is completed could reduce the number and amounts of bids for the resources. However, if the new owners of a hydropower plant were allowed to operate the plant without a FERC license, they would have a competitive advantage against other operators who are subject to FERC’s licensing requirements. A congressional bill introduced on July 23, 1996, contained provisions that would have provided an operating license with a 10-year term for divested hydropower assets. After that term, the owners would have been required to obtain a standard FERC license. In congressional testimony in 1995 regarding divestiture of the PMAs, the Chair of the FERC suggested that divestiture legislation specify an automatic grant of a 10-year operating license and require that the divested powerplants continue to operate according to the preexisting operating agreements. Following a divestiture, FERC would then subject the facility to the normal FERC licensing procedure. According to FERC officials, FERC would be able to regulate divested multipurpose federal hydropower assets because the Commission already has this responsibility for 1,000 nonfederal hydropower facilities. They said that most nonfederal hydropower plants have widespread impacts and multiple uses because their associated dams and reservoirs store water, thereby affecting water upstream, downstream, and across state lines. However, to handle numerous divestitures or complicated divestitures of federal hydropower assets, FERC would need to request congressional authority to add new personnel and resources. 
Effects of Divestiture on Wholesale Power Rates Would Vary Among PMAs’ Customers Precisely determining how the sale of the PMAs would affect the rates charged to customers is difficult. Some of the PMAs’ customers have expressed concerns that a divestiture of the PMAs could lead to significant rate increases, while some industry analysts have contended that rate increases would be small for most customers. However, some analysts believe that certain customers would be more likely to see larger rate increases than others. These customers are those who currently (1) buy a higher percentage of their total power from a PMA than others do, (2) pay rates for a PMA’s power that are significantly lower than the market rates in the region in which the PMA sells power, and (3) have few or no alternatives for buying power elsewhere at relatively low rates. According to PMA and industry officials, many of these customers are smaller ones located in geographically remote areas. Other factors, such as increasing competition in the wholesale market or mandated limits on rate increases, could mitigate the rate increases for these customers. The change in retail rates to end-users (i.e., residential, industrial, and commercial customers) would depend on how much rates increase for the preference customers that serve them. However, the extent to which preference customers pass these increases on to end-users could be affected or mitigated by such things as their ability to increase operating efficiency. Reliance on PMAs for Power Would Affect Which Preference Customers Experience the Greater Rate Increases According to some industry analysts, preference customers who buy a higher percentage of their power from the PMAs would be more likely to experience greater postdivestiture rate increases than those who buy a lower percentage. (Most PMA preference customers buy power from the PMA as well as from other sources, as shown in ch. 2.) 
For example, if a customer buys 90 percent of its power from the PMA and the buyer of that PMA increases the former PMA’s rates by 50 percent, the preference customer would see its overall rate for power from all sources increase by about 41 percent, if all other factors were held constant. In contrast, if a preference customer buys only 10 percent of its power from the PMA, it would see its overall rate for wholesale power from all sources increase by about 3 percent. Because preference customers differ in how much they use the PMAs for their power, they will not be affected equally by a divestiture. As we mentioned in chapter 2, almost all (99 percent) of Southeastern’s customers purchase less than one-quarter of their total power from that PMA. In contrast, Western provides over 40 percent of its preference customers with more than half of their power. Therefore, if other factors remained constant, we would expect Western’s customers generally to experience larger average rate increases than customers served by Southeastern. The Difference Between Prevailing Market Rates and Each PMA’s Rates Would Affect Rate Increases Some industry analysts believe that, after a divestiture, the buyer of a PMA would charge rates that conform to the prevailing market rate for wholesale power in the geographic region in which the PMA sells power. As discussed in chapter 2, these prevailing market rates are now significantly more than the rates the PMAs charge their customers. Thus, the difference between what a PMA currently charges its customers and the regional market rate could determine how much a buyer would increase its rates after the sale. The lower the PMA’s current rate (relative to the existing market rate), the greater the rate increase would be. However, the differences between a PMA’s rates and market rates for wholesale power vary across a PMA’s service area. 
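The blended-rate arithmetic behind the 90-percent example above can be sketched as follows. The report does not state what a preference customer pays for its non-PMA power; the reported figures work out if that alternative power costs roughly twice the PMA rate (1.7 versus 3.4 cents per kWh here), which is our illustrative assumption based on the rate comparisons in chapter 2.

```python
# Sketch of the blended-rate arithmetic in the example above.
# Assumed rates (illustrative, not from the report): PMA power at
# 1.7 cents/kWh, non-PMA power at 3.4 cents/kWh.

def blended_rate_increase(pma_share, pma_rate, other_rate, pma_rate_hike):
    """Percent increase in a customer's overall wholesale power cost
    when the PMA rate rises by pma_rate_hike (0.50 means 50 percent)."""
    before = pma_share * pma_rate + (1 - pma_share) * other_rate
    after = pma_share * pma_rate * (1 + pma_rate_hike) + (1 - pma_share) * other_rate
    return 100 * (after - before) / before

# Customer buying 90 percent of its power from the PMA, 50 percent rate hike:
print(round(blended_rate_increase(0.90, 1.7, 3.4, 0.50), 1))  # prints 40.9
# Customer buying only 10 percent of its power from the PMA:
print(round(blended_rate_increase(0.10, 1.7, 3.4, 0.50), 1))  # prints 2.6
```

Under these assumed rates the two cases reproduce the report’s “about 41 percent” and “about 3 percent” figures. The rate gap matters as well as the reliance share: a customer whose alternative power cost the same as the PMA’s would instead see a 45 percent overall increase in the 90-percent case.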
For example, according to our previously cited September 1996 report, the difference between the average wholesale market rates of IOUs and Southwestern’s rates varies across its service area. In one part of Southwestern’s service area, its rates were 1.18 cents per kWh less than the average wholesale rates of IOUs, while in another part of Southwestern’s service area, its rates were 3 cents per kWh less than the average wholesale rates of IOUs. As a result, preference customers in different regions would experience different rate increases in the event of a divestiture. Access to Alternate Power Suppliers Would Affect Rate Increases Those geographically remote preference customers that would not have access to many alternate suppliers of electricity after a divestiture would be the most susceptible to rate increases that would exceed competitive market rates. Conversely, if a preference customer could purchase power at competitive rates from other sources, the buyer of a PMA would be less likely to raise its rates. Representatives of the Edison Electric Institute maintain that because the wholesale market is competitive, very few preference customers will lack access to alternate suppliers following a divestiture. They believe that, after a PMA is divested, preference customers who relied heavily on that PMA will be able to buy power from independent power producers, energy brokers, or energy marketers at a relatively low cost. In addition, they contend that many municipal and cooperative utilities already are competitive participants in the wholesale market. However, representatives of PMAs and their preference customers believe that having access to alternate supplies of electricity is not enough. They note that even in cases where preference customers may buy most of their electricity from alternate sources, these customers often rely on the PMA for power during hours of peak demand, particularly in regions in which Southeastern and Southwestern sell power. 
Having access to inexpensive power during times of peak demand is important to these customers because typically power sold to meet this demand is more expensive than power sold at other times. Finally, the ongoing deregulation and restructuring of the electric utility industry contributes to the difficulty of assessing the potential impacts of a divestiture. Wholesale electric markets are becoming increasingly competitive, offering preference customers and other utilities the opportunity to buy from more than one supplier of wholesale power. This trend creates additional uncertainty about any potential rate impacts from a divestiture. Changes in Wholesale Rates Would Primarily Determine the Retail Rates Following a divestiture, the retail rates paid by residential, commercial, and industrial consumers would reflect the changes in rates experienced by the preference customers who serve them. For example, retail customers served by preference customers who buy most of their power from the PMA may see significantly higher rate increases than retail customers who buy their power from preference customers that buy a smaller percentage of their total power from the divested PMA. However, in many cases, determining how preference customers would change the retail rates after a sale of federal hydropower assets would be difficult. For example, in competitive markets, some preference customers may be able to avoid passing on increased costs to their retail customers by increasing their operational efficiency. Alternatively, preference customers may choose to reallocate these rate increases from one customer class to another—for example, from industrial end-users to residential end-users—to keep operating costs low at industrial facilities. 
Changes in Wholesale Power Rates and Water Allocations Would Determine the Regional Economic Impact The degree to which a regional economy would be affected by the divestiture of a PMA would depend mostly on several factors—the regional economy’s reliance on that PMA’s power, the amount of change in overall retail electric rates, the importance of electricity in the regional economy, and the extent to which water allocations from the former federal water projects would be changed. The limited studies available have shown that the economic impacts of a PMA rate change on industrial and residential customers would be minor because preference customers have relied on power from PMAs for only a small portion of their total power and electricity has been a relatively small portion of the cost of doing business for most commercial enterprises and industries, as well as a small portion of household expenditures. But regional economies that rely on such electricity-intensive industries as primary metals and chemicals would suffer the greatest economic harm from any rate increases after a divestiture. According to officials of the Electricity Consumers Resource Council (ELCON), the cost of electricity for such industries as aluminum smelters, glass, and chemicals can reach from 30 percent to 40 percent of production costs. For example, in response to TVA’s double-digit rate increases of the 1970s, industries in its service area ceased their operations and in some cases relocated to where electrical rates were lower. TVA’s annual sales to industrial customers declined from about 25 billion kWh in 1979 to 16 billion kWh in 1993. Regional economies that rely heavily on water and water-dependent industries (e.g., in which farming relies extensively on irrigation) would also be affected by changes in water allocations after a divestiture. 
Depending on the terms of the preexisting contracts and the divestiture legislation, if the dam and reservoir were divested, then the purposes served by the federal water projects and associated water allocations could change. For example, FERC’s operating license could include, subject to existing laws, a condition that more water be used for environmental purposes and less for hydropower.
Pursuant to a congressional request, GAO provided information on: (1) profiles of three power marketing administrations, including their similarities and differences and how they interact with the agencies that operate federal water projects; (2) general parameters of the process by which federally owned assets can be sold; and (3) factors that would have to be addressed in a divestiture of federal hydroelectric assets, such as the relationship between power generation and the other purposes of federal water projects. GAO noted that: (1) the Southeastern, Southwestern, and Western Power Administrations all market the hydropower generated at federal water projects, but they serve different geographical areas and have different assets; (2) their customers vary in size and in their electric energy purchases; (3) PMAs are not the main source of electricity for most of their customers--in total the three PMAs in GAO's report supply about 7 percent of the electricity requirements of their customers; (4) the PMAs have a close working relationship with the Bureau of Reclamation and the Army Corps of Engineers--these interactions are based in part on written agreements and on flexible arrangements that recognize the operating agencies' role in managing water releases in a way that balances a project's multiple purposes. GAO also noted that: (1) two principal objectives have typically been cited by other nations and by the United States for selling government assets: (a) eliminating or reducing the government's presence in an activity that some view as best done by the private sector; and (b) improving the government's fiscal situation; and (2) these two objectives will affect many subsequent decisions needed to implement a sale, including: decisions about such concerns as what specific assets to sell, how to group these assets, what conditions and liabilities to transfer to the buyer, and what sales mechanism to employ. 
Finally, if, based on a broad policy evaluation of the pros and cons of privatization, a decision to divest federal hydropower assets is reached, several key issues specifically related to hydropower would need to be addressed, including: (1) balancing how water is used among the multiple purposes of federal water projects; (2) determining how to repay or otherwise address the federal capital investment in irrigation facilities of the affected projects; (3) assigning the numerous contractual obligations and liabilities of the Bureau, the Corps, and the PMAs; (4) handling Native Americans' claims to water, property, and tribal artifacts; (5) determining the future responsibility for protecting the environment and endangered species--a commitment that already constrains the operations of many projects; (6) deciding the future regulatory treatment of divested hydropower assets; and (7) assessing the potential effects of a divestiture on wholesale and retail electric rates and on regional economies. These effects would be determined, to a large degree, by the prevailing wholesale electric rates of the local utilities in the region in which power from the PMA is sold, the region's reliance on this power, and the availability of other sources of power.
Background

Although the IED was not a new threat when first encountered during operations in Iraq and Afghanistan, U.S. forces, according to DOD officials, were not initially concerned with the IED as a “weapon of choice” until IED attacks began to increase in Iraq at the end of major combat operations. Terrorist and insurgent groups facing overwhelming conventional forces had previously used IEDs in a variety of scenarios, including the 1983 Marine barracks bombing in Beirut, the ship-borne attack against the USS Cole in 2000, and the airborne attacks of September 11, 2001. In a 2006 report examining the requirements for truck armor, we stated that the Army had identified the IED as a threat to U.S. forces prior to the beginning of operations in Iraq. Following the end of major combat in Iraq in 2003, insurgents began to rapidly adjust their tactics in response to the overwhelming firepower and accuracy of U.S. and coalition military forces in conventional warfare. As U.S. forces began to respond to this asymmetric threat, a new tactic emerged as the preferred enemy form of fire: the IED. Beginning in June 2003, IED incidents targeting coalition forces escalated from 22 per month to over 600 per month in June 2004. In June 2006, these incidents reached more than 2,000 per month. At one point in 2006, coalition forces in Iraq were experiencing almost 100 IEDs per day. The initial IED attacks in Iraq used nonconventional tactics, techniques, and procedures on a scale U.S. forces had not seen before. This threat involved an enemy that takes advantage of and adapts to the environment and is not restricted by conventional rules of engagement. For example, insurgents began using tactics such as buried or camouflaged roadside bombs, vehicle-borne IEDs (car bombs), and suicide bombers to attack coalition forces. Not only was the enemy flexible, but these insurgents also had the ability to rapidly respond to countermeasures.
Due to the magnitude of the threat and the previously mentioned changes in enemy tactics, techniques, and procedures, DOD identified several counter-IED gaps:

- technology gaps: a shortage of jammers, robots, and other technology, almost none of which was geared toward homemade roadside bombs;
- personnel gaps: a lack of qualified personnel to analyze the threat and to collect and distribute intelligence, forensic evidence, the latest tactics, techniques, and procedures, and other data;
- training gaps: training on the latest tactics, techniques, and procedures was not available, equipment was often supplied without training or instructions, and jammers interfered with communications equipment;
- funding gaps: little to no dedicated funding for counter-IED efforts; and
- DOD acquisition process gaps: no process for rapidly developing and fielding new equipment.

DOD’s Efforts to Address Counter-IED Capability Gaps Culminated in the Creation of JIEDDO

DOD’s efforts to address counter-IED gaps culminated in the creation of JIEDDO. Initially, many different DOD entities began focusing on counter-IED issues in an effort to address capability gaps. JIEDDO emerged through a series of attempts to focus counter-IED efforts, but its development did not follow a formal process. In recognition of the lack of official guidance for planning joint activities, the Office of the Secretary of Defense (OSD) is developing a formal process for establishing future joint organizations. Despite steps taken to focus DOD’s counter-IED efforts, most of the organizations engaged in the IED defeat effort prior to JIEDDO continue to develop, maintain, and in many cases expand their own IED defeat capabilities.
Many Different DOD Entities Began Focusing on Counter-IED Issues in an Effort to Address Capability Gaps

As IED attacks in Iraq reached nearly 300 per month by October 2003 and over 400 per month by May 2004, many different DOD entities at the service and joint levels began focusing on addressing capability gaps in the areas of counter-IED technologies, qualified personnel with expertise in counter-IED tactics, training, dedicated funding, and expedited acquisition processes. Many of these efforts were carried out by the Army and Marine Corps, in addition to a number of joint and interagency efforts. Army officials stated that within the Army, individual units throughout Iraq began to focus on counter-IED efforts as IED incidents increased in Iraq. Army units developed their own counter-IED tactics, techniques, and procedures as insurgent tactics evolved, and soldiers began using an increasingly wide range of electronic jammers in varying configurations to counter remote-detonated IEDs. According to Army officials, the Army also employed Explosive Ordnance Disposal technicians to disable and dispose of suspected IEDs. Army officials stated that these personnel began relying on remote-controlled robots as the number of IED incidents and the level of complexity of the devices increased, and Explosive Ordnance Disposal technicians were initially among the few personnel with counterexplosives training in-theater. To support these initial efforts, according to Army officials, the Army relied on the Rapid Equipping Force to quickly acquire counter-IED technology such as jammers and robots. This organization was established in 2002 to identify and pursue off-the-shelf or near-term materiel solutions that could be acquired and fielded quickly without having to rely on the Army’s normally lengthy acquisition processes, Army officials stated.
The Operational Needs Statement, a process that enables commanders to request a materiel solution for an urgent need, was another method of rapidly acquiring technology solutions. Marine Corps officials stated that initial Marine Corps counter-IED efforts were centered on the Marine Corps Combat Development Command, which was responsible for managing materiel requests, known as Universal Needs Statements, from deployed personnel. This organization began to receive a larger number of counter-IED-related requests as the IED threat escalated, according to Marine Corps officials, increasing from 2 in 2002 to 8 in 2003 and 26 in 2004. Overall, 13 percent of all requests during this 3-year period were counter-IED-related. According to Marine Corps officials, in response, the organization established a counter-IED cell in 2004 to focus exclusively on counter-IED-related requests. The cell was later transferred to the Marine Corps Warfighting Laboratory and expanded to include personnel with more specialized technical expertise. Marine Corps officials stated that the Urgent Universal Needs Statement was developed during this period as a means of providing commanders with an expedited process for requesting critically needed capabilities, including counter-IED solutions. Through this process, Marine Corps officials stated, they have been able to develop and field equipment in a significantly shorter time frame than through the normal acquisition processes, sometimes within several weeks. At the joint and interagency level, a variety of organizations were engaged in intelligence support and counter-IED technology acquisition. Early joint efforts included the Combined Explosives Exploitation Cell, which was established by the Army in 2003 to perform physical, biometric, and tactical exploitation of evidence from IED attack scenes.
Staffed by a combination of Army, law enforcement, and intelligence personnel, the organization provided Army, Marine Corps, and Special Forces units with in-theater analyses of IED construction techniques and enemy tactics, techniques, and procedures, and also collected biometric data, such as fingerprints, in an effort to identify specific bomb makers. While the organization often collected evidence from IED attack scenes itself, it also collaborated with Explosive Ordnance Disposal teams and drew on data provided by these teams in its analyses. Since 2004, the Naval Explosive Ordnance Disposal Technology Division has served as the administrative sponsor and primary source of technical and engineering support for the organization. According to an Army official, the Technical Support Working Group was involved in developing counter-IED technology solutions as part of the Combating Terrorism Directorate of the Joint Staff Operations Center, which in turn was responsible for counter-terrorism force protection efforts, including counter-IED efforts. The Terrorist Explosive Device Analytical Center was established in 2003 to leverage law enforcement, intelligence community, and military capabilities to perform technical and forensic analyses on recovered IED components in the United States and provide actionable intelligence to field personnel. Army officials told us that, in contrast to the Combined Explosives Exploitation Cell, it focused on higher-level strategic issues rather than tactical ones, and included personnel from the Federal Bureau of Investigation, the Bureau of Alcohol, Tobacco, and Firearms, DOD, and the intelligence community. Army and Marine Corps officials stated that communication and cooperation among these various efforts lacked overall coordination, with multiple entities independently engaged in attempts to address various facets of the larger IED problem.
Although the Army had direct experience with IEDs due to its presence in Iraq, according to an Army official, it had no single coordinator for even its own IED defeat efforts. For example, Army officials stated that coordination of counter-IED efforts between Army units occurred in-theater at the working level as personnel facing similar enemy tactics exchanged successful tactics, techniques, and procedures, but little synchronization of Army-wide efforts was occurring. Army and Marine Corps officials stated that some coordination between the Army and the Marine Corps took place, for example, on the use of the different electronic jamming systems used by each service, but communication was carried out on an ad hoc basis and generally occurred only in Iraq as commanders from both services attempted to overcome IED-related threats. At the interagency level, counterterrorism force protection efforts, including the Technical Support Working Group, were coordinated by the Combating Terrorism Directorate of the Joint Staff Operations Center. However, an Army official stated that this organization was left with limited capabilities by 2003 as many of its resources had been reallocated to the Department of Homeland Security after the attacks of September 11, 2001.

JIEDDO Evolved through a Series of Attempts to Focus Counter-IED Efforts

As IED attacks increased following the invasion of Iraq, JIEDDO evolved through a series of attempts to focus counter-IED efforts. Figure 1 illustrates JIEDDO’s evolution as the IED threat increased from 2003 to 2007. All of the actions noted in the above timeline were attempts to coordinate counter-IED efforts and provide funding commensurate with the increased scale of the effort. For example, in late 2003, recognizing the need for closer coordination and greater focus on its counter-IED efforts, the Army took the initial steps toward what would later become a joint-level organization with the establishment of the Army IED Task Force.
According to an Army official, the IED Task Force consisted of a coordinating cell in Washington and two field teams in Iraq, and was largely focused on operational and training efforts in an attempt to address both the lack of personnel in-theater with counter-IED training and the need for better training on effective tactics, techniques, and procedures. The field teams, including former Special Forces personnel, developed effective tactics, techniques, and procedures, which they then relayed to the Center for Army Lessons Learned. An IED cell was established at the center to analyze these practices and incorporate lessons learned in the training of troops deploying to Iraq, while an Army official stated that the coordinating cell in Washington provided leadership and facilitated communication between the field teams, field commanders, and the center. Army officials also stated that the IED Task Force fielded a limited amount of counter-IED-related technology in cooperation with the Rapid Equipping Force, including more sophisticated jamming equipment, vehicle armor, and Explosive Ordnance Disposal robots. However, an Army official stated that with an initial budget of $20 million and no formal authority to coordinate counter-IED efforts outside of the Army, the IED Task Force lacked both the funding and authority to undertake a large-scale, departmentwide effort. In 2004, senior leaders began to believe that greater emphasis should be placed on developing a technology solution rather than focusing on training as well as tactics, techniques, and procedures, according to Army officials. In June 2004, the commander of CENTCOM wrote a memorandum to the Deputy Secretary of Defense requesting a Manhattan Project-like effort to find a technical solution to the IED problem. In response to the CENTCOM memorandum, the Deputy Secretary of Defense created a Joint Integrated Process Team in July 2004.
This team was intended to identify, prioritize, and resource materiel and nonmateriel solutions, and the Army IED Task Force was elevated to the joint level and renamed the Joint IED Defeat Task Force, with a budget of $100 million. In an attempt to enhance visibility over all DOD initiatives and to further focus the counter-IED effort, in June 2005, DOD Directive 2000.19 elevated the Joint IED Defeat Task Force to report directly to the Deputy Secretary, gave it a budget of over $1.3 billion, and clarified its role as the focal point for all efforts in DOD to defeat IEDs. The Joint Integrated Process Team was transformed into an advisory group to the Joint IED Defeat Task Force’s director, and a retired four-star general was recruited to head the Task Force in an effort to raise its profile among other senior DOD leaders, according to a former senior DOD official. However, a DOD official stated that by 2006, the Joint IED Defeat Task Force had begun to encounter difficulties attracting and retaining qualified personnel due to its temporary status. In late 2005, the Office of the Deputy Secretary of Defense began working with the Joint Staff and the Director of Administration and Management to give the task force more permanence and to provide more manpower continuity. Several solutions were proposed by the Director of Administration and Management, including placing the task force within the Joint Forces Command, making it a Staff Element within OSD, or creating a Jointly Manned Entity under OSD. Consequently, in February 2006, DOD Directive 2000.19E turned the joint task force into a permanent joint entity and jointly manned activity of DOD—JIEDDO—with a budget of nearly $3.7 billion, intended to provide the institutional stability necessary to attract and retain qualified personnel, according to a DOD official.
These various actions that led to the development of JIEDDO took place in the absence of formal DOD guidance for establishing joint organizations. According to a former DOD official, JIEDDO developed largely through informal communication among key individuals in various services and agencies. For example, a former DOD official stated that after the establishment of the Army IED Task Force, the Secretary of the Navy became aware of its work and began meeting regularly with its director, and these meetings eventually led to the idea of elevating the IED Task Force to the joint level. Furthermore, DOD did not systematically evaluate all preexisting counter-IED resources in order to determine whether other entities were engaged in similar efforts within DOD, according to DOD officials. Although the Technical Support Working Group, for example, was already in existence at the time of the establishment of the Army IED Task Force in 2003, an Army official stated that Army officials were under pressure to find an immediate solution and believed that creating a new working group or task force would be the most efficient approach to overcoming the IED problem. In addition, an Army official stated that existing organizations, such as the Technical Support Working Group, were considered too focused on technology solutions. Yet even after CENTCOM’s 2004 memorandum requesting a technology solution, as noted above, a DOD official stated that the possibility of using preexisting counter-IED resources rather than creating a new organization was not considered when the decision to establish JIEDDO was made. Furthermore, according to a DOD official, the Director of Administration and Management was not tasked to evaluate potential organizational solutions until after the decision to establish a permanent organization had already been made.
OSD Is Developing a Formal Process for Establishing Future Joint Organizations

In recognition of the increasing number of joint activities and the lack of official guidance for planning them, the 2006 Quadrennial Defense Review called for the development of a formal process for establishing joint organizations in the future. In response, the Director of Administration and Management is currently developing a Joint Task Assignment Process with the goal of ensuring that future joint activities have the appropriate authorities, responsibilities, resources, and performance expectations to carry out their missions. This process will consist of four stages, during which preexisting resources and capabilities will be fully evaluated, the optimal organizational solution will be determined, and all stakeholders will be identified and included in the process. OSD officials stated that although the ultimate solution may range from a Memorandum of Agreement between two existing organizations to a new defense agency, creation of a new organization will be considered only if no existing organizations are determined to be capable of fulfilling the mission’s goals. According to OSD officials, the process will be implemented through a formal DOD directive and instruction, and all new joint activities will be required to go through the process before being established. Although development of the process is still ongoing, DOD officials stated that implementation will likely take place in late 2009.

Many Efforts to Address the IED Threat Have Continued after the Creation of JIEDDO

Despite these steps taken to focus DOD’s counter-IED efforts, many of the organizations engaged in the IED defeat effort prior to JIEDDO continue to develop, maintain, and expand their own IED defeat capabilities.
For example, the Army continues to address the IED threat through such organizations as the Army Asymmetric Warfare Office, established in 2006, which coordinates Army responses to asymmetric threats such as IEDs. The Army’s Training and Doctrine Command provides training support and doctrinal formation for counter-IED activities, and the Research, Development & Engineering Command conducts counter-IED technology assessments and studies for Army leadership. Furthermore, an Army official stated that the Center for Army Lessons Learned continues to maintain an IED cell to collect and analyze counter-IED information. Similarly, the Marine Corps continues to address the IED threat through the Marine Corps Warfighting Laboratory, whose Global War on Terror Operations Division is the focal point for all Marine Corps IED countermeasures. DOD officials also stated that the Marine Corps Combat Development Command, the Training and Education Command, and the Marine Corps Center for Lessons Learned have all continued counter-IED efforts beyond the creation of JIEDDO. According to DOD officials, at the joint level, CENTCOM maintains its own counter-IED task force as part of the Interagency Action Group, while Joint Forces Command continues to support counter-IED training and maintain involvement with counter-IED doctrine development. At the interagency level, the Technical Support Working Group continues its research and development of counter-IED technologies.

JIEDDO and the Services Lack Full Visibility over Counter-IED Initiatives throughout DOD

JIEDDO has taken steps to improve visibility over its counter-IED efforts by, for example, involving the services in the joint counter-IED acquisition process and hosting DOD counter-IED conferences. However, JIEDDO and the services have limited visibility over all counter-IED initiatives throughout DOD in that there is no comprehensive database of all existing counter-IED initiatives.
In addition, the services lack visibility over some JIEDDO-funded initiatives that bypass JIEDDO’s acquisition process.

JIEDDO and the Services Have Taken Steps to Improve Visibility over Their Counter-IED Efforts

Since JIEDDO’s establishment, JIEDDO and the services have taken steps to improve visibility over their counter-IED efforts. For example, JIEDDO, the services, and several other DOD organizations compile some information on the wide range of IED defeat initiatives existing throughout DOD. JIEDDO also promotes visibility by giving representatives from the Army Asymmetric Warfare Office’s Adaptive Networks, Threats and Solutions Division, and the Marine Corps Warfighting Lab, the opportunity to assist in the evaluation of IED defeat initiative proposals. Additionally, JIEDDO maintains a network of liaison officers to facilitate counter-IED information sharing throughout DOD. It also hosts a semiannual conference covering counter-IED topics such as agency roles and responsibilities, key issues, and current challenges. JIEDDO also hosts a technology outreach conference with industry, academia, and other DOD components to discuss the latest requirements and trends in the counter-IED effort. Lastly, the services provide some visibility over their own counter-IED initiatives by submitting information to JIEDDO for its quarterly reports to Congress.

No Comprehensive IED Defeat Initiative Database Exists throughout DOD

JIEDDO and the services have limited visibility over all counter-IED initiatives throughout DOD in that there is no comprehensive database of all existing counter-IED initiatives. Tasked with leading, advocating, and coordinating all DOD actions to defeat IEDs, JIEDDO is also required by its directive to (1) integrate all IED defeat solutions throughout DOD and (2) maintain the current status of program execution, operational fielding, and performance of approved joint IED defeat initiatives.
Another document, JIEDDO’s internal standard operating procedure, requires it to maintain visibility and awareness of all counter-IED initiatives. Despite these requirements, JIEDDO does not maintain a comprehensive database of all IED defeat initiatives existing throughout DOD, even though DOD spent at least $1.49 billion in fiscal years 2007 and 2008 on counter-IED activities outside of JIEDDO. In a previous report, we recommended that JIEDDO develop a database to capture all DOD counter-IED initiatives. In its response to our report, JIEDDO acknowledged the need for such a database and cited ongoing work in partnership with the Director, Defense Research and Engineering, to develop one. JIEDDO is currently developing a management system that will track its initiatives as they move through JIEDDO’s acquisition process. However, this system will only track JIEDDO-funded initiatives—not those being independently developed and procured by the services and other DOD components. Without incorporating service and other DOD components’ counter-IED initiatives, JIEDDO’s efforts to develop a counter-IED initiative database will not capture all initiatives throughout DOD. Though they are required by DOD directive to ensure that JIEDDO maintains visibility over their IED defeat initiatives, the services do not have a central source of information for their own counter-IED efforts. DOD officials stated that there is currently no requirement for each service to develop a comprehensive database of all of its counter-IED initiatives. Without centralized counter-IED initiative databases, the services are limited in their ability to provide JIEDDO with a timely and comprehensive summary of all their existing initiatives. For example, the U.S.
Army Research and Development and Engineering Command’s Counter-IED Task Force and the service counter-IED focal points—the Army Asymmetric Warfare Office’s Adaptive Networks, Threats and Solutions Division, and the Marine Corps Warfighting Lab—maintain databases of counter-IED initiatives, but, according to Army and Marine Corps officials, these databases do not comprehensively cover all efforts within their respective services. Additionally, of these three databases, only the U.S. Army Research and Development and Engineering Command’s database is available for external use. Since the services are able to act independently to develop and procure their own counter-IED solutions, several service and joint officials told us that a centralized counter-IED database would be of great benefit in coordinating and managing DOD’s counter-IED programs. Two other DOD components maintain counter-IED initiative information repositories, but these also do not comprehensively cover all counter-IED efforts within DOD. At the combatant command level, CENTCOM maintains a Web-based information management system to track incoming requirements from its area of responsibility, but the system does not capture or list all available counter-IED technologies. Additionally, DOD’s Combating Terrorism Technology Support Office’s Technical Support Working Group maintains an information management system that tracks counter-IED technologies resulting from industry responses to broad agency announcements. However, this system is not searchable by other agencies, nor does it cover all initiatives being pursued across DOD.

The Services Lack Visibility over Some JIEDDO-Funded Initiatives

The services lack full visibility over those JIEDDO-funded initiatives that bypass JIEDDO’s acquisition process.
In this process, JIEDDO brings in representatives from the service counter-IED focal points to participate on several boards to evaluate counter-IED initiatives, such as the JIEDD Requirements, Resources, and Acquisition Board, and the Joint IED Defeat Integrated Process Team. However, even with these boards, JIEDDO has approved some counter-IED initiatives without vetting them through the appropriate service counter-IED focal points because the process allows JIEDDO to make exceptions if deemed necessary and appropriate. Specifically, the process allows the Director of JIEDDO’s counter-IED training center to make exceptions when training requirements and training support activities need to be accelerated to meet predeployment training requirements. For example, at least three counter-IED training initiatives sponsored by JIEDDO’s counter-IED joint training center were not vetted through the Army counter-IED focal point before being approved for JIEDDO funding. These initiatives included a $9.5 million upgrade to counter-IED training areas, a $19.1 million search rehearsal site to replicate conditions in Iraq, and a $1.5 million initiative to augment the number of personnel trained on IED signal jamming at an Army training center. In addition to not having visibility over these initiatives, Army officials later rejected the transition or transfer from JIEDDO of each of these initiatives for fiscal year 2011. In particular, Army officials rejected the search rehearsal site and signal jamming personnel augmentation initiatives because the Army had already been pursuing similar efforts. JIEDDO officials acknowledged that while it may be beneficial for some JIEDDO-funded initiatives to bypass its acquisition process in cases where an urgent requirement with limited time to field is identified, these cases do limit service visibility over all JIEDDO-funded initiatives. 
Army officials also cited examples where JIEDDO allowed certain science and technology initiatives with high technology readiness levels to bypass the first stages of JIEDDO’s process to select initiatives. Officials from the Army’s Adaptive Networks, Threats, and Solutions Division stated that this step limits the Army’s visibility over JIEDDO’s funding decisions. They cited six initiatives that bypassed JIEDDO’s acquisition process, including one designed to predetonate IEDs. While this method may shorten the time required for procurement, it denies the service counter-IED representatives at JIEDDO’s initiative vetting boards the opportunity to review the initiatives. JIEDDO also has bypassed its acquisition process by working directly with individual service units and organizations to address specific counter-IED capability gaps. For example, JIEDDO worked directly with the Army’s Training and Doctrine Command to establish the Joint Training Counter-IED Operations and Integration Center without input from the Army’s Adaptive Networks, Threats, and Solutions Division. As a result, the Army counter-IED focal point was initially unaware of the initiative and expressed confusion about how the initiative would be integrated into the Army’s overall counter-IED effort. Additionally, this training center did not stem from a theater-based urgent need. Furthermore, Army officials voiced concerns about the implications of assigning a service responsibility for what is essentially a joint training function. Additionally, officials with the Marine Corps Warfighting Lab described the coordination and accountability challenges involved when JIEDDO’s counter-IED training center works directly with a Marine unit to deliver counter-IED equipment, making it difficult for the Marine counter-IED focal point to monitor counter-IED activity and quantify the amount of funding it receives from JIEDDO.
Overall, service officials have said that not incorporating their views on initiatives limits their visibility of JIEDDO actions and could result in approved initiatives that are inconsistent with service needs. This lack of visibility also creates the potential for duplication of effort across the services and other DOD organizations. JIEDDO Faces Difficulties with Transitioning Joint IED Defeat Initiatives to the Military Services Since its creation, JIEDDO has taken steps to support the services’ and defense agencies’ ability to program and fund counter-IED initiatives approved for transition following JIEDDO’s 2-year transition timeline. According to DOD’s Directive, JIEDDO is required to develop plans for transitioning joint IED defeat initiatives into DOD base budget programs for sustainment and further integration. However, JIEDDO’s initiative transitions to the services are hindered by funding gaps between JIEDDO’s transition timeline and DOD’s base budget cycle as well as by instances when service requirements are not fully considered during the development and integration of jointly-funded counter-IED initiatives. JIEDDO Has Taken Steps to Guide the Transition of Joint IED Defeat Initiatives to the Military Services JIEDDO has taken steps to support the services’ and defense agencies’ ability to program and fund counter-IED initiatives approved for transition following JIEDDO’s 2-year transition timeline. For example, in November 2007, JIEDDO developed an instruction with detailed guidance to formally document, clarify, and improve procedures for transitioning JIEDDO-funded initiatives. JIEDDO has also taken steps to keep the services informed of the status of upcoming initiative transitions. For example, it holds a transition working group to provide the services and other agencies with notification of upcoming initiative transitions. 
JIEDDO also gives transition briefings to several boards and councils throughout DOD to facilitate the transition of joint IED defeat initiatives. It gives a quarterly briefing to the Joint Staff’s Protection Functional Capabilities Board, a permanently established body responsible for the organization, analysis, and prioritization of joint warfighting capabilities within the protection functional area. To ensure coordination of transition recommendations, JIEDDO also provides annual briefings to the Joint Capabilities Board and the Joint Requirements Oversight Council. JIEDDO also annually updates the Deputy Secretary of Defense’s Senior Resource Steering Group on the transition of initiatives valued greater than $25 million. JIEDDO’s Initiative Transitions Are Hindered by Funding Gaps within the Services’ Budgets JIEDDO and the services still have difficulty resolving the gap between JIEDDO’s transition timeline and DOD’s base budget cycle, causing DOD to rely on service overseas contingency operations funding to sustain jointly-funded counter-IED initiatives. In our 2008 report, we recommended that DOD develop a more effective process to ensure funds designated for sustainment costs are included in its budget cycle. However, DOD still lacks a comprehensive plan to ensure that the services have the proper funding to sustain an initiative following a transition. According to DOD’s Directive, JIEDDO is required to develop plans for transitioning joint IED defeat initiatives into DOD base budget programs for sustainment. As described in its instruction, JIEDDO plans to fund initiatives for 2 fiscal years of sustainment. After that, the initiative is supposed to be either disposed of or passed to one of the services for its continued sustainment through a transition or transfer. In a transition, one of the services is expected to pick up sustainment costs for an initiative by placing it into a base budget program as an enduring capability. 
In a transfer, one of the services may sustain the initiative through funding for current contingency operations. In comments on our prior report, JIEDDO stated that it would work with DOD to develop a more effective process to ensure that funds designated for sustainment costs are included in its base budget cycle. However, since that report, service officials have stated that JIEDDO’s process has not yet been improved and that JIEDDO’s transition timeline may not allow the services enough time to request and receive funding through DOD’s base budgeting process. As a result, DOD continues to transfer most initiatives to the services for funding as permanent programs, with service overseas contingency operations appropriations, rather than with service base budget funding. According to JIEDDO’s latest transition brief for fiscal year 2010, JIEDDO recommended the transfer of 19 initiatives totaling $233 million to the services for funding through overseas contingency operations appropriations and the transition of only 3 initiatives totaling $4.5 million into service base budget programs. Continuing to fund transferred initiatives with overseas contingency operations appropriations does not ensure funding availability for those initiatives in future years, since these appropriations are not necessarily renewed from one year to the next. In addition to the small number of transitions and transfers within DOD, the services often decide to defer indefinitely their assumption of funding responsibility for JIEDDO initiatives following JIEDDO’s intended 2-year transition or transfer point. According to the fiscal year 2011 JIEDDO transition list, the Army and Navy deferred or rejected the acceptance of 16 initiatives that JIEDDO had recommended for transition or transfer, totaling at least $16 million. Deferred or rejected initiatives are either sustained by JIEDDO indefinitely, transitioned or transferred during a future year, or terminated. 
When the services defer or reject the transition of initiatives, JIEDDO remains responsible for them beyond the intended 2-year transition or transfer point, a delay that could diminish its ability to fund new initiatives. Lastly, JIEDDO has delivered training aids to the Army without ensuring that the Army had the appropriate funds to sustain the equipment. As a result, Army officials have stated that they are unable to quickly reallocate funding from current programs to pay for these sustainment costs. For example, JIEDDO provided counter-IED training aids, such as surrogates for mine-resistant vehicles to support training at the Army’s combat training centers, without first coordinating with the Army’s Combat Training Center Directorate to plan for their future sustainment. Consequently, this directorate had not planned for the $12.7 million requirement to sustain the vehicle surrogates and other training equipment. As a result of unplanned sustainment costs such as these, the services could face unexpected, long-term sustainment requirements in the future. JIEDDO’s Initiative Transitions Are Hindered When Service Requirements Are Not Fully Considered JIEDDO’s initiative transitions are also hindered when service requirements are not fully considered during the development and integration of jointly-funded counter-IED initiatives. According to DOD’s Directive, JIEDDO is required to integrate jointly-funded counter-IED initiatives throughout DOD. However, service officials stated that transitioning JIEDDO-funded initiatives, such as counter-IED radio jamming systems, is made more difficult when service requirements are not fully considered throughout the systems’ evaluation process. In 2006, DOD established the Navy as single manager and executive agent for ground-based jamming systems for DOD. 
Under this arrangement, the Navy oversees several boards to review and evaluate jamming system proposals, including a program board at the general officer level and a technical acceptance board at the field officer level. Though the services participate on each of these boards, the counter-IED jamming program board approved, with JIEDDO funding, two ground-based, counter-IED jamming systems that did not fully meet the services’ needs. In the first example, CENTCOM, in response to an urgent operational needs statement originating from its area of operations, published a requirement in 2006 for a portable IED jamming system for use in theater. In 2007, JIEDDO funded and delivered to theater a near-term solution to meet this capability gap. However, Army officials stated that the fielded system was underutilized by troops in Iraq, who thought the system was too heavy to carry, especially given the weight of their body armor. Since then, the joint counter-IED radio jamming program board has devised a plan to field a newer portable jamming system called Counter Remote Control IED Electronic Warfare (CREW) 3.1. According to JIEDDO, CREW 3.1 systems were developed by a joint technical requirements board that aimed to balance specific service requirements for portable systems. While CENTCOM maintains that CREW 3.1 is a requirement in-theater, and revalidated the need in September 2009, officials from the Army and Marine Corps have both stated that they do not have a formal requirement for the system. Nevertheless, DOD plans to field the equipment to each of the services in response to CENTCOM’s stated operational need. It remains unclear, however, which DOD organizations will be required to pay for procurement and sustainment costs for the CREW 3.1, since DOD has yet to identify the source of final procurement funding. 
In a second example, Army officials stated that they were not involved to the fullest extent possible in the evaluation and improvement process for a JIEDDO-funded, vehicle-mounted jamming system, even though the Army was DOD’s primary user in terms of total number of systems fielded. The system, called the CREW Vehicle Receiver/Jammer (CVRJ), ultimately required at least 20 proposals for configuration changes to correct flaws found in its design after the contract was awarded. Two of the changes involved modifying the jammer so it could function properly at high temperatures. Another change was needed to prevent the jammer from interfering with vehicle global positioning systems. Army officials stated that had they had a more direct role on the Navy-led control board that managed configuration changes to the CVRJ, the system might have been more quickly integrated into the Army’s operations. As this transpired, the Army continued to use another jamming system, DUKE, as its principal counter-IED electronic warfare system. Not ensuring that service requirements are fully taken into account when evaluating and developing counter-IED systems creates the potential for fielding equipment that is inconsistent with service requirements. This could also delay the transition of JIEDDO-funded initiatives to the services following JIEDDO’s 2-year transition timeline. JIEDDO Lacks Clear Criteria for Defining What Counter-IED Training Initiatives It Will Fund JIEDDO devoted $454 million in fiscal year 2008 to support service counter-IED training requirements through such activities as constructing a network of realistic counter-IED training courses at 57 locations throughout the United States, Europe, and Korea. Although JIEDDO has supported service counter-IED training, its lack of clear criteria for the counter-IED training initiatives it will fund has affected its counter-IED training investment decisions. 
According to its directive, JIEDDO defines a counter-IED initiative as a materiel or nonmateriel solution that addresses joint IED defeat capability gaps, but the directive does not specifically lay out funding criteria for training initiatives. Since our last report, JIEDDO has attempted to clarify what types of counter-IED training it will fund in support of in-theater, urgent counter-IED requirements. In its comments to our previous report, JIEDDO stated that it will fund an urgent in-theater counter-IED requirement if it “enables training support, including training aids and exercises.” JIEDDO also stated in its comments that it will fund an urgent in-theater counter-IED requirement only if it has a primary counter-IED application. Beyond JIEDDO, CENTCOM officials have stated that they will process counter-IED capabilities only if they are primarily related to countering IEDs. Though JIEDDO has since published criteria for determining what joint, counter-IED, urgent requirements to fund, it has not developed similar criteria for the funding of joint training initiatives not based on urgent requirements. As a result, JIEDDO has funded training initiatives that may have primary uses other than defeating IEDs. For example, since fiscal year 2007, JIEDDO has spent $70.7 million on role players in an effort to simulate Iraqi social, political, and religious groups at DOD’s training centers. JIEDDO also spent $24.1 million on simulated villages at DOD’s training centers in an effort to make steel shipping containers resemble Iraqi buildings. According to Army officials, these role players and simulated villages funded by JIEDDO to support counter-IED training are also used in training not related to countering IEDs. Lastly, according to its 2008 annual report, JIEDDO used counter-IED funding to purchase authentic Iraqi furniture and other items to create a realistic environment for counter-IED search rehearsals. 
In March 2009, JIEDDO attempted to clarify its criteria for training initiatives not based on urgent requirements by requiring counter-IED training initiatives to be (1) counter-IED related, (2) joint in nature, (3) derived from an immediate need, and (4) unable to be funded by a service. As with JIEDDO’s urgent needs criteria for training, these guidelines could also be broadly interpreted, as demonstrated by the above examples. Without criteria specifying which counter-IED training initiatives it will fund, JIEDDO may diminish its ability to fund future initiatives that are more directly related to the counter-IED mission. DOD also could hinder coordination in managing its resources, as decision makers at both the joint and service level operate under unclear selection guidelines for which types of training initiatives should be funded and by whom. Conclusions JIEDDO and the services lack full visibility and coordination of the wide range of counter-IED measures throughout DOD, which presents difficulties for DOD in efficiently using its resources to defeat IEDs. While JIEDDO and the services have taken important steps to focus counter-IED efforts, DOD remains challenged in its effort to harness the full potential of its components towards an integrated effort to defeat IEDs. In addition, difficulties remain in maintaining visibility over all counter-IED activities throughout DOD, coordinating the transition of JIEDDO initiatives, and clearly defining the types of training initiatives it will fund. If these issues are not resolved, DOD’s various efforts to counter IEDs face the potential for duplication of effort, unaddressed capability gaps, integration issues, and inefficient use of resources in an already fiscally challenged environment. As a result, DOD may not be assured that it has retained the necessary capabilities to address the IED threat for the long term. 
Recommendations for Executive Action We are making five recommendations to address the issues raised in this report: To improve JIEDDO’s visibility over all counter-IED efforts, we recommend that the Secretary of Defense direct the military services to create their own comprehensive IED defeat initiative databases and work with JIEDDO to develop a DOD-wide database for all counter-IED initiatives. To further provide DOD visibility over all counter-IED efforts in cases where initiatives bypass JIEDDO’s rapid acquisition process, we recommend that the Secretary of Defense direct JIEDDO to develop a mechanism to notify the appropriate service counter-IED focal points of each initiative prior to its funding. To facilitate the transition of JIEDDO-funded initiatives, we recommend that the Secretary of Defense direct the military services to work with JIEDDO to develop a comprehensive plan to guide the transition of each JIEDDO-funded initiative, including expected costs, identified funding sources, and a timeline including milestones for inclusion into the DOD base budget cycle. To facilitate the transition of JIEDDO-funded initiatives, we recommend that the Secretary of Defense direct JIEDDO to coordinate with the services prior to funding an initiative to ensure that service requirements are fully taken into account when making counter-IED investment decisions. To better clarify what counter-IED training initiatives JIEDDO will fund, we recommend that the Secretary of Defense direct JIEDDO to evaluate counter-IED training initiatives using the same criteria it uses to evaluate theater-based joint counter-IED urgent requirements, and incorporate this direction into existing guidance. Agency Comments and Our Evaluation In written comments on a draft of this report, DOD fully agreed with three of our recommendations and partially agreed with two other recommendations. 
However, DOD expressed concerns that our report focuses on counter-IED initiative challenges from a service perspective rather than a combatant command urgency of need. While we recognize JIEDDO’s mission and contribution in supporting urgent warfighter needs, as DOD’s focal point for coordinating counter-IED efforts throughout the department, JIEDDO is tasked with the integration of all IED defeat solutions throughout DOD, which includes the integration of service requirements during the development of counter-IED initiatives. DOD also stated that our report focused on a handful of initiatives or efforts that encountered friction during either the development phase or the coordination process to transfer, transition, or terminate the program. While we recognize JIEDDO’s progress to successfully transition some initiatives to the services, the examples used in the report highlight the challenges noted in our work and identify areas for improvement. Furthermore, DOD generally agreed with our recommendations to address these challenges. In commenting on our recommendation for JIEDDO and the services to develop a DOD-wide database for all counter-IED initiatives, DOD concurred and noted that JIEDDO is supporting the Army Research Development and Engineering Command’s effort to establish a JIEDDO-hosted network solution that establishes a common collaboration tool to link these databases and provide comprehensive visibility across DOD for all counter-IED efforts. However, this initiative does not describe how the services will develop a comprehensive database for each of their own counter-IED efforts. While we recognize that this ongoing effort is a step in the right direction, until all of the services and other DOD components gain full awareness of their own individual counter-IED efforts and provide this input into a central database, any effort to establish a DOD-wide database of all counter-IED initiatives will be incomplete. 
In commenting on our recommendation for JIEDDO to develop a mechanism to notify the appropriate service counter-IED focal points of initiatives that bypass JIEDDO’s acquisition process prior to their funding, DOD concurred and stated that JIEDDO will take action to notify stakeholders of all JIEDDO efforts or initiatives, whether or not initiatives are required to go through the Joint Improvised Explosive Device Defeat Capability Approval and Acquisition Management Process (JCAAMP). JIEDDO will also inform stakeholders and elicit their opinions on JIEDDO developmental efforts in order to decrease duplication of efforts and allow services greater lead time to review these efforts. DOD noted that this process will be incorporated in the pending update of JCAAMP. We agree that if implemented, these actions would satisfy our recommendation. In commenting on our recommendation for JIEDDO to develop a comprehensive plan to guide the transition of each JIEDDO-funded initiative, including expected costs, identified funding sources, and a timeline including milestones for inclusion into the DOD base budget cycle, DOD concurred and noted that the Navy and Marine Corps are working on efforts to improve the transition of JIEDDO-funded initiatives. DOD also stated that it has developed recommended changes for DOD Directive 2000.19E that will address coordinating the transition of counter-IED solutions. DOD noted that these changes will be staffed to DOD and the services during the periodic update of DOD Directive 2000.19E. We agree that if implemented, these actions would satisfy our recommendation. In commenting on our recommendation for JIEDDO to coordinate with the services prior to funding an initiative to ensure that service requirements are fully taken into account when making counter-IED investment decisions, DOD partially concurred. DOD noted that JIEDDO responds to in-theater requirements that have joint applications but may not have service-specific applications. 
DOD also stated that fully vetted coordination with the services prior to funding an effort or initiative could delay the fielding of materiel that would save lives. DOD therefore suggested that this recommendation be incorporated with our second recommendation to notify the services of all JIEDDO-funded initiatives or the language of this recommendation be changed to reflect DOD’s position. While we recognize the need to respond rapidly to support warfighter needs and that our previous recommendations will help gain awareness of JIEDDO-funded initiatives as they are being developed, we continue to support our recommendation and reiterate the need for the integration of service requirements and full coordination prior to funding an initiative to ensure that these efforts are fully vetted throughout DOD before significant resources are committed. In commenting on our recommendation for JIEDDO to evaluate counter-IED training initiatives using the same criteria it uses to evaluate theater-based joint counter-IED urgent requirements, and incorporate this direction into existing guidance, DOD concurred with the intent but not the language of this recommendation. DOD noted that the JCAAMP provides the mechanism to identify, validate, and provide solutions for combatant commanders and service training counter-IED capability gaps. DOD also noted that it is currently developing a new DOD instruction on counter-IED training guidance. According to DOD’s comments, the instruction directs DOD components to implement counter-IED, mission-essential tasks across all levels of war into their training regimens at the individual, collective, unit, and staff levels, and sustain relevancy through interface with JIEDDO. 
While we recognize these actions may be a positive step towards improving coordination of training initiatives between JIEDDO and the services, neither the JCAAMP nor the instruction cited in DOD’s response to this report contain the criteria by which JIEDDO will fund counter-IED training initiatives. We, therefore, continue to support our recommendation and reiterate the need for establishing criteria specifying which counter-IED training initiatives JIEDDO will fund. We are sending copies of this report to the appropriate congressional committees and the Secretary of Defense. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8365 or solisw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Appendix I: Scope and Methodology To assess the extent to which capability gaps were initially identified in DOD’s effort to defeat IEDs and how these gaps and other factors led to the development of JIEDDO, we spoke with current and former senior officials involved in the evolution of JIEDDO and examined existing documentation. To assess initial DOD efforts to defeat IEDs and the early evolution of JIEDDO, we met with officials from JIEDDO, the Army Asymmetric Warfare Office, the Marine Corps Warfighting Laboratory, the Office of the Secretary of Defense, and other current and former DOD officials involved in the establishment of JIEDDO. We also examined documentation including DOD Directive 2000.19E, which established JIEDDO, and documentation and briefings relating to JIEDDO’s evolution. 
To assess DOD’s efforts to implement a process for establishing new joint organizations, we met with officials from the Office of the Secretary of Defense, Office of the Director of Administration and Management, to examine documentation and conduct interviews on the implementation of the Joint Task Assignment Process and its relevance to JIEDDO. To assess the extent to which JIEDDO has maintained visibility over all counter-IED efforts, we met with officials from JIEDDO, the Army Asymmetric Warfare Office, the Marine Corps Warfighting Laboratory, the Army’s Research Development and Engineering Command, and CENTCOM to discuss current efforts to gain visibility over all of DOD’s counter-IED efforts. We also examined documentation including DOD Directive 2000.19E and JIEDDO Instruction 5000.01, which established JIEDDO’s rapid acquisition process, as well as analyzed JIEDDO, service, and other DOD counter-IED databases. To assess the extent to which JIEDDO has coordinated the transition of JIEDDO-funded initiatives to the military services, we met with officials from JIEDDO, the Army’s Combined Arms Center, the Army Asymmetric Warfare Office, the Marine Corps Warfighting Laboratory, and the Navy’s CREW Office. We also examined documentation including JIEDDO Instruction 5000.01, JIEDDO’s annual reports, and DOD Directive 5101.14, which designated the Secretary of the Navy as the Executive Agent for CREW and authorized the Secretary of the Navy to designate a Single Manager for CREW. To assess the extent to which JIEDDO has developed criteria for the counter-IED training initiatives it will fund, we met with officials from organizations including JIEDDO, the JIEDDO Joint Center of Excellence, and CENTCOM. We also examined documentation including DOD Directive 2000.19E and other relevant documents and briefings, such as published criteria for accepting counter-IED Joint Operational Urgent Needs from JIEDDO, the services, and other DOD entities. 
We conducted this performance audit from June 2008 through August 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of Defense Appendix III: GAO Contact and Staff Acknowledgments In addition to the contact named above, the following individuals made contributions to this report: Cary Russell, Assistant Director; Grace Coleman; Kevin Craw; Will Horton; Ronald La Due Lake; James Lloyd; Gregory Marchand; Lonnie McAllister; Jason Pogacnik; Michael Shaughnessy; and Yong Song.
Prior to the Joint Improvised Explosive Device Defeat Organization's (JIEDDO) establishment in 2006, no single entity was responsible for coordinating the Department of Defense's (DOD) counter improvised explosive device (IED) efforts. JIEDDO was established to coordinate and focus all counter-IED efforts, including ongoing research and development, throughout DOD. This report, which is one in a series of congressionally mandated GAO reports related to JIEDDO's management and operations, assesses the extent to which 1) capability gaps were initially identified in DOD's effort to defeat IEDs and how these gaps and other factors led to the development of JIEDDO, 2) JIEDDO has maintained visibility over all counter-IED efforts, 3) JIEDDO has coordinated the transition of JIEDDO-funded initiatives to the military services, and 4) JIEDDO has developed criteria for the counter-IED training initiatives it will fund. To address these objectives, GAO reviewed and analyzed relevant documents and met with DOD and service officials. With the escalation of the IED threat in Iraq, DOD identified several counter-IED capability gaps that included shortcomings in the areas of counter-IED technologies, qualified personnel with expertise in counter-IED tactics, training, dedicated funding, and expedited acquisition processes. For example, prior to JIEDDO's establishment, many different DOD entities focused on counter-IED issues, but coordination among these various efforts was informal and ad hoc. DOD's efforts to focus on addressing these gaps culminated in the creation of JIEDDO, but its creation was done in the absence of DOD having formal guidance for establishing joint organizations. Further, DOD did not systematically evaluate all preexisting counter-IED resources to determine whether other entities were engaged in similar efforts. JIEDDO and the services lack full visibility over counter-IED initiatives throughout DOD. 
First, JIEDDO and the services lack a comprehensive database of all existing counter-IED initiatives, limiting their visibility over counter-IED efforts across DOD. Although JIEDDO is currently developing a management system that will track initiatives as they move through JIEDDO's acquisition process, the system will only track JIEDDO-funded initiatives--not those being independently developed and procured by the services and other DOD components. Second, the services lack full visibility over those JIEDDO-funded initiatives that bypass JIEDDO's acquisition process. With limited visibility, both JIEDDO and the services are at risk of duplicating efforts. JIEDDO faces difficulties with transitioning Joint IED defeat initiatives to the military services, in part because JIEDDO and the services have difficulty resolving the gap between JIEDDO's transition timeline and DOD's base budget cycle. As a result, the services are mainly funding initiatives with funding for overseas contingency operations rather than their base budgets. Continuing to fund transferred initiatives with overseas contingency operations appropriations does not ensure funding availability for those initiatives in future years since these appropriations are not necessarily renewed from one year to the next. This transition is also hindered when service requirements are not fully considered during the development of joint-funded counter-IED initiatives, as evidenced by two counter-IED jamming systems. As a result, JIEDDO may be investing in counter-IED solutions that do not fully meet existing service requirements. JIEDDO's lack of clear criteria for the counter-IED training initiatives it will fund has affected its counter-IED training investment decisions. As a result, JIEDDO has funded training initiatives that may have primary uses other than defeating IEDs. 
In March 2009, JIEDDO attempted to update its criteria for joint training initiatives by listing new requirements; however, these guidelines also could be broadly interpreted. Without specific criteria for counter-IED training initiatives, DOD may find that it lacks funding for future initiatives more directly related to the counter-IED mission.